[jira] [Commented] (FLINK-9294) Improve type inference for UDFs with composite parameter or result type

2018-07-05 Thread Rong Rong (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534424#comment-16534424
 ] 

Rong Rong commented on FLINK-9294:
--

Thanks [~twalthr] for the heads up. Yes, I think both type extractors extract it 
as GenericTypeInfo. But I think we can use a more intelligent method to match 
functions in `UserDefinedFunctionUtils`.

However, I dug a little deeper into this. It seems the actual problem was a 
mismatch of Map / Array types between Scala and Java. 

For a simple function:

{code:java}
public static class JavaFunc5 extends ScalarFunction {
    public String[] eval(Map<String, String> map) {
        return map.keySet().toArray(new String[0]);
    }
}
{code}

The following SQL can find a function match: `SELECT fun(a) FROM table` when 
the queried field is a java.util.Map, while it will fail for a Scala Map. 
A similar issue occurs for arrays as well. Will follow up on that first. 


> Improve type inference for UDFs with composite parameter or result type 
> 
>
> Key: FLINK-9294
> URL: https://issues.apache.org/jira/browse/FLINK-9294
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: Rong Rong
>Assignee: Rong Rong
>Priority: Major
>
> Most of the UDF function signatures that include composite types such as 
> *{{MAP}}*, *{{ARRAY}}*, etc. require the user to override the 
> *{{getParameterTypes}}* or *{{getResultType}}* method explicitly. 
> It should be possible to resolve the composite type based on the function 
> signature, such as:
> {code:java}
> public String[] eval(Map<String, String> mapArg) { /* ...  */ }
> {code}
> This should either:
> 1. Automatically resolve that:
> - *{{ObjectArrayTypeInfo<String>}}* is the result type.
> - *{{MapTypeInfo<String, String>}}* is the parameter type.
> 2. Improve the function mapping to find and locate functions with such signatures.
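> For illustration, the current manual workaround looks roughly like this (a 
> sketch against the 1.5-era {{ScalarFunction}} API; the class name and the 
> generic parameters are illustrative assumptions):
> {code:java}
> import java.util.Map;
> import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
> import org.apache.flink.api.common.typeinfo.TypeInformation;
> import org.apache.flink.api.java.typeutils.MapTypeInfo;
> import org.apache.flink.api.java.typeutils.ObjectArrayTypeInfo;
> import org.apache.flink.table.functions.ScalarFunction;
>
> public class MapKeysFunction extends ScalarFunction {
>     public String[] eval(Map<String, String> map) {
>         return map.keySet().toArray(new String[0]);
>     }
>
>     // Explicit overrides that this ticket aims to make unnecessary.
>     @Override
>     public TypeInformation<?> getResultType(Class<?>[] signature) {
>         return ObjectArrayTypeInfo.getInfoFor(BasicTypeInfo.STRING_TYPE_INFO);
>     }
>
>     @Override
>     public TypeInformation<?>[] getParameterTypes(Class<?>[] signature) {
>         return new TypeInformation<?>[] {
>             new MapTypeInfo<>(BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO)
>         };
>     }
> }
> {code}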



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9770) UI jar list broken

2018-07-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534423#comment-16534423
 ] 

ASF GitHub Bot commented on FLINK-9770:
---

Github user medcv commented on the issue:

https://github.com/apache/flink/pull/6269
  
+1



> UI jar list broken
> --
>
> Key: FLINK-9770
> URL: https://issues.apache.org/jira/browse/FLINK-9770
> Project: Flink
>  Issue Type: Bug
>  Components: Job-Submission, REST, Webfrontend
>Affects Versions: 1.5.1, 1.6.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.5.1, 1.6.0
>
>
> The jar listing in the UI is broken.
> The {{JarListHandler}} expects a specific naming scheme (_) 
> for uploaded jars, which the {{FileUploadHandler}} previously adhered to.
> When the file uploads were generalized this naming scheme was removed from 
> the {{FileUploadHandler}}, but neither was the {{JarListHandler}} adjusted 
> nor was this behavior re-introduced in the {{JarUploadHandler}}.
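> A minimal sketch of the missing piece (the exact naming scheme is elided 
> above; the UUID prefix below is an assumption for illustration only):
> {code:java}
> import java.io.IOException;
> import java.nio.file.Files;
> import java.nio.file.Path;
> import java.util.UUID;
>
> class JarNaming {
>     // Hypothetical helper: store an uploaded jar under a name that a
>     // listing handler can later split back into an ID and the original name.
>     static Path storeUploadedJar(Path uploadedFile, Path jarDir) throws IOException {
>         Path target = jarDir.resolve(UUID.randomUUID() + "_" + uploadedFile.getFileName());
>         return Files.move(uploadedFile, target);
>     }
> }
> {code}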



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink issue #6269: [FLINK-9770][rest] Fix jar listing

2018-07-05 Thread medcv
Github user medcv commented on the issue:

https://github.com/apache/flink/pull/6269
  
+1



---


[jira] [Commented] (FLINK-5750) Incorrect translation of n-ary Union

2018-07-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534417#comment-16534417
 ] 

ASF GitHub Bot commented on FLINK-5750:
---

Github user hequn8128 commented on the issue:

https://github.com/apache/flink/pull/6267
  
@AlexanderKoltsov Hi, thanks for looking into this problem. The PR looks 
good. I agree with @fhueske that `DataStreamUnion` should be fixed in this PR. 
Furthermore, I found that `DataSetMinus` and `DataSetIntersect` have the same 
problem. It would be great if you could open a new JIRA to track it. 


> Incorrect translation of n-ary Union
> 
>
> Key: FLINK-5750
> URL: https://issues.apache.org/jira/browse/FLINK-5750
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.2.0, 1.3.4, 1.5.0, 1.4.2, 1.6.0
>Reporter: Anton Mushin
>Assignee: Alexander Koltsov
>Priority: Critical
>  Labels: pull-request-available
>
> Calcite's union operator supports more than two input relations. However, 
> Flink's translation rules only consider the first two relations because we 
> assumed that Calcite's union is binary. 
> This problem exists for batch and streaming queries.
> It seems that Calcite only generates non-binary Unions in rare cases 
> ({{(SELECT * FROM t) UNION ALL (SELECT * FROM t) UNION ALL (SELECT * FROM 
> t)}} results in two binary union operators) but the problem definitely needs 
> to be fixed.
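> One possible fix direction is to fold all inputs of the union instead of 
> taking only the first two. A minimal sketch of that idea (class and method 
> names are illustrative, not the actual patch):
> {code:java}
> import java.util.List;
> import org.apache.flink.api.java.DataSet;
> import org.apache.flink.types.Row;
>
> class UnionTranslationSketch {
>     // Combine every input of the n-ary union pairwise, rather than
>     // assuming exactly two inputs.
>     static DataSet<Row> translateUnionAll(List<DataSet<Row>> inputs) {
>         DataSet<Row> result = inputs.get(0);
>         for (int i = 1; i < inputs.size(); i++) {
>             result = result.union(inputs.get(i));
>         }
>         return result;
>     }
> }
> {code}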
> The following query can be used to validate the problem. 
> {code:java}
> @Test
> public void testValuesWithCast() throws Exception {
>     ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
>     BatchTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env, config());
>
>     String sqlQuery = "VALUES (1, cast(1 as BIGINT))," +
>         "(2, cast(2 as BIGINT))," +
>         "(3, cast(3 as BIGINT))";
>     String sqlQuery2 = "VALUES (1, 1)," +
>         "(2, 2)," +
>         "(3, 3)";
>
>     Table result = tableEnv.sql(sqlQuery);
>     DataSet<Row> resultSet = tableEnv.toDataSet(result, Row.class);
>     List<Row> results = resultSet.collect();
>
>     Table result2 = tableEnv.sql(sqlQuery2);
>     DataSet<Row> resultSet2 = tableEnv.toDataSet(result2, Row.class);
>     List<Row> results2 = resultSet2.collect();
>
>     String expected = "1,1\n2,2\n3,3";
>     compareResultAsText(results2, expected);
>     compareResultAsText(results, expected);
> }
> {code}
> Actual result (AR) for the {{results}} variable:
> {noformat}
> java.lang.AssertionError: Different elements in arrays: expected 3 elements 
> and received 2
>  expected: [1,1, 2,2, 3,3]
>  received: [1,1, 2,2] 
> Expected :3
> Actual   :2
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink issue #6267: [FLINK-5750] Incorrect parse of brackets inside VALUES su...

2018-07-05 Thread hequn8128
Github user hequn8128 commented on the issue:

https://github.com/apache/flink/pull/6267
  
@AlexanderKoltsov Hi, thanks for looking into this problem. The PR looks 
good. I agree with @fhueske that `DataStreamUnion` should be fixed in this PR. 
Furthermore, I found that `DataSetMinus` and `DataSetIntersect` have the same 
problem. It would be great if you could open a new JIRA to track it. 


---


[jira] [Created] (FLINK-9771) "Show Plan" option under Submit New Job in WebUI not working

2018-07-05 Thread Yazdan Shirvany (JIRA)
Yazdan Shirvany created FLINK-9771:
--

 Summary:  "Show Plan" option under Submit New Job in WebUI not 
working 
 Key: FLINK-9771
 URL: https://issues.apache.org/jira/browse/FLINK-9771
 Project: Flink
  Issue Type: Bug
  Components: Job-Submission, Webfrontend
Affects Versions: 1.5.0, 1.5.1, 1.6.0
Reporter: Yazdan Shirvany


The {{Show Plan}} button under {{Submit new job}} in the WebUI is not working.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9769) FileUploads may be shared across requests

2018-07-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534410#comment-16534410
 ] 

ASF GitHub Bot commented on FLINK-9769:
---

Github user medcv commented on the issue:

https://github.com/apache/flink/pull/6270
  
+1
I rebuilt 1.5.1 with these changes, and uploading a jar file from the WebUI 
now works. 


> FileUploads may be shared across requests
> -
>
> Key: FLINK-9769
> URL: https://issues.apache.org/jira/browse/FLINK-9769
> Project: Flink
>  Issue Type: Bug
>  Components: Job-Submission, REST, Webfrontend
>Affects Versions: 1.5.1, 1.6.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.5.1, 1.6.0
>
>
> Files uploaded by the {{FileUploadHandler}} are passed on to subsequent 
> handlers by storing them in a channel attribute.
> The files are retrieved from said attribute by the {{AbstractHandler}}.
> Apparently, since the attribute isn't set to null when retrieving the 
> contained value, it can happen that other handlers still see the value if 
> the channel is shared across several requests. (This behavior is surprising, 
> as I thought that each request has its own channel.)
> However, the retrieved files will no longer exist for any handler but the 
> original recipient, because it ensures that the files are cleaned up after 
> processing.
> Note that this issue existed for quite a while; it just didn't surface, as 
> only a single handler ever accessed these attributes.
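> A minimal sketch of one possible fix direction (the attribute key name and 
> the helper are illustrative assumptions, not the actual patch):
> {code:java}
> import org.apache.flink.runtime.rest.handler.FileUploads;
> import org.apache.flink.shaded.netty4.io.netty.channel.Channel;
> import org.apache.flink.shaded.netty4.io.netty.util.AttributeKey;
>
> class UploadedFilesAccess {
>     static final AttributeKey<FileUploads> UPLOADED_FILES = AttributeKey.valueOf("uploadedFiles");
>
>     // Retrieve the uploaded files and clear the attribute in one step, so a
>     // later request on the same channel can no longer observe them.
>     static FileUploads takeUploadedFiles(Channel channel) {
>         return channel.attr(UPLOADED_FILES).getAndSet(null);
>     }
> }
> {code}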
> {code}
> 2018-07-05 21:55:09,297 ERROR org.apache.flink.runtime.rest.handler.job.JobsOverviewHandler - Request processing failed.
> java.nio.file.NoSuchFileException: C:\Users\Zento\AppData\Local\Temp\flink-web-2c7cae9f-e2d0-4a0e-8696-ef6894238a2e\flink-web-upload\b002df81-2d6f-4727-ae6e-aaa20be22b3b
>   at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:79)
>   at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
>   at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
>   at sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:53)
>   at sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:38)
>   at sun.nio.fs.WindowsFileSystemProvider.readAttributes(WindowsFileSystemProvider.java:193)
>   at java.nio.file.Files.readAttributes(Files.java:1737)
>   at java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:219)
>   at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
>   at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:322)
>   at java.nio.file.Files.walkFileTree(Files.java:2662)
>   at java.nio.file.Files.walkFileTree(Files.java:2742)
>   at org.apache.flink.runtime.rest.handler.FileUploads.getUploadedFiles(FileUploads.java:68)
>   at org.apache.flink.runtime.rest.AbstractHandler.respondAsLeader(AbstractHandler.java:107)
>   at org.apache.flink.runtime.rest.handler.RedirectHandler.lambda$null$0(RedirectHandler.java:139)
>   at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
>   at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
>   at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>   at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
>   at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
>   at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
>   at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
>   at java.lang.Thread.run(Thread.java:745)
> 2018-07-05 21:55:09,485 ERROR org.apache.flink.runtime.webmonitor.handlers.JarListHandler - Request processing failed.
> java.nio.file.NoSuchFileException: C:\Users\Zento\AppData\Local\Temp\flink-web-2c7cae9f-e2d0-4a0e-8696-ef6894238a2e\flink-web-upload\b002df81-2d6f-4727-ae6e-aaa20be22b3b
>   at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:79)
>   at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
>   at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
>   at sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:53)
>   at 

[GitHub] flink issue #6270: [FLINK-9769][rest] Clear FileUpload attribute after acces...

2018-07-05 Thread medcv
Github user medcv commented on the issue:

https://github.com/apache/flink/pull/6270
  
+1
I rebuilt 1.5.1 with these changes, and uploading a jar file from the WebUI 
now works. 


---


[jira] [Commented] (FLINK-7151) FLINK SQL support create temporary function and table

2018-07-05 Thread Chunhui Shi (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534400#comment-16534400
 ] 

Chunhui Shi commented on FLINK-7151:


Hi [~suez1224], do you have a Jira for your DDL task?

> FLINK SQL support create temporary function and table
> -
>
> Key: FLINK-7151
> URL: https://issues.apache.org/jira/browse/FLINK-7151
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: yuemeng
>Assignee: Shuyi Chen
>Priority: Major
>
> Based on CREATE TEMPORARY FUNCTION and table support, we can register a UDF, 
> UDAF, or UDTF using SQL:
> {code}
> CREATE TEMPORARY function 'TOPK' AS 
> 'com..aggregate.udaf.distinctUdaf.topk.ITopKUDAF';
> INSERT INTO db_sink SELECT id, TOPK(price, 5, 'DESC') FROM kafka_source GROUP 
> BY id;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-8094) Support other types for ExistingField rowtime extractor

2018-07-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534355#comment-16534355
 ] 

ASF GitHub Bot commented on FLINK-8094:
---

Github user HeartSaVioR commented on the issue:

https://github.com/apache/flink/pull/6253
  
@xccui Yes. Without defining an expression with a custom reserved keyword, we 
may need to leverage a non-UDF function, which means we would need to add the 
function to the default provided function list of either Flink or Calcite.


> Support other types for ExistingField rowtime extractor
> ---
>
> Key: FLINK-8094
> URL: https://issues.apache.org/jira/browse/FLINK-8094
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Xingcan Cui
>Assignee: Jungtaek Lim
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.0
>
>
> Currently, the {{ExistingField}} rowtime extractor only supports {{Long}} and 
> {{Timestamp}} fields. To enable other data types (e.g., {{String}}), we can 
> provide some system extraction functions and allow users to pass some 
> parameters via the constructor of {{ExistingField}}. There's [a simple 
> demo|https://github.com/xccui/flink/commit/afcc5f1a0ad92db08294199e61be5df72c1514f8]
>  which enables the {{String}} type rowtime by adding a UDF {{str2EventTime}}.
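> For context, this is roughly how {{ExistingField}} is wired up today (a 
> sketch against the 1.5-era table sources API; the field and attribute names 
> are illustrative):
> {code:java}
> import org.apache.flink.table.sources.RowtimeAttributeDescriptor;
> import org.apache.flink.table.sources.tsextractors.ExistingField;
> import org.apache.flink.table.sources.wmstrategies.AscendingTimestamps;
>
> class RowtimeSetupSketch {
>     RowtimeAttributeDescriptor eventTimeDescriptor() {
>         return new RowtimeAttributeDescriptor(
>             "eventTime",
>             new ExistingField("ts"),   // "ts" currently must be a Long or Timestamp field
>             new AscendingTimestamps());
>     }
> }
> {code}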



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink issue #6253: [FLINK-8094][Table API & SQL] Support other types for Exi...

2018-07-05 Thread HeartSaVioR
Github user HeartSaVioR commented on the issue:

https://github.com/apache/flink/pull/6253
  
@xccui Yes. Without defining an expression with a custom reserved keyword, we 
may need to leverage a non-UDF function, which means we would need to add the 
function to the default provided function list of either Flink or Calcite.


---


[jira] [Commented] (FLINK-8094) Support other types for ExistingField rowtime extractor

2018-07-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534352#comment-16534352
 ] 

ASF GitHub Bot commented on FLINK-8094:
---

Github user xccui commented on the issue:

https://github.com/apache/flink/pull/6253
  
Hi @HeartSaVioR, really sorry for the late reply. The problem that has 
always confused me is how to configure the date format. Anyway, supporting 
the standard ISO format is a great first step. Thanks for your efforts, and 
thanks @fhueske for reviewing and merging this! 


> Support other types for ExistingField rowtime extractor
> ---
>
> Key: FLINK-8094
> URL: https://issues.apache.org/jira/browse/FLINK-8094
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Xingcan Cui
>Assignee: Jungtaek Lim
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.0
>
>
> Currently, the {{ExistingField}} rowtime extractor only supports {{Long}} and 
> {{Timestamp}} fields. To enable other data types (e.g., {{String}}), we can 
> provide some system extraction functions and allow users to pass some 
> parameters via the constructor of {{ExistingField}}. There's [a simple 
> demo|https://github.com/xccui/flink/commit/afcc5f1a0ad92db08294199e61be5df72c1514f8]
>  which enables the {{String}} type rowtime by adding a UDF {{str2EventTime}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink issue #6253: [FLINK-8094][Table API & SQL] Support other types for Exi...

2018-07-05 Thread xccui
Github user xccui commented on the issue:

https://github.com/apache/flink/pull/6253
  
Hi @HeartSaVioR, really sorry for the late reply. The problem that has 
always confused me is how to configure the date format. Anyway, supporting 
the standard ISO format is a great first step. Thanks for your efforts, and 
thanks @fhueske for reviewing and merging this! 


---


[GitHub] flink pull request #6201: [FLINK-8866][Table API & SQL] Add support for unif...

2018-07-05 Thread walterddr
Github user walterddr commented on a diff in the pull request:

https://github.com/apache/flink/pull/6201#discussion_r200523824
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/schema/TableSourceSinkTable.scala
 ---
@@ -0,0 +1,43 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.plan.schema
+
+import org.apache.calcite.rel.`type`.{RelDataType, RelDataTypeFactory}
+import org.apache.calcite.schema.Statistic
+import org.apache.calcite.schema.impl.AbstractTable
+
+class TableSourceSinkTable[T1, T2](val tableSourceTableOpt: Option[TableSourceTable[T1]],
--- End diff --

Huge +1. My understanding is that this will be the overall class holding a 
table source, a sink, or both. `TableSourceSinkTable` seems redundant as a name. 


---


[jira] [Commented] (FLINK-8866) Create unified interfaces to configure and instantiate TableSinks

2018-07-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534329#comment-16534329
 ] 

ASF GitHub Bot commented on FLINK-8866:
---

Github user walterddr commented on a diff in the pull request:

https://github.com/apache/flink/pull/6201#discussion_r200521222
  
--- Diff: 
flink-libraries/flink-sql-client/src/main/java/org/apache/flink/table/client/config/Environment.java
 ---
@@ -56,23 +58,44 @@ public Environment() {
		return tables;
	}

+	private static TableDescriptor create(String name, Map<String, Object> config) {
+		if (!config.containsKey(TableDescriptorValidator.TABLE_TYPE())) {
+			throw new SqlClientException("The 'type' attribute of a table is missing.");
+		}
+		final String tableType = (String) config.get(TableDescriptorValidator.TABLE_TYPE());
+		if (tableType.equals(TableDescriptorValidator.TABLE_TYPE_VALUE_SOURCE())) {
+			return new Source(name, ConfigUtil.normalizeYaml(config));
+		} else if (tableType.equals(TableDescriptorValidator.TABLE_TYPE_VALUE_SINK())) {
+			return new Sink(name, ConfigUtil.normalizeYaml(config));
+		} else if (tableType.equals(TableDescriptorValidator.TABLE_TYPE_VALUE_SOURCE_SINK())) {
+			return new SourceSink(name, ConfigUtil.normalizeYaml(config));
+		}
+		return null;
+	}
+
	public void setTables(List<Map<String, Object>> tables) {
		this.tables = new HashMap<>(tables.size());
		tables.forEach(config -> {
-			if (!config.containsKey(TableDescriptorValidator.TABLE_TYPE())) {
-				throw new SqlClientException("The 'type' attribute of a table is missing.");
+			if (!config.containsKey(NAME)) {
+				throw new SqlClientException("The 'name' attribute of a table is missing.");
			}
-			if (config.get(TableDescriptorValidator.TABLE_TYPE()).equals(TableDescriptorValidator.TABLE_TYPE_VALUE_SOURCE())) {
-				config.remove(TableDescriptorValidator.TABLE_TYPE());
-				final Source s = Source.create(config);
-				if (this.tables.containsKey(s.getName())) {
-					throw new SqlClientException("Duplicate source name '" + s + "'.");
-				}
-				this.tables.put(s.getName(), s);
-			} else {
+			final Object name = config.get(NAME);
+			if (name == null || !(name instanceof String) || ((String) name).length() <= 0) {
+				throw new SqlClientException("Invalid table name '" + name + "'.");
+			}
+			final String tableName = (String) name;
+			final Map<String, Object> properties = new HashMap<>(config);
+			properties.remove(NAME);
+
+			TableDescriptor tableDescriptor = create(tableName, properties);
+			if (null == tableDescriptor) {
				throw new SqlClientException(
-					"Invalid table 'type' attribute value, only 'source' is supported");
+					"Invalid table 'type' attribute value, only 'source' or 'sink' is supported");
+			}
+			if (this.tables.containsKey(tableName)) {
+				throw new SqlClientException("Duplicate table name '" + tableName + "'.");
--- End diff --

If only `"source"` and `"sink"` are allowed, should we allow the same name 
with different types, e.g. `{"name": "t1", "type": "source"}` and `{"name": "t1", 
"type": "sink"}` co-existing? This is actually following up on the previous 
comment. I think we just need one; either should work.


> Create unified interfaces to configure and instantiate TableSinks
> 
>
> Key: FLINK-8866
> URL: https://issues.apache.org/jira/browse/FLINK-8866
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Shuyi Chen
>Priority: Major
>  Labels: pull-request-available
>
> Similar to the efforts done in FLINK-8240. We need unified ways to configure 
> and instantiate TableSinks. Among other applications, this is necessary in 
> order to declare table sinks in an environment file of the SQL client, so 
> that the sink can be used for {{INSERT INTO}} statements.
> Below are 

[jira] [Commented] (FLINK-8866) Create unified interfaces to configure and instantiate TableSinks

2018-07-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534332#comment-16534332
 ] 

ASF GitHub Bot commented on FLINK-8866:
---

Github user walterddr commented on a diff in the pull request:

https://github.com/apache/flink/pull/6201#discussion_r200524249
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/descriptors/TableSinkDescriptor.scala
 ---
@@ -0,0 +1,30 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.descriptors
+
+/**
+  * Common class for all descriptors describing a table sink.
+  */
+abstract class TableSinkDescriptor extends TableDescriptor {
+  override private[flink] def addProperties(properties: DescriptorProperties): Unit = {
--- End diff --

+1 Should be able to unify


> Create unified interfaces to configure and instantiate TableSinks
> 
>
> Key: FLINK-8866
> URL: https://issues.apache.org/jira/browse/FLINK-8866
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Shuyi Chen
>Priority: Major
>  Labels: pull-request-available
>
> Similar to the efforts done in FLINK-8240. We need unified ways to configure 
> and instantiate TableSinks. Among other applications, this is necessary in 
> order to declare table sinks in an environment file of the SQL client, so 
> that the sink can be used for {{INSERT INTO}} statements.
> Below are a few major changes in mind. 
> 1) Add TableSinkFactory/TableSinkFactoryService similar to 
> TableSourceFactory/TableSourceFactoryService
> 2) Add a common property called "type" with values (source, sink and both) 
> for both TableSource and TableSink.
> 3) in yaml file, replace "sources" with "tables", and use tableType to 
> identify whether it's source or sink.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-8866) Create unified interfaces to configure and instantiate TableSinks

2018-07-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534334#comment-16534334
 ] 

ASF GitHub Bot commented on FLINK-8866:
---

Github user walterddr commented on a diff in the pull request:

https://github.com/apache/flink/pull/6201#discussion_r200524110
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/connector/TableConnectorFactoryService.scala
 ---
@@ -16,57 +16,57 @@
  * limitations under the License.
  */
 
-package org.apache.flink.table.sources
+package org.apache.flink.table.connector
 
 import java.util.{ServiceConfigurationError, ServiceLoader}
 
-import org.apache.flink.table.api.{AmbiguousTableSourceException, NoMatchingTableSourceException, TableException, ValidationException}
-import org.apache.flink.table.descriptors.ConnectorDescriptorValidator.CONNECTOR_PROPERTY_VERSION
-import org.apache.flink.table.descriptors.FormatDescriptorValidator.FORMAT_PROPERTY_VERSION
-import org.apache.flink.table.descriptors.MetadataValidator.METADATA_PROPERTY_VERSION
-import org.apache.flink.table.descriptors.StatisticsValidator.STATISTICS_PROPERTY_VERSION
-import org.apache.flink.table.descriptors._
+import org.apache.flink.table.api._
+import org.apache.flink.table.descriptors.ConnectorDescriptorValidator._
+import org.apache.flink.table.descriptors.FormatDescriptorValidator._
+import org.apache.flink.table.descriptors.MetadataValidator._
+import org.apache.flink.table.descriptors.StatisticsValidator._
+import org.apache.flink.table.descriptors.{DescriptorProperties, TableDescriptor, TableDescriptorValidator}
+import org.apache.flink.table.sinks.TableSink
+import org.apache.flink.table.sources.TableSource
 import org.apache.flink.table.util.Logging

-import scala.collection.JavaConverters._
-import scala.collection.mutable
+import _root_.scala.collection.JavaConverters._
+import _root_.scala.collection.mutable

 /**
-  * Service provider interface for finding suitable table source factories for the given properties.
+  * Unified interface to create TableConnectors, e.g. [[org.apache.flink.table.sources.TableSource]]
+  * and [[org.apache.flink.table.sinks.TableSink]].
   */
-object TableSourceFactoryService extends Logging {
+class TableConnectorFactoryService[T] extends Logging {
--- End diff --

+1 


> Create unified interfaces to configure and instantiate TableSinks
> 
>
> Key: FLINK-8866
> URL: https://issues.apache.org/jira/browse/FLINK-8866
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Shuyi Chen
>Priority: Major
>  Labels: pull-request-available
>
> Similar to the efforts done in FLINK-8240. We need unified ways to configure 
> and instantiate TableSinks. Among other applications, this is necessary in 
> order to declare table sinks in an environment file of the SQL client, so 
> that the sink can be used for {{INSERT INTO}} statements.
> Below are a few major changes in mind. 
> 1) Add TableSinkFactory/TableSinkFactoryService similar to 
> TableSourceFactory/TableSourceFactoryService
> 2) Add a common property called "type" with values (source, sink and both) 
> for both TableSource and TableSink.
> 3) in yaml file, replace "sources" with "tables", and use tableType to 
> identify whether it's source or sink.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #6201: [FLINK-8866][Table API & SQL] Add support for unif...

2018-07-05 Thread walterddr
Github user walterddr commented on a diff in the pull request:

https://github.com/apache/flink/pull/6201#discussion_r200522491
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/BatchTableEnvironment.scala
 ---
@@ -160,10 +173,34 @@ abstract class BatchTableEnvironment(
       throw new TableException("Same number of field names and types required.")
     }

-    tableSink match {
+    val configuredSink = tableSink.configure(fieldNames, fieldTypes)
+    registerTableSinkInternal(name, configuredSink)
+  }
+
+  def registerTableSink(name: String, configuredSink: TableSink[_]): Unit = {
--- End diff --

could probably move this to the base class `TableEnvironment`?


---


[GitHub] flink pull request #6201: [FLINK-8866][Table API & SQL] Add support for unif...

2018-07-05 Thread walterddr
Github user walterddr commented on a diff in the pull request:

https://github.com/apache/flink/pull/6201#discussion_r200524149
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/connector/TableConnectorFactoryService.scala
 ---
@@ -16,57 +16,57 @@
  * limitations under the License.
  */
 
-package org.apache.flink.table.sources
+package org.apache.flink.table.connector
 
 import java.util.{ServiceConfigurationError, ServiceLoader}
 
-import org.apache.flink.table.api.{AmbiguousTableSourceException, NoMatchingTableSourceException, TableException, ValidationException}
-import org.apache.flink.table.descriptors.ConnectorDescriptorValidator.CONNECTOR_PROPERTY_VERSION
-import org.apache.flink.table.descriptors.FormatDescriptorValidator.FORMAT_PROPERTY_VERSION
-import org.apache.flink.table.descriptors.MetadataValidator.METADATA_PROPERTY_VERSION
-import org.apache.flink.table.descriptors.StatisticsValidator.STATISTICS_PROPERTY_VERSION
-import org.apache.flink.table.descriptors._
+import org.apache.flink.table.api._
+import org.apache.flink.table.descriptors.ConnectorDescriptorValidator._
+import org.apache.flink.table.descriptors.FormatDescriptorValidator._
+import org.apache.flink.table.descriptors.MetadataValidator._
+import org.apache.flink.table.descriptors.StatisticsValidator._
+import org.apache.flink.table.descriptors.{DescriptorProperties, TableDescriptor, TableDescriptorValidator}
+import org.apache.flink.table.sinks.TableSink
+import org.apache.flink.table.sources.TableSource
 import org.apache.flink.table.util.Logging

-import scala.collection.JavaConverters._
-import scala.collection.mutable
+import _root_.scala.collection.JavaConverters._
+import _root_.scala.collection.mutable

 /**
-  * Service provider interface for finding suitable table source factories for the given properties.
+  * Unified interface to create TableConnectors, e.g. [[org.apache.flink.table.sources.TableSource]]
+  * and [[org.apache.flink.table.sinks.TableSink]].
   */
-object TableSourceFactoryService extends Logging {
+class TableConnectorFactoryService[T] extends Logging {
--- End diff --

Also, just `TableFactoryService`?


---


[jira] [Commented] (FLINK-8866) Create unified interfaces to configure and instantiate TableSinks

2018-07-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534328#comment-16534328
 ] 

ASF GitHub Bot commented on FLINK-8866:
---

Github user walterddr commented on a diff in the pull request:

https://github.com/apache/flink/pull/6201#discussion_r200521017
  
--- Diff: 
flink-libraries/flink-sql-client/src/main/java/org/apache/flink/table/client/config/Environment.java
 ---
@@ -56,23 +58,44 @@ public Environment() {
		return tables;
	}

+	private static TableDescriptor create(String name, Map<String, Object> config) {
+		if (!config.containsKey(TableDescriptorValidator.TABLE_TYPE())) {
+			throw new SqlClientException("The 'type' attribute of a table is missing.");
+		}
+		final String tableType = (String) config.get(TableDescriptorValidator.TABLE_TYPE());
+		if (tableType.equals(TableDescriptorValidator.TABLE_TYPE_VALUE_SOURCE())) {
+			return new Source(name, ConfigUtil.normalizeYaml(config));
+		} else if (tableType.equals(TableDescriptorValidator.TABLE_TYPE_VALUE_SINK())) {
+			return new Sink(name, ConfigUtil.normalizeYaml(config));
+		} else if (tableType.equals(TableDescriptorValidator.TABLE_TYPE_VALUE_SOURCE_SINK())) {
+			return new SourceSink(name, ConfigUtil.normalizeYaml(config));
+		}
+		return null;
+	}
+
	public void setTables(List<Map<String, Object>> tables) {
		this.tables = new HashMap<>(tables.size());
		tables.forEach(config -> {
-			if (!config.containsKey(TableDescriptorValidator.TABLE_TYPE())) {
-				throw new SqlClientException("The 'type' attribute of a table is missing.");
+			if (!config.containsKey(NAME)) {
+				throw new SqlClientException("The 'name' attribute of a table is missing.");
			}
-			if (config.get(TableDescriptorValidator.TABLE_TYPE()).equals(TableDescriptorValidator.TABLE_TYPE_VALUE_SOURCE())) {
-				config.remove(TableDescriptorValidator.TABLE_TYPE());
-				final Source s = Source.create(config);
-				if (this.tables.containsKey(s.getName())) {
-					throw new SqlClientException("Duplicate source name '" + s + "'.");
-				}
-				this.tables.put(s.getName(), s);
-			} else {
+			final Object name = config.get(NAME);
+			if (name == null || !(name instanceof String) || ((String) name).length() <= 0) {
+				throw new SqlClientException("Invalid table name '" + name + "'.");
+			}
+			final String tableName = (String) name;
+			final Map<String, Object> properties = new HashMap<>(config);
+			properties.remove(NAME);
+
+			TableDescriptor tableDescriptor = create(tableName, properties);
+			if (null == tableDescriptor) {
				throw new SqlClientException(
-					"Invalid table 'type' attribute value, only 'source' is supported");
+					"Invalid table 'type' attribute value, only 'source' or 'sink' is supported");
--- End diff --

missing `both` ?


> Create unified interfaces to configure and instantiate TableSinks
> 
>
> Key: FLINK-8866
> URL: https://issues.apache.org/jira/browse/FLINK-8866
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Shuyi Chen
>Priority: Major
>  Labels: pull-request-available
>
> Similar to the efforts done in FLINK-8240. We need unified ways to configure 
> and instantiate TableSinks. Among other applications, this is necessary in 
> order to declare table sinks in an environment file of the SQL client, so 
> that the sink can be used for {{INSERT INTO}} statements.
> Below are a few major changes in mind. 
> 1) Add TableSinkFactory/TableSinkFactoryService similar to 
> TableSourceFactory/TableSourceFactoryService
> 2) Add a common property called "type" with values (source, sink and both) 
> for both TableSource and TableSink.
> 3) in yaml file, replace "sources" with "tables", and use tableType to 
> identify whether it's source or sink.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #6201: [FLINK-8866][Table API & SQL] Add support for unif...

2018-07-05 Thread walterddr
Github user walterddr commented on a diff in the pull request:

https://github.com/apache/flink/pull/6201#discussion_r200521017
  
--- Diff: 
flink-libraries/flink-sql-client/src/main/java/org/apache/flink/table/client/config/Environment.java
 ---
@@ -56,23 +58,44 @@ public Environment() {
		return tables;
	}

+	private static TableDescriptor create(String name, Map<String, Object> config) {
+		if (!config.containsKey(TableDescriptorValidator.TABLE_TYPE())) {
+			throw new SqlClientException("The 'type' attribute of a table is missing.");
+		}
+		final String tableType = (String) config.get(TableDescriptorValidator.TABLE_TYPE());
+		if (tableType.equals(TableDescriptorValidator.TABLE_TYPE_VALUE_SOURCE())) {
+			return new Source(name, ConfigUtil.normalizeYaml(config));
+		} else if (tableType.equals(TableDescriptorValidator.TABLE_TYPE_VALUE_SINK())) {
+			return new Sink(name, ConfigUtil.normalizeYaml(config));
+		} else if (tableType.equals(TableDescriptorValidator.TABLE_TYPE_VALUE_SOURCE_SINK())) {
+			return new SourceSink(name, ConfigUtil.normalizeYaml(config));
+		}
+		return null;
+	}
+
	public void setTables(List<Map<String, Object>> tables) {
		this.tables = new HashMap<>(tables.size());
		tables.forEach(config -> {
-			if (!config.containsKey(TableDescriptorValidator.TABLE_TYPE())) {
-				throw new SqlClientException("The 'type' attribute of a table is missing.");
+			if (!config.containsKey(NAME)) {
+				throw new SqlClientException("The 'name' attribute of a table is missing.");
			}
-			if (config.get(TableDescriptorValidator.TABLE_TYPE()).equals(TableDescriptorValidator.TABLE_TYPE_VALUE_SOURCE())) {
-				config.remove(TableDescriptorValidator.TABLE_TYPE());
-				final Source s = Source.create(config);
-				if (this.tables.containsKey(s.getName())) {
-					throw new SqlClientException("Duplicate source name '" + s + "'.");
-				}
-				this.tables.put(s.getName(), s);
-			} else {
+			final Object name = config.get(NAME);
+			if (name == null || !(name instanceof String) || ((String) name).length() <= 0) {
+				throw new SqlClientException("Invalid table name '" + name + "'.");
+			}
+			final String tableName = (String) name;
+			final Map<String, Object> properties = new HashMap<>(config);
+			properties.remove(NAME);
+
+			TableDescriptor tableDescriptor = create(tableName, properties);
+			if (null == tableDescriptor) {
				throw new SqlClientException(
-					"Invalid table 'type' attribute value, only 'source' is supported");
+					"Invalid table 'type' attribute value, only 'source' or 'sink' is supported");
--- End diff --

missing `both` ?


---


[GitHub] flink pull request #6201: [FLINK-8866][Table API & SQL] Add support for unif...

2018-07-05 Thread walterddr
Github user walterddr commented on a diff in the pull request:

https://github.com/apache/flink/pull/6201#discussion_r200524249
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/descriptors/TableSinkDescriptor.scala
 ---
@@ -0,0 +1,30 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.descriptors
+
+/**
+  * Common class for all descriptors describing a table sink.
+  */
+abstract class TableSinkDescriptor extends TableDescriptor {
+  override private[flink] def addProperties(properties: DescriptorProperties): Unit = {
--- End diff --

+1 Should be able to unify


---


[jira] [Commented] (FLINK-8866) Create unified interfaces to configure and instantiate TableSinks

2018-07-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534330#comment-16534330
 ] 

ASF GitHub Bot commented on FLINK-8866:
---

Github user walterddr commented on a diff in the pull request:

https://github.com/apache/flink/pull/6201#discussion_r200522491
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/BatchTableEnvironment.scala
 ---
@@ -160,10 +173,34 @@ abstract class BatchTableEnvironment(
       throw new TableException("Same number of field names and types required.")
     }

-    tableSink match {
+    val configuredSink = tableSink.configure(fieldNames, fieldTypes)
+    registerTableSinkInternal(name, configuredSink)
+  }
+
+  def registerTableSink(name: String, configuredSink: TableSink[_]): Unit = {
--- End diff --

could probably move this to the base class `TableEnvironment`?


> Create unified interfaces to configure and instantiate TableSinks
> 
>
> Key: FLINK-8866
> URL: https://issues.apache.org/jira/browse/FLINK-8866
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Shuyi Chen
>Priority: Major
>  Labels: pull-request-available
>
> Similar to the efforts done in FLINK-8240. We need unified ways to configure 
> and instantiate TableSinks. Among other applications, this is necessary in 
> order to declare table sinks in an environment file of the SQL client, so 
> that the sink can be used for {{INSERT INTO}} statements.
> Below are a few major changes in mind. 
> 1) Add TableSinkFactory/TableSinkFactoryService similar to 
> TableSourceFactory/TableSourceFactoryService
> 2) Add a common property called "type" with values (source, sink and both) 
> for both TableSource and TableSink.
> 3) in yaml file, replace "sources" with "tables", and use tableType to 
> identify whether it's source or sink.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #6201: [FLINK-8866][Table API & SQL] Add support for unif...

2018-07-05 Thread walterddr
Github user walterddr commented on a diff in the pull request:

https://github.com/apache/flink/pull/6201#discussion_r200524110
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/connector/TableConnectorFactoryService.scala
 ---
@@ -16,57 +16,57 @@
  * limitations under the License.
  */
 
-package org.apache.flink.table.sources
+package org.apache.flink.table.connector
 
 import java.util.{ServiceConfigurationError, ServiceLoader}
 
-import org.apache.flink.table.api.{AmbiguousTableSourceException, NoMatchingTableSourceException, TableException, ValidationException}
-import org.apache.flink.table.descriptors.ConnectorDescriptorValidator.CONNECTOR_PROPERTY_VERSION
-import org.apache.flink.table.descriptors.FormatDescriptorValidator.FORMAT_PROPERTY_VERSION
-import org.apache.flink.table.descriptors.MetadataValidator.METADATA_PROPERTY_VERSION
-import org.apache.flink.table.descriptors.StatisticsValidator.STATISTICS_PROPERTY_VERSION
-import org.apache.flink.table.descriptors._
+import org.apache.flink.table.api._
+import org.apache.flink.table.descriptors.ConnectorDescriptorValidator._
+import org.apache.flink.table.descriptors.FormatDescriptorValidator._
+import org.apache.flink.table.descriptors.MetadataValidator._
+import org.apache.flink.table.descriptors.StatisticsValidator._
+import org.apache.flink.table.descriptors.{DescriptorProperties, TableDescriptor, TableDescriptorValidator}
+import org.apache.flink.table.sinks.TableSink
+import org.apache.flink.table.sources.TableSource
 import org.apache.flink.table.util.Logging

-import scala.collection.JavaConverters._
-import scala.collection.mutable
+import _root_.scala.collection.JavaConverters._
+import _root_.scala.collection.mutable

 /**
-  * Service provider interface for finding suitable table source factories for the given properties.
+  * Unified interface to create TableConnectors, e.g. [[org.apache.flink.table.sources.TableSource]]
+  * and [[org.apache.flink.table.sinks.TableSink]].
   */
-object TableSourceFactoryService extends Logging {
+class TableConnectorFactoryService[T] extends Logging {
--- End diff --

+1 


---


[jira] [Commented] (FLINK-8866) Create unified interfaces to configure and instantiate TableSinks

2018-07-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534333#comment-16534333
 ] 

ASF GitHub Bot commented on FLINK-8866:
---

Github user walterddr commented on a diff in the pull request:

https://github.com/apache/flink/pull/6201#discussion_r200523824
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/schema/TableSourceSinkTable.scala
 ---
@@ -0,0 +1,43 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.plan.schema
+
+import org.apache.calcite.rel.`type`.{RelDataType, RelDataTypeFactory}
+import org.apache.calcite.schema.Statistic
+import org.apache.calcite.schema.impl.AbstractTable
+
+class TableSourceSinkTable[T1, T2](val tableSourceTableOpt: Option[TableSourceTable[T1]],
--- End diff --

Huge +1. My understanding is that this will be the overall class holding a 
table source, a sink, or both. `TableSourceSinkTable` seems redundant as a name. 


> Create unified interfaces to configure and instantiate TableSinks
> 
>
> Key: FLINK-8866
> URL: https://issues.apache.org/jira/browse/FLINK-8866
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Shuyi Chen
>Priority: Major
>  Labels: pull-request-available
>
> Similar to the efforts done in FLINK-8240. We need unified ways to configure 
> and instantiate TableSinks. Among other applications, this is necessary in 
> order to declare table sinks in an environment file of the SQL client, so 
> that the sink can be used for {{INSERT INTO}} statements.
> Below are a few major changes in mind. 
> 1) Add TableSinkFactory/TableSinkFactoryService similar to 
> TableSourceFactory/TableSourceFactoryService
> 2) Add a common property called "type" with values (source, sink and both) 
> for both TableSource and TableSink.
> 3) in yaml file, replace "sources" with "tables", and use tableType to 
> identify whether it's source or sink.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-8866) Create unified interfaces to configure and instantiate TableSinks

2018-07-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534331#comment-16534331
 ] 

ASF GitHub Bot commented on FLINK-8866:
---

Github user walterddr commented on a diff in the pull request:

https://github.com/apache/flink/pull/6201#discussion_r200524149
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/connector/TableConnectorFactoryService.scala
 ---
@@ -16,57 +16,57 @@
  * limitations under the License.
  */
 
-package org.apache.flink.table.sources
+package org.apache.flink.table.connector
 
 import java.util.{ServiceConfigurationError, ServiceLoader}
 
-import org.apache.flink.table.api.{AmbiguousTableSourceException, NoMatchingTableSourceException, TableException, ValidationException}
-import org.apache.flink.table.descriptors.ConnectorDescriptorValidator.CONNECTOR_PROPERTY_VERSION
-import org.apache.flink.table.descriptors.FormatDescriptorValidator.FORMAT_PROPERTY_VERSION
-import org.apache.flink.table.descriptors.MetadataValidator.METADATA_PROPERTY_VERSION
-import org.apache.flink.table.descriptors.StatisticsValidator.STATISTICS_PROPERTY_VERSION
-import org.apache.flink.table.descriptors._
+import org.apache.flink.table.api._
+import org.apache.flink.table.descriptors.ConnectorDescriptorValidator._
+import org.apache.flink.table.descriptors.FormatDescriptorValidator._
+import org.apache.flink.table.descriptors.MetadataValidator._
+import org.apache.flink.table.descriptors.StatisticsValidator._
+import org.apache.flink.table.descriptors.{DescriptorProperties, TableDescriptor, TableDescriptorValidator}
+import org.apache.flink.table.sinks.TableSink
+import org.apache.flink.table.sources.TableSource
 import org.apache.flink.table.util.Logging

-import scala.collection.JavaConverters._
-import scala.collection.mutable
+import _root_.scala.collection.JavaConverters._
+import _root_.scala.collection.mutable

 /**
-  * Service provider interface for finding suitable table source factories for the given properties.
+  * Unified interface to create TableConnectors, e.g. [[org.apache.flink.table.sources.TableSource]]
+  * and [[org.apache.flink.table.sinks.TableSink]].
   */
-object TableSourceFactoryService extends Logging {
+class TableConnectorFactoryService[T] extends Logging {
--- End diff --

Also, just `TableFactoryService`?


> Create unified interfaces to configure and instantiate TableSinks
> 
>
> Key: FLINK-8866
> URL: https://issues.apache.org/jira/browse/FLINK-8866
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Shuyi Chen
>Priority: Major
>  Labels: pull-request-available
>
> Similar to the efforts done in FLINK-8240. We need unified ways to configure 
> and instantiate TableSinks. Among other applications, this is necessary in 
> order to declare table sinks in an environment file of the SQL client, so 
> that the sink can be used for {{INSERT INTO}} statements.
> Below are a few major changes in mind. 
> 1) Add TableSinkFactory/TableSinkFactoryService similar to 
> TableSourceFactory/TableSourceFactoryService
> 2) Add a common property called "type" with values (source, sink and both) 
> for both TableSource and TableSink.
> 3) in yaml file, replace "sources" with "tables", and use tableType to 
> identify whether it's source or sink.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-8866) Create unified interfaces to configure and instantiate TableSinks

2018-07-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534327#comment-16534327
 ] 

ASF GitHub Bot commented on FLINK-8866:
---

Github user walterddr commented on a diff in the pull request:

https://github.com/apache/flink/pull/6201#discussion_r200523261
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/connector/TableConnectorFactory.scala
 ---
@@ -16,21 +16,18 @@
  * limitations under the License.
  */
 
-package org.apache.flink.table.sources
+package org.apache.flink.table.connector
 
 import java.util
 
-/**
-  * A factory to create a [[TableSource]]. This factory is used with Java's Service Provider
-  * Interfaces (SPI) for discovering. A factory is called with a set of normalized properties that
-  * describe the desired table source. The factory allows for matching to the given set of
-  * properties and creating a configured [[TableSource]] accordingly.
-  *
-  * Classes that implement this interface need to be added to the
-  * "META_INF/services/org.apache.flink.table.sources.TableSourceFactory' file of a JAR file in
-  * the current classpath to be found.
-  */
-trait TableSourceFactory[T] {
+trait TableConnectorFactory[T] {
--- End diff --

+1 I think the most baffling point so far has been the 
`Table*Connector*Factory` naming :-)


> Create unified interfaces to configure and instantiate TableSinks
> 
>
> Key: FLINK-8866
> URL: https://issues.apache.org/jira/browse/FLINK-8866
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Shuyi Chen
>Priority: Major
>  Labels: pull-request-available
>
> Similar to the efforts done in FLINK-8240. We need unified ways to configure 
> and instantiate TableSinks. Among other applications, this is necessary in 
> order to declare table sinks in an environment file of the SQL client, so 
> that the sink can be used for {{INSERT INTO}} statements.
> Below are a few major changes in mind. 
> 1) Add TableSinkFactory/TableSinkFactoryService similar to 
> TableSourceFactory/TableSourceFactoryService
> 2) Add a common property called "type" with values (source, sink and both) 
> for both TableSource and TableSink.
> 3) in yaml file, replace "sources" with "tables", and use tableType to 
> identify whether it's source or sink.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #6201: [FLINK-8866][Table API & SQL] Add support for unif...

2018-07-05 Thread walterddr
Github user walterddr commented on a diff in the pull request:

https://github.com/apache/flink/pull/6201#discussion_r200523261
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/connector/TableConnectorFactory.scala
 ---
@@ -16,21 +16,18 @@
  * limitations under the License.
  */
 
-package org.apache.flink.table.sources
+package org.apache.flink.table.connector
 
 import java.util
 
-/**
-  * A factory to create a [[TableSource]]. This factory is used with Java's Service Provider
-  * Interfaces (SPI) for discovering. A factory is called with a set of normalized properties that
-  * describe the desired table source. The factory allows for matching to the given set of
-  * properties and creating a configured [[TableSource]] accordingly.
-  *
-  * Classes that implement this interface need to be added to the
-  * "META_INF/services/org.apache.flink.table.sources.TableSourceFactory' file of a JAR file in
-  * the current classpath to be found.
-  */
-trait TableSourceFactory[T] {
+trait TableConnectorFactory[T] {
--- End diff --

+1 I think the most baffling point so far has been the 
`Table*Connector*Factory` naming :-)


---


[GitHub] flink pull request #6201: [FLINK-8866][Table API & SQL] Add support for unif...

2018-07-05 Thread walterddr
Github user walterddr commented on a diff in the pull request:

https://github.com/apache/flink/pull/6201#discussion_r200521222
  
--- Diff: 
flink-libraries/flink-sql-client/src/main/java/org/apache/flink/table/client/config/Environment.java
 ---
@@ -56,23 +58,44 @@ public Environment() {
		return tables;
	}

+	private static TableDescriptor create(String name, Map<String, Object> config) {
+		if (!config.containsKey(TableDescriptorValidator.TABLE_TYPE())) {
+			throw new SqlClientException("The 'type' attribute of a table is missing.");
+		}
+		final String tableType = (String) config.get(TableDescriptorValidator.TABLE_TYPE());
+		if (tableType.equals(TableDescriptorValidator.TABLE_TYPE_VALUE_SOURCE())) {
+			return new Source(name, ConfigUtil.normalizeYaml(config));
+		} else if (tableType.equals(TableDescriptorValidator.TABLE_TYPE_VALUE_SINK())) {
+			return new Sink(name, ConfigUtil.normalizeYaml(config));
+		} else if (tableType.equals(TableDescriptorValidator.TABLE_TYPE_VALUE_SOURCE_SINK())) {
+			return new SourceSink(name, ConfigUtil.normalizeYaml(config));
+		}
+		return null;
+	}
+
	public void setTables(List<Map<String, Object>> tables) {
		this.tables = new HashMap<>(tables.size());
		tables.forEach(config -> {
-			if (!config.containsKey(TableDescriptorValidator.TABLE_TYPE())) {
-				throw new SqlClientException("The 'type' attribute of a table is missing.");
+			if (!config.containsKey(NAME)) {
+				throw new SqlClientException("The 'name' attribute of a table is missing.");
			}
-			if (config.get(TableDescriptorValidator.TABLE_TYPE()).equals(TableDescriptorValidator.TABLE_TYPE_VALUE_SOURCE())) {
-				config.remove(TableDescriptorValidator.TABLE_TYPE());
-				final Source s = Source.create(config);
-				if (this.tables.containsKey(s.getName())) {
-					throw new SqlClientException("Duplicate source name '" + s + "'.");
-				}
-				this.tables.put(s.getName(), s);
-			} else {
+			final Object name = config.get(NAME);
+			if (name == null || !(name instanceof String) || ((String) name).length() <= 0) {
+				throw new SqlClientException("Invalid table name '" + name + "'.");
+			}
+			final String tableName = (String) name;
+			final Map<String, Object> properties = new HashMap<>(config);
+			properties.remove(NAME);
+
+			TableDescriptor tableDescriptor = create(tableName, properties);
+			if (null == tableDescriptor) {
				throw new SqlClientException(
-					"Invalid table 'type' attribute value, only 'source' is supported");
+					"Invalid table 'type' attribute value, only 'source' or 'sink' is supported");
+			}
+			if (this.tables.containsKey(tableName)) {
+				throw new SqlClientException("Duplicate table name '" + tableName + "'.");
--- End diff --

if only `"source"` and `"sink"` is allowed, should we allow the same name 
but different type. e.g. `{"name": "t1", "type": "source"}` and `{"name": "t1", 
"type": "sink"}` co-exist? this is actually following up with the previous 
comment. I think we just need one, either should work.
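In other words, a hedged sketch of the name-only uniqueness check being suggested (the registry and exception types here are self-contained stand-ins for the `TableDescriptor`/`SqlClientException` in the diff above, not the PR's code):

```java
import java.util.HashMap;
import java.util.Map;

class TableRegistry {
    private final Map<String, Object> tables = new HashMap<>();

    // Key by name alone: {"name": "t1", "type": "source"} and
    // {"name": "t1", "type": "sink"} cannot co-exist.
    void register(String name, Object descriptor) {
        if (tables.putIfAbsent(name, descriptor) != null) {
            throw new IllegalArgumentException("Duplicate table name '" + name + "'.");
        }
    }
}
```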


---


[jira] [Commented] (FLINK-9730) avoid access static via class reference

2018-07-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534320#comment-16534320
 ] 

ASF GitHub Bot commented on FLINK-9730:
---

Github user lamber-ken commented on the issue:

https://github.com/apache/flink/pull/6247
  
@nekrassov, you are strict and right. 
When I reverted, the conversation between us was removed by GitHub.


> avoid access static via class reference
> ---
>
> Key: FLINK-9730
> URL: https://issues.apache.org/jira/browse/FLINK-9730
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.5.0
>Reporter: lamber-ken
>Assignee: lamber-ken
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.0
>
>
> [code refactor] access static via class reference
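For context, this kind of checkstyle-style cleanup is usually about accessing a static member through its declaring class rather than through an instance. A generic before/after with hypothetical names (not code from this PR):

{code:java}
class Config {
    static final int DEFAULT_PORT = 8081;
}

class Client {
    void connect(Config config) {
        int bad  = config.DEFAULT_PORT; // smell: static accessed via an instance reference
        int good = Config.DEFAULT_PORT; // preferred: access via the declaring class
    }
}
{code}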



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Commented] (FLINK-8094) Support other types for ExistingField rowtime extractor

2018-07-05 Thread Jungtaek Lim (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534318#comment-16534318
 ] 

Jungtaek Lim commented on FLINK-8094:
-

[~fhueske]

Thanks for reviewing and merging! I'll see if I can work on the follow-up 
tasks in a PR. Thanks again.

> Support other types for ExistingField rowtime extractor
> ---
>
> Key: FLINK-8094
> URL: https://issues.apache.org/jira/browse/FLINK-8094
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Xingcan Cui
>Assignee: Jungtaek Lim
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.0
>
>
> Currently, the {{ExistingField}} rowtime extractor only supports {{Long}} and 
> {{Timestamp}} fields. To enable other data types (e.g., {{String}}), we can 
> provide some system extraction functions and allow users to pass some 
> parameters via the constructor of {{ExistingField}}. There's [a simple 
> demo|https://github.com/xccui/flink/commit/afcc5f1a0ad92db08294199e61be5df72c1514f8]
>  which enables the {{String}} type rowtime by adding a UDF {{str2EventTime}}.
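For illustration, a minimal sketch of such an extraction UDF, assuming an ISO-like string format; this is not the code from the linked demo:

{code:java}
import org.apache.flink.table.functions.ScalarFunction;

// Hypothetical sketch: turn a String field into a SQL timestamp so it can be
// used as a rowtime attribute. Input format and null handling are assumptions.
public class Str2EventTime extends ScalarFunction {
    public java.sql.Timestamp eval(String s) {
        // expects "yyyy-mm-dd hh:mm:ss[.fffffffff]"
        return s == null ? null : java.sql.Timestamp.valueOf(s);
    }
}
{code}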



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9766) Incomplete/incorrect cleanup in RemoteInputChannelTest

2018-07-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534243#comment-16534243
 ] 

ASF GitHub Bot commented on FLINK-9766:
---

GitHub user NicoK opened a pull request:

https://github.com/apache/flink/pull/6271

[FLINK-9766][network][tests] fix cleanup in RemoteInputChannelTest

## What is the purpose of the change

If an assertion in the tests of `RemoteInputChannelTest` fails and, as a 
result, the cleanup fails as well, in most tests the original assertion error 
is swallowed, making it hard to debug.

Furthermore, `#testConcurrentRecycleAndRelease2()` does not clean up 
at all if successful.

## Brief change log

- add a helper method to unify (and correct) cleanup so that if an exception is 
thrown in the `finally` block, it will be added as a suppressed exception (see 
the sketch below)
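Generically, the pattern looks like this (a hedged sketch; the PR's actual helper may differ):

```java
// If both the test body and the cleanup throw, keep the test body's failure
// as the primary exception and attach the cleanup failure as suppressed,
// instead of letting the cleanup failure mask the interesting one.
static void runWithCleanup(Runnable testBody, Runnable cleanup) {
    Throwable primary = null;
    try {
        testBody.run();
    } catch (Throwable t) {
        primary = t;
    } finally {
        try {
            cleanup.run();
        } catch (Throwable t) {
            if (primary == null) {
                primary = t;
            } else {
                primary.addSuppressed(t);
            }
        }
    }
    if (primary != null) {
        throw primary instanceof RuntimeException
            ? (RuntimeException) primary
            : new RuntimeException(primary);
    }
}
```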

## Verifying this change

This change is a trivial rework / code cleanup without any test coverage.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): **no**
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: **no**
  - The serializers: **no**
  - The runtime per-record code paths (performance sensitive): **no**
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: **no**
  - The S3 file system connector: **no**

## Documentation

  - Does this pull request introduce a new feature? **no**
  - If yes, how is the feature documented? **not applicable**


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/NicoK/flink flink-9766

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/6271.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #6271


commit 0b623b66399915d43f29245da148fed63bf940bf
Author: Nico Kruber 
Date:   2018-07-05T13:49:15Z

[FLINK-9766][network][tests] fix cleanup in RemoteInputChannelTest

If an assertion in the test fails and as a result the cleanup fails, in most
tests the original assertion error is swallowed, making it hard to debug.

Furthermore, #testConcurrentRecycleAndRelease2() does not clean up at 
all if successful.




> Incomplete/incorrect cleanup in RemoteInputChannelTest
> --
>
> Key: FLINK-9766
> URL: https://issues.apache.org/jira/browse/FLINK-9766
> Project: Flink
>  Issue Type: Bug
>  Components: Network, Tests
>Affects Versions: 1.4.0, 1.5.0, 1.4.1, 1.4.2, 1.5.1
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.5.2, 1.6.0
>
>
> If an assertion in the tests fails and as a result the cleanup code wrapped 
> into a {{finally}} block also fails, in most tests the original assertion is 
> swallowed, making it hard to debug.
> Furthermore, {{testConcurrentRecycleAndRelease2()}} does not clean up at 
> all in the successful case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9766) Incomplete/incorrect cleanup in RemoteInputChannelTest

2018-07-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-9766:
--
Labels: pull-request-available  (was: )




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[GitHub] flink pull request #6270: [FLINK-9769][rest] Clear FileUpload attribute afte...

2018-07-05 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/6270

[FLINK-9769][rest] Clear FileUpload attribute after access

## What is the purpose of the change

Prevents resource leakage by clearing the `UPLOADED_FILES` attribute 
after accessing it.
Previously, a handler could see the result of another request's file 
upload if it happened to share the same channel. In that case the files 
had already been cleaned up, leading to `FileNotFoundException`s.
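The clear-on-read idea, as a minimal sketch against Netty's channel-attribute API (the key name and the value type are placeholders; `Attribute#getAndSet` is real Netty API, and Flink uses its shaded Netty):

```java
import io.netty.channel.Channel;
import io.netty.util.AttributeKey;

class UploadedFilesAccess {
    // Placeholder attribute key; Flink's handlers define their own.
    static final AttributeKey<Object> UPLOADED_FILES = AttributeKey.valueOf("uploadedFiles");

    // Read the uploaded files *and* clear the attribute in one step, so a later
    // request handled on the same channel cannot observe a stale (deleted) upload.
    static Object takeUploadedFiles(Channel channel) {
        return channel.attr(UPLOADED_FILES).getAndSet(null);
    }
}
```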

## Verifying this change

Manually verified. Submit a job via the Web UI, no exception should be 
logged.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 9769

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/6270.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #6270


commit 56af2054970dc3282b28dfc8cac4b0a4142bf1ab
Author: zentol 
Date:   2018-07-05T21:28:47Z

[FLINK-9769][rest] Clear FileUpload attribute after access




---


[jira] [Updated] (FLINK-9769) FileUploads may be shared across requests

2018-07-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-9769:
--
Labels: pull-request-available  (was: )

> FileUploads may be shared across requests
> -
>
> Key: FLINK-9769
> URL: https://issues.apache.org/jira/browse/FLINK-9769
> Project: Flink
>  Issue Type: Bug
>  Components: Job-Submission, REST, Webfrontend
>Affects Versions: 1.5.1, 1.6.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.5.1, 1.6.0
>
>
> Files uploaded by the {{FileUploadHandler}} are passed on to subsequent 
> handlers by storing them in a channel attribute.
> The files are retrieved from said attribute by the {{AbstractHandler}}.
> Apparently, since the attribute isn't set to null when retrieving the 
> contained value, it can happen that other handlers still see the value if 
> the channel is shared across several requests. (This behavior is surprising, 
> as I thought that each request has its own channel.)
> However, the retrieved files will no longer exist for any handler but the 
> original recipient, because it ensures that the files are cleaned up after 
> processing.
> Note that this issue existed for quite a while; it just didn't surface, as 
> only a single handler ever accessed these attributes.
> {code}
> 2018-07-05 21:55:09,297 ERROR 
> org.apache.flink.runtime.rest.handler.job.JobsOverviewHandler  - Request 
> processing failed.
> java.nio.file.NoSuchFileException: 
> C:\Users\Zento\AppData\Local\Temp\flink-web-2c7cae9f-e2d0-4a0e-8696-ef6894238a2e\flink-web-upload\b002df81-2d6f-4727-ae6e-aaa20be22b3b
>   at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:79)
>   at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
>   at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
>   at 
> sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:53)
>   at 
> sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:38)
>   at 
> sun.nio.fs.WindowsFileSystemProvider.readAttributes(WindowsFileSystemProvider.java:193)
>   at java.nio.file.Files.readAttributes(Files.java:1737)
>   at java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:219)
>   at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
>   at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:322)
>   at java.nio.file.Files.walkFileTree(Files.java:2662)
>   at java.nio.file.Files.walkFileTree(Files.java:2742)
>   at 
> org.apache.flink.runtime.rest.handler.FileUploads.getUploadedFiles(FileUploads.java:68)
>   at 
> org.apache.flink.runtime.rest.AbstractHandler.respondAsLeader(AbstractHandler.java:107)
>   at 
> org.apache.flink.runtime.rest.handler.RedirectHandler.lambda$null$0(RedirectHandler.java:139)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
>   at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>   at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
>   at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
>   at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
>   at java.lang.Thread.run(Thread.java:745)
> 2018-07-05 21:55:09,485 ERROR 
> org.apache.flink.runtime.webmonitor.handlers.JarListHandler   - Request 
> processing failed.
> java.nio.file.NoSuchFileException: 
> C:\Users\Zento\AppData\Local\Temp\flink-web-2c7cae9f-e2d0-4a0e-8696-ef6894238a2e\flink-web-upload\b002df81-2d6f-4727-ae6e-aaa20be22b3b
>   at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:79)
>   at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
>   at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
>   at 
> sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:53)
>   at 
> sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:38)
>   at 
> sun.nio.fs.WindowsFileSystemProvider.readAttributes(WindowsFileSystemProvider.java:193)
>   at 

[jira] [Commented] (FLINK-9769) FileUploads may be shared across requests

2018-07-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534185#comment-16534185
 ] 

ASF GitHub Bot commented on FLINK-9769:
---

GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/6270

[FLINK-9769][rest] Clear FileUpload attribute after access

## What is the purpose of the change

Prevents resource leakage by clearing the `UPLOADED_FILES` attribute 
after accessing it.
Previously, a handler could see the result of another request's file 
upload if it happened to share the same channel. In that case the files 
had already been cleaned up, leading to `FileNotFoundException`s.

## Verifying this change

Manually verified. Submit a job via the Web UI, no exception should be 
logged.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 9769

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/6270.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #6270


commit 56af2054970dc3282b28dfc8cac4b0a4142bf1ab
Author: zentol 
Date:   2018-07-05T21:28:47Z

[FLINK-9769][rest] Clear FileUpload attribute after access





[jira] [Closed] (FLINK-9756) Exceptions in BufferListener#notifyBufferAvailable do not trigger further listeners in LocalBufferPool#recycle()

2018-07-05 Thread Nico Kruber (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nico Kruber closed FLINK-9756.
--
   Resolution: Invalid
Fix Version/s: (was: 1.5.2)
   (was: 1.6.0)

Actually, this only happens to the {{BufferListener#notifyBufferAvailable()}} 
implementation of {{RemoteInputChannel}}, which does not recycle the given 
{{Buffer}} in case of errors. Let's solve this with FLINK-9755.
The implementation in {{PartitionRequestClientHandler.BufferListenerTask}} 
already recycles the buffer and therefore gets back into 
{{LocalBufferPool#recycle()}}.
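The contract described above, as a generic sketch (the types are stand-ins, not Flink's actual interfaces):

{code:java}
// Stand-in types: a listener that fails while consuming a buffer should
// recycle it itself, so the pool's recycle() runs again and the remaining
// listeners still get notified.
interface Buffer { void recycle(); }

class SafeListener {
    boolean notifyBufferAvailable(Buffer buffer) {
        try {
            handOver(buffer); // pass the buffer to the consumer
            return true;
        } catch (Throwable t) {
            buffer.recycle(); // on error, hand the buffer back to the pool
            throw t;
        }
    }

    private void handOver(Buffer buffer) { /* consumer logic */ }
}
{code}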

> Exceptions in BufferListener#notifyBufferAvailable do not trigger further 
> listeners in LocalBufferPool#recycle()
> 
>
> Key: FLINK-9756
> URL: https://issues.apache.org/jira/browse/FLINK-9756
> Project: Flink
>  Issue Type: Bug
>  Components: Network
>Affects Versions: 1.5.0
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>Priority: Major
>
> Any {{Exception}} thrown in {{BufferListener#notifyBufferAvailable}} will 
> currently not trigger calling further listeners in 
> {{LocalBufferPool#recycle()}} and only add the given memory segment to the 
> queue of available ones.
> Usually this will not be the last call to {{recycle()}}, so future calls may 
> still notify the listeners, but this introduces further delay in 
> configurations with a tight number of buffers in the local pool and with 
> listeners waiting on them when one task's listener has failed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9769) FileUploads may be shared across requests

2018-07-05 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-9769:

Description: 
Files uploaded by the {{FileUploadHandler}} are passed on to subsequent 
handlers by storing them in a channel attribute.
The files are retrieved from said attribute by the {{AbstractHandler}}.

Apparently, since the attribute isn't set to null when retrieving the contained 
value, it can happen that other handlers still see the value if the channel is 
shared across several requests. (This behavior is surprising, as I thought that 
each request has its own channel.)
However, the retrieved files will no longer exist for any handler but the 
original recipient, because it ensures that the files are cleaned up after 
processing.

Note that this issue existed for quite a while; it just didn't surface, as 
only a single handler ever accessed these attributes.

{code}

2018-07-05 21:55:09,297 ERROR 
org.apache.flink.runtime.rest.handler.job.JobsOverviewHandler  - Request 
processing failed.
java.nio.file.NoSuchFileException: 
C:\Users\Zento\AppData\Local\Temp\flink-web-2c7cae9f-e2d0-4a0e-8696-ef6894238a2e\flink-web-upload\b002df81-2d6f-4727-ae6e-aaa20be22b3b
at 
sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:79)
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
at 
sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:53)
at 
sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:38)
at 
sun.nio.fs.WindowsFileSystemProvider.readAttributes(WindowsFileSystemProvider.java:193)
at java.nio.file.Files.readAttributes(Files.java:1737)
at java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:219)
at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:322)
at java.nio.file.Files.walkFileTree(Files.java:2662)
at java.nio.file.Files.walkFileTree(Files.java:2742)
at 
org.apache.flink.runtime.rest.handler.FileUploads.getUploadedFiles(FileUploads.java:68)
at 
org.apache.flink.runtime.rest.AbstractHandler.respondAsLeader(AbstractHandler.java:107)
at 
org.apache.flink.runtime.rest.handler.RedirectHandler.lambda$null$0(RedirectHandler.java:139)
at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
at 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
at 
org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
2018-07-05 21:55:09,485 ERROR 
org.apache.flink.runtime.webmonitor.handlers.JarListHandler   - Request 
processing failed.
java.nio.file.NoSuchFileException: 
C:\Users\Zento\AppData\Local\Temp\flink-web-2c7cae9f-e2d0-4a0e-8696-ef6894238a2e\flink-web-upload\b002df81-2d6f-4727-ae6e-aaa20be22b3b
at 
sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:79)
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
at 
sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:53)
at 
sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:38)
at 
sun.nio.fs.WindowsFileSystemProvider.readAttributes(WindowsFileSystemProvider.java:193)
at java.nio.file.Files.readAttributes(Files.java:1737)
at java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:219)
at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:322)
at java.nio.file.Files.walkFileTree(Files.java:2662)
at java.nio.file.Files.walkFileTree(Files.java:2742)
at 
org.apache.flink.runtime.rest.handler.FileUploads.getUploadedFiles(FileUploads.java:68)
at 
org.apache.flink.runtime.rest.AbstractHandler.respondAsLeader(AbstractHandler.java:107)
at 

[jira] [Updated] (FLINK-9769) FileUploads may be shared across requests

2018-07-05 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-9769:

Description: 
Files uploaded by the {{FileUploadHandler}} are passed on to subsequent 
handlers by storing them in a channel attribute.
The files are retrieved from said attribute by the {{AbstractHandler}}.

Apparently, since the attribute isn't set to null when retrieving the contained 
value, it can happen that other handlers still see the value if the channel is 
shared across several requests. (This behavior is surprising, as I thought that 
each request has its own channel.)
However, the retrieved files will no longer exist for any handler but the 
original recipient, because it ensures that the files are cleaned up after 
processing.

{code}

2018-07-05 21:55:09,297 ERROR 
org.apache.flink.runtime.rest.handler.job.JobsOverviewHandler  - Request 
processing failed.
java.nio.file.NoSuchFileException: 
C:\Users\Zento\AppData\Local\Temp\flink-web-2c7cae9f-e2d0-4a0e-8696-ef6894238a2e\flink-web-upload\b002df81-2d6f-4727-ae6e-aaa20be22b3b
at 
sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:79)
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
at 
sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:53)
at 
sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:38)
at 
sun.nio.fs.WindowsFileSystemProvider.readAttributes(WindowsFileSystemProvider.java:193)
at java.nio.file.Files.readAttributes(Files.java:1737)
at java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:219)
at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:322)
at java.nio.file.Files.walkFileTree(Files.java:2662)
at java.nio.file.Files.walkFileTree(Files.java:2742)
at 
org.apache.flink.runtime.rest.handler.FileUploads.getUploadedFiles(FileUploads.java:68)
at 
org.apache.flink.runtime.rest.AbstractHandler.respondAsLeader(AbstractHandler.java:107)
at 
org.apache.flink.runtime.rest.handler.RedirectHandler.lambda$null$0(RedirectHandler.java:139)
at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
at 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
at 
org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
2018-07-05 21:55:09,485 ERROR 
org.apache.flink.runtime.webmonitor.handlers.JarListHandler   - Request 
processing failed.
java.nio.file.NoSuchFileException: 
C:\Users\Zento\AppData\Local\Temp\flink-web-2c7cae9f-e2d0-4a0e-8696-ef6894238a2e\flink-web-upload\b002df81-2d6f-4727-ae6e-aaa20be22b3b
at 
sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:79)
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
at 
sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:53)
at 
sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:38)
at 
sun.nio.fs.WindowsFileSystemProvider.readAttributes(WindowsFileSystemProvider.java:193)
at java.nio.file.Files.readAttributes(Files.java:1737)
at java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:219)
at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:322)
at java.nio.file.Files.walkFileTree(Files.java:2662)
at java.nio.file.Files.walkFileTree(Files.java:2742)
at 
org.apache.flink.runtime.rest.handler.FileUploads.getUploadedFiles(FileUploads.java:68)
at 
org.apache.flink.runtime.rest.AbstractHandler.respondAsLeader(AbstractHandler.java:107)
at 
org.apache.flink.runtime.rest.handler.RedirectHandler.lambda$null$0(RedirectHandler.java:139)
at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
  

[jira] [Updated] (FLINK-9769) FileUploads may be shared across requests

2018-07-05 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-9769:

Summary: FileUploads may be shared across requests  (was: Job submission 
via WebUI broken)

> FileUploads may be shared across requests
> -
>
> Key: FLINK-9769
> URL: https://issues.apache.org/jira/browse/FLINK-9769
> Project: Flink
>  Issue Type: Bug
>  Components: Job-Submission, REST, Webfrontend
>Affects Versions: 1.5.1, 1.6.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.5.1, 1.6.0
>
>
> The rework of the {{FileUploadHandler}} apparently broke the Web UI job 
> submission.
> It would be great if someone could check whether this also occurs on 1.6.
> {code}
> 2018-07-05 21:55:09,297 ERROR 
> org.apache.flink.runtime.rest.handler.job.JobsOverviewHandler  - Request 
> processing failed.
> java.nio.file.NoSuchFileException: 
> C:\Users\Zento\AppData\Local\Temp\flink-web-2c7cae9f-e2d0-4a0e-8696-ef6894238a2e\flink-web-upload\b002df81-2d6f-4727-ae6e-aaa20be22b3b
>   at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:79)
>   at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
>   at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
>   at 
> sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:53)
>   at 
> sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:38)
>   at 
> sun.nio.fs.WindowsFileSystemProvider.readAttributes(WindowsFileSystemProvider.java:193)
>   at java.nio.file.Files.readAttributes(Files.java:1737)
>   at java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:219)
>   at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
>   at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:322)
>   at java.nio.file.Files.walkFileTree(Files.java:2662)
>   at java.nio.file.Files.walkFileTree(Files.java:2742)
>   at 
> org.apache.flink.runtime.rest.handler.FileUploads.getUploadedFiles(FileUploads.java:68)
>   at 
> org.apache.flink.runtime.rest.AbstractHandler.respondAsLeader(AbstractHandler.java:107)
>   at 
> org.apache.flink.runtime.rest.handler.RedirectHandler.lambda$null$0(RedirectHandler.java:139)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
>   at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>   at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
>   at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
>   at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
>   at java.lang.Thread.run(Thread.java:745)
> 2018-07-05 21:55:09,485 ERROR 
> org.apache.flink.runtime.webmonitor.handlers.JarListHandler   - Request 
> processing failed.
> java.nio.file.NoSuchFileException: 
> C:\Users\Zento\AppData\Local\Temp\flink-web-2c7cae9f-e2d0-4a0e-8696-ef6894238a2e\flink-web-upload\b002df81-2d6f-4727-ae6e-aaa20be22b3b
>   at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:79)
>   at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
>   at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
>   at 
> sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:53)
>   at 
> sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:38)
>   at 
> sun.nio.fs.WindowsFileSystemProvider.readAttributes(WindowsFileSystemProvider.java:193)
>   at java.nio.file.Files.readAttributes(Files.java:1737)
>   at java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:219)
>   at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
>   at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:322)
>   at java.nio.file.Files.walkFileTree(Files.java:2662)
>   at java.nio.file.Files.walkFileTree(Files.java:2742)
>   at 
> org.apache.flink.runtime.rest.handler.FileUploads.getUploadedFiles(FileUploads.java:68)
>   at 
> org.apache.flink.runtime.rest.AbstractHandler.respondAsLeader(AbstractHandler.java:107)
>   at 
> 

[jira] [Updated] (FLINK-9770) UI jar list broken

2018-07-05 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-9770:
--
Labels: pull-request-available  (was: )

> UI jar list broken
> --
>
> Key: FLINK-9770
> URL: https://issues.apache.org/jira/browse/FLINK-9770
> Project: Flink
>  Issue Type: Bug
>  Components: Job-Submission, REST, Webfrontend
>Affects Versions: 1.5.1, 1.6.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.5.1, 1.6.0
>
>
> The jar listing in the UI is broken.
> The {{JarListHandler}} expects a specific naming scheme (<uuid>_<filename>) 
> for uploaded jars, which the {{FileUploadHandler}} previously adhered to.
> When the file uploads were generalized this naming scheme was removed from 
> the {{FileUploadHandler}}, but neither was the {{JarListHandler}} adjusted 
> nor was this behavior re-introduced in the {{JarUploadHandler}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9770) UI jar list broken

2018-07-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534149#comment-16534149
 ] 

ASF GitHub Bot commented on FLINK-9770:
---

GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/6269

[FLINK-9770][rest] Fix jar listing

## What is the purpose of the change

Ensures that uploaded jars adhere to the naming scheme that the 
`JarListHandler` expects.
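For illustration, storing an upload under such a scheme might look like this (the method and the exact scheme are assumptions, not the PR's code):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.UUID;

class JarStore {
    // Prefix the jar with a UUID so a listing handler that splits on '_'
    // can recover both a unique id and the original file name.
    static Path storeUploadedJar(Path uploadDir, Path tempFile) throws IOException {
        String storedName = UUID.randomUUID() + "_" + tempFile.getFileName();
        return Files.move(tempFile, uploadDir.resolve(storedName), StandardCopyOption.REPLACE_EXISTING);
    }
}
```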

## Verifying this change

Manually verified.

* upload any jar
* refresh page
* jar should show up

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 9770

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/6269.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #6269


commit 675d3a953c177cb3a140176062b112d9d9d9e2ed
Author: zentol 
Date:   2018-07-05T20:57:02Z

[FLINK-9770][rest] Fix jar listing







--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Created] (FLINK-9770) UI jar list broken

2018-07-05 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-9770:
---

 Summary: UI jar list broken
 Key: FLINK-9770
 URL: https://issues.apache.org/jira/browse/FLINK-9770
 Project: Flink
  Issue Type: Bug
  Components: Job-Submission, REST, Webfrontend
Affects Versions: 1.5.1, 1.6.0
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.5.1, 1.6.0


The jar listing in the UI is broken.

The {{JarListHandler}} expects a specific naming scheme (<uuid>_<filename>) for 
uploaded jars, which the {{FileUploadHandler}} previously adhered to.

When the file uploads were generalized this naming scheme was removed from the 
{{FileUploadHandler}}, but neither was the {{JarListHandler}} adjusted nor was 
this behavior re-introduced in the {{JarUploadHandler}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9769) Job submission via WebUI broken

2018-07-05 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-9769:

Priority: Blocker  (was: Major)

> Job submission via WebUI broken
> ---
>
> Key: FLINK-9769
> URL: https://issues.apache.org/jira/browse/FLINK-9769
> Project: Flink
>  Issue Type: Bug
>  Components: Job-Submission, REST, Webfrontend
>Affects Versions: 1.5.1, 1.6.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.5.1, 1.6.0
>
>
> The rework of the {{FileUploadHandler}} apparently broke the Web UI job 
> submission.
> It would be great if someone could check whether this also occurs on 1.6.
> {code}
> 2018-07-05 21:55:09,297 ERROR 
> org.apache.flink.runtime.rest.handler.job.JobsOverviewHandler  - Request 
> processing failed.
> java.nio.file.NoSuchFileException: 
> C:\Users\Zento\AppData\Local\Temp\flink-web-2c7cae9f-e2d0-4a0e-8696-ef6894238a2e\flink-web-upload\b002df81-2d6f-4727-ae6e-aaa20be22b3b
>   at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:79)
>   at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
>   at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
>   at 
> sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:53)
>   at 
> sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:38)
>   at 
> sun.nio.fs.WindowsFileSystemProvider.readAttributes(WindowsFileSystemProvider.java:193)
>   at java.nio.file.Files.readAttributes(Files.java:1737)
>   at java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:219)
>   at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
>   at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:322)
>   at java.nio.file.Files.walkFileTree(Files.java:2662)
>   at java.nio.file.Files.walkFileTree(Files.java:2742)
>   at 
> org.apache.flink.runtime.rest.handler.FileUploads.getUploadedFiles(FileUploads.java:68)
>   at 
> org.apache.flink.runtime.rest.AbstractHandler.respondAsLeader(AbstractHandler.java:107)
>   at 
> org.apache.flink.runtime.rest.handler.RedirectHandler.lambda$null$0(RedirectHandler.java:139)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
>   at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>   at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
>   at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
>   at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
>   at java.lang.Thread.run(Thread.java:745)
> 2018-07-05 21:55:09,485 ERROR 
> org.apache.flink.runtime.webmonitor.handlers.JarListHandler   - Request 
> processing failed.
> java.nio.file.NoSuchFileException: 
> C:\Users\Zento\AppData\Local\Temp\flink-web-2c7cae9f-e2d0-4a0e-8696-ef6894238a2e\flink-web-upload\b002df81-2d6f-4727-ae6e-aaa20be22b3b
>   at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:79)
>   at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
>   at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
>   at 
> sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:53)
>   at 
> sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:38)
>   at 
> sun.nio.fs.WindowsFileSystemProvider.readAttributes(WindowsFileSystemProvider.java:193)
>   at java.nio.file.Files.readAttributes(Files.java:1737)
>   at java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:219)
>   at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
>   at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:322)
>   at java.nio.file.Files.walkFileTree(Files.java:2662)
>   at java.nio.file.Files.walkFileTree(Files.java:2742)
>   at 
> org.apache.flink.runtime.rest.handler.FileUploads.getUploadedFiles(FileUploads.java:68)
>   at 
> org.apache.flink.runtime.rest.AbstractHandler.respondAsLeader(AbstractHandler.java:107)
>   at 
> org.apache.flink.runtime.rest.handler.RedirectHandler.lambda$null$0(RedirectHandler.java:139)
>   



[jira] [Commented] (FLINK-5750) Incorrect translation of n-ary Union

2018-07-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534111#comment-16534111
 ] 

ASF GitHub Bot commented on FLINK-5750:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/6267#discussion_r200478263
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/nodes/dataset/DataSetUnion.scala
 ---
@@ -36,22 +39,21 @@ import scala.collection.JavaConverters._
 class DataSetUnion(
 cluster: RelOptCluster,
 traitSet: RelTraitSet,
-leftNode: RelNode,
-rightNode: RelNode,
-rowRelDataType: RelDataType)
-  extends BiRel(cluster, traitSet, leftNode, rightNode)
+inputs: JList[RelNode],
+rowRelDataType: RelDataType,
+all: Boolean)
--- End diff --

we don't need the `all` parameter because `DataStreamUnion` only supports 
`UNION ALL` semantics.


> Incorrect translation of n-ary Union
> 
>
> Key: FLINK-5750
> URL: https://issues.apache.org/jira/browse/FLINK-5750
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.2.0, 1.3.4, 1.5.0, 1.4.2, 1.6.0
>Reporter: Anton Mushin
>Assignee: Alexander Koltsov
>Priority: Critical
>  Labels: pull-request-available
>
> Calcite's union operator supports more than two input relations. However, 
> Flink's translation rules only consider the first two relations because we 
> assumed that Calcite's union is binary. 
> This problem exists for batch and streaming queries.
> It seems that Calcite only generates non-binary Unions in rare cases 
> ({{(SELECT * FROM t) UNION ALL (SELECT * FROM t) UNION ALL (SELECT * FROM 
> t)}} results in two binary union operators) but the problem definitely needs 
> to be fixed.
> The following query can be used to validate the problem. 
> {code:java}
> @Test
>   public void testValuesWithCast() throws Exception {
>   ExecutionEnvironment env = 
> ExecutionEnvironment.getExecutionEnvironment();
>   BatchTableEnvironment tableEnv = 
> TableEnvironment.getTableEnvironment(env, config());
>   String sqlQuery = "VALUES (1, cast(1 as BIGINT) )," +
>   "(2, cast(2 as BIGINT))," +
>   "(3, cast(3 as BIGINT))";
>   String sqlQuery2 = "VALUES (1,1)," +
>   "(2, 2)," +
>   "(3, 3)";
>   Table result = tableEnv.sql(sqlQuery);
>   DataSet<Row> resultSet = tableEnv.toDataSet(result, Row.class);
>   List<Row> results = resultSet.collect();
>   Table result2 = tableEnv.sql(sqlQuery2);
>   DataSet<Row> resultSet2 = tableEnv.toDataSet(result2, Row.class);
>   List<Row> results2 = resultSet2.collect();
>   String expected = "1,1\n2,2\n3,3";
>   compareResultAsText(results2, expected);
>   compareResultAsText(results, expected);
>   }
> {code}
> AR for {{results}} variable
> {noformat}
> java.lang.AssertionError: Different elements in arrays: expected 3 elements 
> and received 2
>  expected: [1,1, 2,2, 3,3]
>  received: [1,1, 2,2] 
> Expected :3
> Actual   :2
> {noformat}
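One way to picture the required change, independent of the planner code: translate every input relation and fold the n-ary union into binary unions, instead of only looking at the first two inputs (a hedged, generic sketch):

{code:java}
import java.util.List;
import java.util.function.BinaryOperator;

class UnionAll {
    // Fold an n-ary union left-to-right over *all* inputs; assumes at least
    // one input. T stands in for a translated relation (e.g. a DataSet).
    static <T> T unionAll(List<T> translatedInputs, BinaryOperator<T> union) {
        T result = translatedInputs.get(0);
        for (int i = 1; i < translatedInputs.size(); i++) {
            result = union.apply(result, translatedInputs.get(i));
        }
        return result;
    }
}
{code}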



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-5750) Incorrect translation of n-ary Union

2018-07-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534112#comment-16534112
 ] 

ASF GitHub Bot commented on FLINK-5750:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/6267#discussion_r200478336
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/nodes/dataset/DataSetUnion.scala
 ---
@@ -36,22 +39,21 @@ import scala.collection.JavaConverters._
 class DataSetUnion(
 cluster: RelOptCluster,
 traitSet: RelTraitSet,
-leftNode: RelNode,
-rightNode: RelNode,
-rowRelDataType: RelDataType)
-  extends BiRel(cluster, traitSet, leftNode, rightNode)
+inputs: JList[RelNode],
+rowRelDataType: RelDataType,
+all: Boolean)
+  extends Union(cluster, traitSet, inputs, all)
--- End diff --

Change to `Union(cluster, traitSet, inputs, true)`





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Commented] (FLINK-5750) Incorrect translation of n-ary Union

2018-07-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534113#comment-16534113
 ] 

ASF GitHub Bot commented on FLINK-5750:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/6267#discussion_r200480534
  
--- Diff: 
flink-libraries/flink-table/src/test/java/org/apache/flink/table/runtime/batch/sql/JavaSqlITCase.java
 ---
@@ -73,6 +73,30 @@ public void testValues() throws Exception {
compareResultAsText(results, expected);
}
 
+   @Test
+   public void testValuesWithCast() throws Exception {
--- End diff --

Can you move this test to 
`org.apache.flink.table.runtime.batch.sql.SetOperatorsITCase` and also add one 
to `org.apache.flink.table.runtime.stream.sql.SetOperatorsITCase`?

In addition, it would be good to have plan tests for this query in 
`org.apache.flink.table.api.batch.sql.SetOperatorsTest` and 
`org.apache.flink.table.api.stream.sql.SetOperatorsTest`.


> Incorrect translation of n-ary Union
> 
>
> Key: FLINK-5750
> URL: https://issues.apache.org/jira/browse/FLINK-5750
> Project: Flink
>  Issue Type: Bug
>  Components: Table API  SQL
>Affects Versions: 1.2.0, 1.3.4, 1.5.0, 1.4.2, 1.6.0
>Reporter: Anton Mushin
>Assignee: Alexander Koltsov
>Priority: Critical
>  Labels: pull-request-available
>
> Calcite's union operator is supports more than two input relations. However, 
> Flink's translation rules only consider the first two relations because we 
> assumed that Calcite's union is binary. 
> This problem exists for batch and streaming queries.
> It seems that Calcite only generates non-binary Unions in rare cases 
> ({{(SELECT * FROM t) UNION ALL (SELECT * FROM t) UNION ALL (SELECT * FROM 
> t)}} results in two binary union operators) but the problem definitely needs 
> to be fixed.
> The following query can be used to validate the problem. 
> {code:java}
> @Test
>   public void testValuesWithCast() throws Exception {
>   ExecutionEnvironment env = 
> ExecutionEnvironment.getExecutionEnvironment();
>   BatchTableEnvironment tableEnv = 
> TableEnvironment.getTableEnvironment(env, config());
>   String sqlQuery = "VALUES (1, cast(1 as BIGINT) )," +
>   "(2, cast(2 as BIGINT))," +
>   "(3, cast(3 as BIGINT))";
>   String sqlQuery2 = "VALUES (1,1)," +
>   "(2, 2)," +
>   "(3, 3)";
>   Table result = tableEnv.sql(sqlQuery);
>   DataSet<Row> resultSet = tableEnv.toDataSet(result, Row.class);
>   List<Row> results = resultSet.collect();
>   Table result2 = tableEnv.sql(sqlQuery2);
>   DataSet<Row> resultSet2 = tableEnv.toDataSet(result2, Row.class);
>   List<Row> results2 = resultSet2.collect();
>   String expected = "1,1\n2,2\n3,3";
>   compareResultAsText(results2, expected);
>   compareResultAsText(results, expected);
>   }
> {code}
> AR for {{results}} variable
> {noformat}
> java.lang.AssertionError: Different elements in arrays: expected 3 elements 
> and received 2
>  expected: [1,1, 2,2, 3,3]
>  received: [1,1, 2,2] 
> Expected :3
> Actual   :2
> {noformat}
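
The fix suggested by the review comments above is to fold the union over all inputs instead of assuming exactly two. A minimal sketch of the batch translation, assuming the 1.5-era {{DataSetRel}} interface (illustrative, not the merged code):

{code:scala}
override def translateToPlan(tableEnv: BatchTableEnvironment): DataSet[Row] = {
  // Translate every input of the n-ary union ...
  val inputDataSets = getInputs.asScala.map(
    _.asInstanceOf[DataSetRel].translateToPlan(tableEnv))
  // ... and chain them with binary DataSet#union calls.
  inputDataSets.reduce((left, right) => left.union(right))
}
{code}

This relies on {{scala.collection.JavaConverters._}}, which {{DataSetUnion.scala}} already imports.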



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-5750) Incorrect translation of n-ary Union

2018-07-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534114#comment-16534114
 ] 

ASF GitHub Bot commented on FLINK-5750:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/6267#discussion_r200478425
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/nodes/dataset/DataSetUnion.scala
 ---
@@ -36,22 +39,21 @@ import scala.collection.JavaConverters._
 class DataSetUnion(
--- End diff --

We need the same fix for `DataStreamUnion`
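
A hedged sketch of the analogous streaming translation, assuming the 1.5-era {{DataStreamRel}} interface (illustrative only):

{code:scala}
override def translateToPlan(
    tableEnv: StreamTableEnvironment,
    queryConfig: StreamQueryConfig): DataStream[CRow] = {
  // Same idea as on the batch side: translate all inputs of the n-ary
  // union, then reduce them pairwise with DataStream#union.
  getInputs.asScala
    .map(_.asInstanceOf[DataStreamRel].translateToPlan(tableEnv, queryConfig))
    .reduce((left, right) => left.union(right))
}
{code}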


> Incorrect translation of n-ary Union
> 
>
> Key: FLINK-5750
> URL: https://issues.apache.org/jira/browse/FLINK-5750
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.2.0, 1.3.4, 1.5.0, 1.4.2, 1.6.0
>Reporter: Anton Mushin
>Assignee: Alexander Koltsov
>Priority: Critical
>  Labels: pull-request-available
>
> Calcite's union operator supports more than two input relations. However, 
> Flink's translation rules only consider the first two relations because we 
> assumed that Calcite's union is binary. 
> This problem exists for batch and streaming queries.
> It seems that Calcite only generates non-binary Unions in rare cases 
> ({{(SELECT * FROM t) UNION ALL (SELECT * FROM t) UNION ALL (SELECT * FROM 
> t)}} results in two binary union operators) but the problem definitely needs 
> to be fixed.
> The following query can be used to validate the problem. 
> {code:java}
> @Test
>   public void testValuesWithCast() throws Exception {
>   ExecutionEnvironment env = 
> ExecutionEnvironment.getExecutionEnvironment();
>   BatchTableEnvironment tableEnv = 
> TableEnvironment.getTableEnvironment(env, config());
>   String sqlQuery = "VALUES (1, cast(1 as BIGINT) )," +
>   "(2, cast(2 as BIGINT))," +
>   "(3, cast(3 as BIGINT))";
>   String sqlQuery2 = "VALUES (1,1)," +
>   "(2, 2)," +
>   "(3, 3)";
>   Table result = tableEnv.sql(sqlQuery);
>   DataSet<Row> resultSet = tableEnv.toDataSet(result, Row.class);
>   List<Row> results = resultSet.collect();
>   Table result2 = tableEnv.sql(sqlQuery2);
>   DataSet<Row> resultSet2 = tableEnv.toDataSet(result2, Row.class);
>   List<Row> results2 = resultSet2.collect();
>   String expected = "1,1\n2,2\n3,3";
>   compareResultAsText(results2, expected);
>   compareResultAsText(results, expected);
>   }
> {code}
> AR for {{results}} variable
> {noformat}
> java.lang.AssertionError: Different elements in arrays: expected 3 elements 
> and received 2
>  expected: [1,1, 2,2, 3,3]
>  received: [1,1, 2,2] 
> Expected :3
> Actual   :2
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #6267: [FLINK-5750] Incorrect parse of brackets inside VA...

2018-07-05 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/6267#discussion_r200480534
  
--- Diff: 
flink-libraries/flink-table/src/test/java/org/apache/flink/table/runtime/batch/sql/JavaSqlITCase.java
 ---
@@ -73,6 +73,30 @@ public void testValues() throws Exception {
compareResultAsText(results, expected);
}
 
+   @Test
+   public void testValuesWithCast() throws Exception {
--- End diff --

Can you move this test to 
`org.apache.flink.table.runtime.batch.sql.SetOperatorsITCase` and also add one 
to `org.apache.flink.table.runtime.stream.sql.SetOperatorsITCase`?

In addition it would be good to have plan tests for this query in 
`org.apache.flink.table.api.batch.sql.SetOperatorsTest` and 
`org.apache.flink.table.api.stream.sql.SetOperatorsTest`.


---


[GitHub] flink pull request #6267: [FLINK-5750] Incorrect parse of brackets inside VA...

2018-07-05 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/6267#discussion_r200478263
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/nodes/dataset/DataSetUnion.scala
 ---
@@ -36,22 +39,21 @@ import scala.collection.JavaConverters._
 class DataSetUnion(
 cluster: RelOptCluster,
 traitSet: RelTraitSet,
-leftNode: RelNode,
-rightNode: RelNode,
-rowRelDataType: RelDataType)
-  extends BiRel(cluster, traitSet, leftNode, rightNode)
+inputs: JList[RelNode],
+rowRelDataType: RelDataType,
+all: Boolean)
--- End diff --

we don't need the `all` parameter because `DataSetUnion` only supports 
`UNION ALL` semantics.
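
The distinct-vs-all decision can instead stay in the conversion rule. A sketch of restricting the rule to `UNION ALL` (the `convert(...)` method is elided; names follow the existing rule conventions):

{code:scala}
class DataSetUnionRule extends ConverterRule(
    classOf[LogicalUnion],
    Convention.NONE,
    DataSetConvention.INSTANCE,
    "DataSetUnionRule") {

  // Only UNION ALL is translated directly; a plain UNION is rewritten into
  // UNION ALL plus a distinct aggregation elsewhere in the rule set.
  override def matches(call: RelOptRuleCall): Boolean =
    call.rel(0).asInstanceOf[LogicalUnion].all
}
{code}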


---


[jira] [Assigned] (FLINK-9769) Job submission via WebUI broken

2018-07-05 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reassigned FLINK-9769:
---

Assignee: Chesnay Schepler

> Job submission via WebUI broken
> ---
>
> Key: FLINK-9769
> URL: https://issues.apache.org/jira/browse/FLINK-9769
> Project: Flink
>  Issue Type: Bug
>  Components: Job-Submission, REST, Webfrontend
>Affects Versions: 1.5.1, 1.6.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
> Fix For: 1.5.1, 1.6.0
>
>
> The rework of the {{FileUploadHandler}} apparently broke the Web UI job 
> submission.
> It would be great if someone could check whether this also occurs on 1.6.
> {code}
> 2018-07-05 21:55:09,297 ERROR 
> org.apache.flink.runtime.rest.handler.job.JobsOverviewHandler  - Request 
> processing failed.
> java.nio.file.NoSuchFileException: 
> C:\Users\Zento\AppData\Local\Temp\flink-web-2c7cae9f-e2d0-4a0e-8696-ef6894238a2e\flink-web-upload\b002df81-2d6f-4727-ae6e-aaa20be22b3b
>   at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:79)
>   at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
>   at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
>   at 
> sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:53)
>   at 
> sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:38)
>   at 
> sun.nio.fs.WindowsFileSystemProvider.readAttributes(WindowsFileSystemProvider.java:193)
>   at java.nio.file.Files.readAttributes(Files.java:1737)
>   at java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:219)
>   at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
>   at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:322)
>   at java.nio.file.Files.walkFileTree(Files.java:2662)
>   at java.nio.file.Files.walkFileTree(Files.java:2742)
>   at 
> org.apache.flink.runtime.rest.handler.FileUploads.getUploadedFiles(FileUploads.java:68)
>   at 
> org.apache.flink.runtime.rest.AbstractHandler.respondAsLeader(AbstractHandler.java:107)
>   at 
> org.apache.flink.runtime.rest.handler.RedirectHandler.lambda$null$0(RedirectHandler.java:139)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
>   at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>   at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
>   at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
>   at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
>   at java.lang.Thread.run(Thread.java:745)
> 2018-07-05 21:55:09,485 ERROR 
> org.apache.flink.runtime.webmonitor.handlers.JarListHandler   - Request 
> processing failed.
> java.nio.file.NoSuchFileException: 
> C:\Users\Zento\AppData\Local\Temp\flink-web-2c7cae9f-e2d0-4a0e-8696-ef6894238a2e\flink-web-upload\b002df81-2d6f-4727-ae6e-aaa20be22b3b
>   at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:79)
>   at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
>   at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
>   at 
> sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:53)
>   at 
> sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:38)
>   at 
> sun.nio.fs.WindowsFileSystemProvider.readAttributes(WindowsFileSystemProvider.java:193)
>   at java.nio.file.Files.readAttributes(Files.java:1737)
>   at java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:219)
>   at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
>   at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:322)
>   at java.nio.file.Files.walkFileTree(Files.java:2662)
>   at java.nio.file.Files.walkFileTree(Files.java:2742)
>   at 
> org.apache.flink.runtime.rest.handler.FileUploads.getUploadedFiles(FileUploads.java:68)
>   at 
> org.apache.flink.runtime.rest.AbstractHandler.respondAsLeader(AbstractHandler.java:107)
>   at 
> org.apache.flink.runtime.rest.handler.RedirectHandler.lambda$null$0(RedirectHandler.java:139)
>   

[jira] [Updated] (FLINK-9769) Job submission via WebUI broken

2018-07-05 Thread Dawid Wysakowicz (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-9769:

Affects Version/s: 1.6.0

> Job submission via WebUI broken
> ---
>
> Key: FLINK-9769
> URL: https://issues.apache.org/jira/browse/FLINK-9769
> Project: Flink
>  Issue Type: Bug
>  Components: Job-Submission, REST, Webfrontend
>Affects Versions: 1.5.1, 1.6.0
>Reporter: Chesnay Schepler
>Priority: Major
> Fix For: 1.5.1, 1.6.0
>
>
> The rework of the {{FileUploadHandler}} apparently broke the Web UI job 
> submission.
> It would be great if someone could check whether this also occurs on 1.6.
> {code}
> 2018-07-05 21:55:09,297 ERROR 
> org.apache.flink.runtime.rest.handler.job.JobsOverviewHandler  - Request 
> processing failed.
> java.nio.file.NoSuchFileException: 
> C:\Users\Zento\AppData\Local\Temp\flink-web-2c7cae9f-e2d0-4a0e-8696-ef6894238a2e\flink-web-upload\b002df81-2d6f-4727-ae6e-aaa20be22b3b
>   at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:79)
>   at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
>   at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
>   at 
> sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:53)
>   at 
> sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:38)
>   at 
> sun.nio.fs.WindowsFileSystemProvider.readAttributes(WindowsFileSystemProvider.java:193)
>   at java.nio.file.Files.readAttributes(Files.java:1737)
>   at java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:219)
>   at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
>   at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:322)
>   at java.nio.file.Files.walkFileTree(Files.java:2662)
>   at java.nio.file.Files.walkFileTree(Files.java:2742)
>   at 
> org.apache.flink.runtime.rest.handler.FileUploads.getUploadedFiles(FileUploads.java:68)
>   at 
> org.apache.flink.runtime.rest.AbstractHandler.respondAsLeader(AbstractHandler.java:107)
>   at 
> org.apache.flink.runtime.rest.handler.RedirectHandler.lambda$null$0(RedirectHandler.java:139)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
>   at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>   at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
>   at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
>   at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
>   at java.lang.Thread.run(Thread.java:745)
> 2018-07-05 21:55:09,485 ERROR 
> org.apache.flink.runtime.webmonitor.handlers.JarListHandler   - Request 
> processing failed.
> java.nio.file.NoSuchFileException: 
> C:\Users\Zento\AppData\Local\Temp\flink-web-2c7cae9f-e2d0-4a0e-8696-ef6894238a2e\flink-web-upload\b002df81-2d6f-4727-ae6e-aaa20be22b3b
>   at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:79)
>   at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
>   at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
>   at 
> sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:53)
>   at 
> sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:38)
>   at 
> sun.nio.fs.WindowsFileSystemProvider.readAttributes(WindowsFileSystemProvider.java:193)
>   at java.nio.file.Files.readAttributes(Files.java:1737)
>   at java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:219)
>   at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
>   at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:322)
>   at java.nio.file.Files.walkFileTree(Files.java:2662)
>   at java.nio.file.Files.walkFileTree(Files.java:2742)
>   at 
> org.apache.flink.runtime.rest.handler.FileUploads.getUploadedFiles(FileUploads.java:68)
>   at 
> org.apache.flink.runtime.rest.AbstractHandler.respondAsLeader(AbstractHandler.java:107)
>   at 
> org.apache.flink.runtime.rest.handler.RedirectHandler.lambda$null$0(RedirectHandler.java:139)
>   at 
> 

[jira] [Updated] (FLINK-9769) Job submission via WebUI broken

2018-07-05 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-9769:

Fix Version/s: 1.6.0

> Job submission via WebUI broken
> ---
>
> Key: FLINK-9769
> URL: https://issues.apache.org/jira/browse/FLINK-9769
> Project: Flink
>  Issue Type: Bug
>  Components: Job-Submission, REST, Webfrontend
>Affects Versions: 1.5.1, 1.6.0
>Reporter: Chesnay Schepler
>Priority: Major
> Fix For: 1.5.1, 1.6.0
>
>
> The rework of the {{FileUploadHandler}} apparently broke the Web UI job 
> submission.
> It would be great if someone could check whether this also occurs on 1.6.
> {code}
> 2018-07-05 21:55:09,297 ERROR 
> org.apache.flink.runtime.rest.handler.job.JobsOverviewHandler  - Request 
> processing failed.
> java.nio.file.NoSuchFileException: 
> C:\Users\Zento\AppData\Local\Temp\flink-web-2c7cae9f-e2d0-4a0e-8696-ef6894238a2e\flink-web-upload\b002df81-2d6f-4727-ae6e-aaa20be22b3b
>   at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:79)
>   at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
>   at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
>   at 
> sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:53)
>   at 
> sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:38)
>   at 
> sun.nio.fs.WindowsFileSystemProvider.readAttributes(WindowsFileSystemProvider.java:193)
>   at java.nio.file.Files.readAttributes(Files.java:1737)
>   at java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:219)
>   at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
>   at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:322)
>   at java.nio.file.Files.walkFileTree(Files.java:2662)
>   at java.nio.file.Files.walkFileTree(Files.java:2742)
>   at 
> org.apache.flink.runtime.rest.handler.FileUploads.getUploadedFiles(FileUploads.java:68)
>   at 
> org.apache.flink.runtime.rest.AbstractHandler.respondAsLeader(AbstractHandler.java:107)
>   at 
> org.apache.flink.runtime.rest.handler.RedirectHandler.lambda$null$0(RedirectHandler.java:139)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
>   at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>   at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
>   at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
>   at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
>   at java.lang.Thread.run(Thread.java:745)
> 2018-07-05 21:55:09,485 ERROR 
> org.apache.flink.runtime.webmonitor.handlers.JarListHandler   - Request 
> processing failed.
> java.nio.file.NoSuchFileException: 
> C:\Users\Zento\AppData\Local\Temp\flink-web-2c7cae9f-e2d0-4a0e-8696-ef6894238a2e\flink-web-upload\b002df81-2d6f-4727-ae6e-aaa20be22b3b
>   at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:79)
>   at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
>   at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
>   at 
> sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:53)
>   at 
> sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:38)
>   at 
> sun.nio.fs.WindowsFileSystemProvider.readAttributes(WindowsFileSystemProvider.java:193)
>   at java.nio.file.Files.readAttributes(Files.java:1737)
>   at java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:219)
>   at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
>   at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:322)
>   at java.nio.file.Files.walkFileTree(Files.java:2662)
>   at java.nio.file.Files.walkFileTree(Files.java:2742)
>   at 
> org.apache.flink.runtime.rest.handler.FileUploads.getUploadedFiles(FileUploads.java:68)
>   at 
> org.apache.flink.runtime.rest.AbstractHandler.respondAsLeader(AbstractHandler.java:107)
>   at 
> org.apache.flink.runtime.rest.handler.RedirectHandler.lambda$null$0(RedirectHandler.java:139)
>   at 
> 

[jira] [Commented] (FLINK-9769) Job submission via WebUI broken

2018-07-05 Thread Dawid Wysakowicz (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534102#comment-16534102
 ] 

Dawid Wysakowicz commented on FLINK-9769:
-

I have just checked on master's HEAD (cc595354e69d4ccb08b5e839095cc50fbe76b0e8) 
and have the same problem:
{code:java}
2018-07-05 22:21:11,523 ERROR 
org.apache.flink.runtime.webmonitor.handlers.JarListHandler   - Request 
processing failed.
java.nio.file.NoSuchFileException: 
/tmp/flink-web-7c86699d-17ab-4212-83b1-33df8f645d8f/flink-web-upload/b4e73a20-c679-44de-93ba-abae49338505
at 
sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at 
sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
at 
sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
at 
sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
at java.nio.file.Files.readAttributes(Files.java:1737)
at java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:219)
at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:322)
at java.nio.file.Files.walkFileTree(Files.java:2662)
at java.nio.file.Files.walkFileTree(Files.java:2742)
at 
org.apache.flink.runtime.rest.handler.FileUploads.getUploadedFiles(FileUploads.java:68)
at 
org.apache.flink.runtime.rest.AbstractHandler.respondAsLeader(AbstractHandler.java:106)
at 
org.apache.flink.runtime.rest.handler.RedirectHandler.lambda$null$0(RedirectHandler.java:142)
at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
at 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at 
org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
2018-07-05 22:21:12,715 ERROR 
org.apache.flink.runtime.rest.handler.job.JobsOverviewHandler  - Request 
processing failed.
java.nio.file.NoSuchFileException: 
/tmp/flink-web-7c86699d-17ab-4212-83b1-33df8f645d8f/flink-web-upload/b4e73a20-c679-44de-93ba-abae49338505
at 
sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at 
sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
at 
sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
at 
sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
at java.nio.file.Files.readAttributes(Files.java:1737)
at java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:219)
at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:322)
at java.nio.file.Files.walkFileTree(Files.java:2662)
at java.nio.file.Files.walkFileTree(Files.java:2742)
at 
org.apache.flink.runtime.rest.handler.FileUploads.getUploadedFiles(FileUploads.java:68)
at 
org.apache.flink.runtime.rest.AbstractHandler.respondAsLeader(AbstractHandler.java:106)
at 
org.apache.flink.runtime.rest.handler.RedirectHandler.lambda$null$0(RedirectHandler.java:142)
at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
at 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at 

[jira] [Updated] (FLINK-5750) Incorrect translation of n-ary Union

2018-07-05 Thread Fabian Hueske (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske updated FLINK-5750:
-
Description: 
Calcite's union operator supports more than two input relations. However, 
Flink's translation rules only consider the first two relations because we 
assumed that Calcite's union is binary. 
This problem exists for batch and streaming queries.

It seems that Calcite only generates non-binary Unions in rare cases ({{(SELECT 
* FROM t) UNION ALL (SELECT * FROM t) UNION ALL (SELECT * FROM t)}} results in 
two binary union operators) but the problem definitely needs to be fixed.

The following query can be used to validate the problem. 

{code:java}
@Test
public void testValuesWithCast() throws Exception {
ExecutionEnvironment env = 
ExecutionEnvironment.getExecutionEnvironment();
BatchTableEnvironment tableEnv = 
TableEnvironment.getTableEnvironment(env, config());

String sqlQuery = "VALUES (1, cast(1 as BIGINT) )," +
"(2, cast(2 as BIGINT))," +
"(3, cast(3 as BIGINT))";
String sqlQuery2 = "VALUES (1,1)," +
"(2, 2)," +
"(3, 3)";
Table result = tableEnv.sql(sqlQuery);
DataSet<Row> resultSet = tableEnv.toDataSet(result, Row.class);
List<Row> results = resultSet.collect();

Table result2 = tableEnv.sql(sqlQuery2);
DataSet<Row> resultSet2 = tableEnv.toDataSet(result2, Row.class);
List<Row> results2 = resultSet2.collect();

String expected = "1,1\n2,2\n3,3";
compareResultAsText(results2, expected);
compareResultAsText(results, expected);
}
{code}

AR for {{results}} variable
{noformat}
java.lang.AssertionError: Different elements in arrays: expected 3 elements and 
received 2
 expected: [1,1, 2,2, 3,3]
 received: [1,1, 2,2] 
Expected :3
Actual   :2
{noformat}



  was:
Calcite's union operator supports more than two input relations. However, 
Flink's translation rules only consider the first two relations because we 
assumed that Calcite's union is binary. 
This problem exists for batch and streaming queries.

The following query can be used to validate the problem. 

{code:java}
@Test
public void testValuesWithCast() throws Exception {
ExecutionEnvironment env = 
ExecutionEnvironment.getExecutionEnvironment();
BatchTableEnvironment tableEnv = 
TableEnvironment.getTableEnvironment(env, config());

String sqlQuery = "VALUES (1, cast(1 as BIGINT) )," +
"(2, cast(2 as BIGINT))," +
"(3, cast(3 as BIGINT))";
String sqlQuery2 = "VALUES (1,1)," +
"(2, 2)," +
"(3, 3)";
Table result = tableEnv.sql(sqlQuery);
DataSet<Row> resultSet = tableEnv.toDataSet(result, Row.class);
List<Row> results = resultSet.collect();

Table result2 = tableEnv.sql(sqlQuery2);
DataSet<Row> resultSet2 = tableEnv.toDataSet(result2, Row.class);
List<Row> results2 = resultSet2.collect();

String expected = "1,1\n2,2\n3,3";
compareResultAsText(results2, expected);
compareResultAsText(results, expected);
}
{code}

AR for {{results}} variable
{noformat}
java.lang.AssertionError: Different elements in arrays: expected 3 elements and 
received 2
 expected: [1,1, 2,2, 3,3]
 received: [1,1, 2,2] 
Expected :3
Actual   :2
{noformat}


> Incorrect translation of n-ary Union
> 
>
> Key: FLINK-5750
> URL: https://issues.apache.org/jira/browse/FLINK-5750
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.2.0, 1.3.4, 1.5.0, 1.4.2, 1.6.0
>Reporter: Anton Mushin
>Assignee: Alexander Koltsov
>Priority: Critical
>  Labels: pull-request-available
>
> Calcite's union operator supports more than two input relations. However, 
> Flink's translation rules only consider the first two relations because we 
> assumed that Calcite's union is binary. 
> This problem exists for batch and streaming queries.
> It seems that Calcite only generates non-binary Unions in rare cases 
> ({{(SELECT * FROM t) UNION ALL (SELECT * FROM t) UNION ALL (SELECT * FROM 
> t)}} results in two binary union operators) but the problem definitely needs 
> to be fixed.
> The following query can be used to validate the problem. 
> {code:java}
> @Test
>   public void testValuesWithCast() throws Exception {
>   ExecutionEnvironment env = 
> ExecutionEnvironment.getExecutionEnvironment();
> 

[jira] [Updated] (FLINK-5750) Incorrect translation of n-ary Union

2018-07-05 Thread Fabian Hueske (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske updated FLINK-5750:
-
Description: 
Calcite's union operator supports more than two input relations. However, 
Flink's translation rules only consider the first two relations because we 
assumed that Calcite's union is binary. 
This problem exists for batch and streaming queries.

The following query can be used to validate the problem. 

{code:java}
@Test
public void testValuesWithCast() throws Exception {
ExecutionEnvironment env = 
ExecutionEnvironment.getExecutionEnvironment();
BatchTableEnvironment tableEnv = 
TableEnvironment.getTableEnvironment(env, config());

String sqlQuery = "VALUES (1, cast(1 as BIGINT) )," +
"(2, cast(2 as BIGINT))," +
"(3, cast(3 as BIGINT))";
String sqlQuery2 = "VALUES (1,1)," +
"(2, 2)," +
"(3, 3)";
Table result = tableEnv.sql(sqlQuery);
DataSet<Row> resultSet = tableEnv.toDataSet(result, Row.class);
List<Row> results = resultSet.collect();

Table result2 = tableEnv.sql(sqlQuery2);
DataSet<Row> resultSet2 = tableEnv.toDataSet(result2, Row.class);
List<Row> results2 = resultSet2.collect();

String expected = "1,1\n2,2\n3,3";
compareResultAsText(results2, expected);
compareResultAsText(results, expected);
}
{code}

AR for {{results}} variable
{noformat}
java.lang.AssertionError: Different elements in arrays: expected 3 elements and 
received 2
 expected: [1,1, 2,2, 3,3]
 received: [1,1, 2,2] 
Expected :3
Actual   :2
{noformat}

  was:
Calcite's union operator supports more than two input relations. However, 
Flink's translation rules only consider the first two relations because we 
assumed that Calcite's union is binary. 
This problem exists for batch and streaming queries.

The following query can be used to validate the problem. 

{code:java}
@Test
public void testValuesWithCast() throws Exception {
ExecutionEnvironment env = 
ExecutionEnvironment.getExecutionEnvironment();
BatchTableEnvironment tableEnv = 
TableEnvironment.getTableEnvironment(env, config());

String sqlQuery = "VALUES (1, cast(1 as BIGINT) )," +
"(2, cast(2 as BIGINT))," +
"(3, cast(3 as BIGINT))";
String sqlQuery2 = "VALUES (1,1)," +
"(2, 2)," +
"(3, 3)";
Table result = tableEnv.sql(sqlQuery);
DataSet<Row> resultSet = tableEnv.toDataSet(result, Row.class);
List<Row> results = resultSet.collect();

Table result2 = tableEnv.sql(sqlQuery2);
DataSet<Row> resultSet2 = tableEnv.toDataSet(result2, Row.class);
List<Row> results2 = resultSet2.collect();

String expected = "1,1\n2,2\n3,3";
compareResultAsText(results2, expected);
compareResultAsText(results, expected);
}
{code}

AR for {{results}} variable
{noformat}
java.lang.AssertionError: Different elements in arrays: expected 3 elements and 
received 2
 expected: [1,1, 2,2, 3,3]
 received: [1,1, 2,2] 
Expected :3
Actual   :2
{noformat}

It seems that 


> Incorrect translation of n-ary Union
> 
>
> Key: FLINK-5750
> URL: https://issues.apache.org/jira/browse/FLINK-5750
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.2.0, 1.3.4, 1.5.0, 1.4.2, 1.6.0
>Reporter: Anton Mushin
>Assignee: Alexander Koltsov
>Priority: Critical
>  Labels: pull-request-available
>
> Calcite's union operator supports more than two input relations. However, 
> Flink's translation rules only consider the first two relations because we 
> assumed that Calcite's union is binary. 
> This problem exists for batch and streaming queries.
> The following query can be used to validate the problem. 
> {code:java}
> @Test
>   public void testValuesWithCast() throws Exception {
>   ExecutionEnvironment env = 
> ExecutionEnvironment.getExecutionEnvironment();
>   BatchTableEnvironment tableEnv = 
> TableEnvironment.getTableEnvironment(env, config());
>   String sqlQuery = "VALUES (1, cast(1 as BIGINT) )," +
>   "(2, cast(2 as BIGINT))," +
>   "(3, cast(3 as BIGINT))";
>   String sqlQuery2 = "VALUES (1,1)," +
>   "(2, 2)," +
>   "(3, 3)";
>   Table result = tableEnv.sql(sqlQuery);
>   

[jira] [Updated] (FLINK-5750) Incorrect translation of n-ary Union

2018-07-05 Thread Fabian Hueske (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske updated FLINK-5750:
-
Description: 
Calcite's union operator supports more than two input relations. However, 
Flink's translation rules only consider the first two relations because we 
assumed that Calcite's union is binary. 
This problem exists for batch and streaming queries.

The following query can be used to validate the problem. 

{code:java}
@Test
public void testValuesWithCast() throws Exception {
ExecutionEnvironment env = 
ExecutionEnvironment.getExecutionEnvironment();
BatchTableEnvironment tableEnv = 
TableEnvironment.getTableEnvironment(env, config());

String sqlQuery = "VALUES (1, cast(1 as BIGINT) )," +
"(2, cast(2 as BIGINT))," +
"(3, cast(3 as BIGINT))";
String sqlQuery2 = "VALUES (1,1)," +
"(2, 2)," +
"(3, 3)";
Table result = tableEnv.sql(sqlQuery);
DataSet<Row> resultSet = tableEnv.toDataSet(result, Row.class);
List<Row> results = resultSet.collect();

Table result2 = tableEnv.sql(sqlQuery2);
DataSet<Row> resultSet2 = tableEnv.toDataSet(result2, Row.class);
List<Row> results2 = resultSet2.collect();

String expected = "1,1\n2,2\n3,3";
compareResultAsText(results2, expected);
compareResultAsText(results, expected);
}
{code}

AR for {{results}} variable
{noformat}
java.lang.AssertionError: Different elements in arrays: expected 3 elements and 
received 2
 expected: [1,1, 2,2, 3,3]
 received: [1,1, 2,2] 
Expected :3
Actual   :2
{noformat}

It seems that 

  was:
Calcite's union operator supports more than two input relations. However, 
Flink's translation rules only consider the first two relations because we 
assumed that Calcite's union is binary. 
This problem exists for batch and streaming queries.

The following query can be used to validate the problem. 

{code:java}
@Test
public void testValuesWithCast() throws Exception {
ExecutionEnvironment env = 
ExecutionEnvironment.getExecutionEnvironment();
BatchTableEnvironment tableEnv = 
TableEnvironment.getTableEnvironment(env, config());

String sqlQuery = "VALUES (1, cast(1 as BIGINT) )," +
"(2, cast(2 as BIGINT))," +
"(3, cast(3 as BIGINT))";
String sqlQuery2 = "VALUES (1,1)," +
"(2, 2)," +
"(3, 3)";
Table result = tableEnv.sql(sqlQuery);
DataSet<Row> resultSet = tableEnv.toDataSet(result, Row.class);
List<Row> results = resultSet.collect();

Table result2 = tableEnv.sql(sqlQuery2);
DataSet<Row> resultSet2 = tableEnv.toDataSet(result2, Row.class);
List<Row> results2 = resultSet2.collect();

String expected = "1,1\n2,2\n3,3";
compareResultAsText(results2, expected);
compareResultAsText(results, expected);
}
{code}
AR for {{results}} variable
{noformat}
java.lang.AssertionError: Different elements in arrays: expected 3 elements and 
received 2
 expected: [1,1, 2,2, 3,3]
 received: [1,1, 2,2] 
Expected :3
Actual   :2
{noformat}


> Incorrect translation of n-ary Union
> 
>
> Key: FLINK-5750
> URL: https://issues.apache.org/jira/browse/FLINK-5750
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.2.0, 1.3.4, 1.5.0, 1.4.2, 1.6.0
>Reporter: Anton Mushin
>Assignee: Alexander Koltsov
>Priority: Critical
>  Labels: pull-request-available
>
> Calcite's union operator supports more than two input relations. However, 
> Flink's translation rules only consider the first two relations because we 
> assumed that Calcite's union is binary. 
> This problem exists for batch and streaming queries.
> The following query can be used to validate the problem. 
> {code:java}
> @Test
>   public void testValuesWithCast() throws Exception {
>   ExecutionEnvironment env = 
> ExecutionEnvironment.getExecutionEnvironment();
>   BatchTableEnvironment tableEnv = 
> TableEnvironment.getTableEnvironment(env, config());
>   String sqlQuery = "VALUES (1, cast(1 as BIGINT) )," +
>   "(2, cast(2 as BIGINT))," +
>   "(3, cast(3 as BIGINT))";
>   String sqlQuery2 = "VALUES (1,1)," +
>   "(2, 2)," +
>   "(3, 3)";
>   Table result = tableEnv.sql(sqlQuery);
>   

[jira] [Updated] (FLINK-5750) Incorrect translation of n-ary Union

2018-07-05 Thread Fabian Hueske (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske updated FLINK-5750:
-
Description: 
Calcite's union operator supports more than two input relations. However, 
Flink's translation rules only consider the first two relations because we 
assumed that Calcite's union is binary. 
This problem exists for batch and streaming queries.

The following query can be used to validate the problem. 

{code:java}
@Test
public void testValuesWithCast() throws Exception {
ExecutionEnvironment env = 
ExecutionEnvironment.getExecutionEnvironment();
BatchTableEnvironment tableEnv = 
TableEnvironment.getTableEnvironment(env, config());

String sqlQuery = "VALUES (1, cast(1 as BIGINT) )," +
"(2, cast(2 as BIGINT))," +
"(3, cast(3 as BIGINT))";
String sqlQuery2 = "VALUES (1,1)," +
"(2, 2)," +
"(3, 3)";
Table result = tableEnv.sql(sqlQuery);
DataSet<Row> resultSet = tableEnv.toDataSet(result, Row.class);
List<Row> results = resultSet.collect();

Table result2 = tableEnv.sql(sqlQuery2);
DataSet<Row> resultSet2 = tableEnv.toDataSet(result2, Row.class);
List<Row> results2 = resultSet2.collect();

String expected = "1,1\n2,2\n3,3";
compareResultAsText(results2, expected);
compareResultAsText(results, expected);
}
{code}
AR for {{results}} variable
{noformat}
java.lang.AssertionError: Different elements in arrays: expected 3 elements and 
received 2
 expected: [1,1, 2,2, 3,3]
 received: [1,1, 2,2] 
Expected :3
Actual   :2
{noformat}

  was:
{code:java}
@Test
public void testValuesWithCast() throws Exception {
ExecutionEnvironment env = 
ExecutionEnvironment.getExecutionEnvironment();
BatchTableEnvironment tableEnv = 
TableEnvironment.getTableEnvironment(env, config());

String sqlQuery = "VALUES (1, cast(1 as BIGINT) )," +
"(2, cast(2 as BIGINT))," +
"(3, cast(3 as BIGINT))";
String sqlQuery2 = "VALUES (1,1)," +
"(2, 2)," +
"(3, 3)";
Table result = tableEnv.sql(sqlQuery);
DataSet<Row> resultSet = tableEnv.toDataSet(result, Row.class);
List<Row> results = resultSet.collect();

Table result2 = tableEnv.sql(sqlQuery2);
DataSet<Row> resultSet2 = tableEnv.toDataSet(result2, Row.class);
List<Row> results2 = resultSet2.collect();

String expected = "1,1\n2,2\n3,3";
compareResultAsText(results2, expected);
compareResultAsText(results, expected);
}
{code}
AR for {{results}} variable
{noformat}
java.lang.AssertionError: Different elements in arrays: expected 3 elements and 
received 2
 expected: [1,1, 2,2, 3,3]
 received: [1,1, 2,2] 
Expected :3
Actual   :2
{noformat}


> Incorrect translation of n-ary Union
> 
>
> Key: FLINK-5750
> URL: https://issues.apache.org/jira/browse/FLINK-5750
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.2.0, 1.3.4, 1.5.0, 1.4.2, 1.6.0
>Reporter: Anton Mushin
>Assignee: Alexander Koltsov
>Priority: Critical
>  Labels: pull-request-available
>
> Calcite's union operator supports more than two input relations. However, 
> Flink's translation rules only consider the first two relations because we 
> assumed that Calcite's union is binary. 
> This problem exists for batch and streaming queries.
> The following query can be used to validate the problem. 
> {code:java}
> @Test
>   public void testValuesWithCast() throws Exception {
>   ExecutionEnvironment env = 
> ExecutionEnvironment.getExecutionEnvironment();
>   BatchTableEnvironment tableEnv = 
> TableEnvironment.getTableEnvironment(env, config());
>   String sqlQuery = "VALUES (1, cast(1 as BIGINT) )," +
>   "(2, cast(2 as BIGINT))," +
>   "(3, cast(3 as BIGINT))";
>   String sqlQuery2 = "VALUES (1,1)," +
>   "(2, 2)," +
>   "(3, 3)";
>   Table result = tableEnv.sql(sqlQuery);
>   DataSet<Row> resultSet = tableEnv.toDataSet(result, Row.class);
>   List<Row> results = resultSet.collect();
>   Table result2 = tableEnv.sql(sqlQuery2);
>   DataSet<Row> resultSet2 = tableEnv.toDataSet(result2, Row.class);
>   List<Row> results2 = resultSet2.collect();
>   String 

[jira] [Updated] (FLINK-5750) Incorrect translation of n-ary Union

2018-07-05 Thread Fabian Hueske (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske updated FLINK-5750:
-
Affects Version/s: 1.6.0
   1.3.4
   1.5.0
   1.4.2

> Incorrect translation of n-ary Union
> 
>
> Key: FLINK-5750
> URL: https://issues.apache.org/jira/browse/FLINK-5750
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.2.0, 1.3.4, 1.5.0, 1.4.2, 1.6.0
>Reporter: Anton Mushin
>Assignee: Alexander Koltsov
>Priority: Critical
>  Labels: pull-request-available
>
> Calcite's union operator supports more than two input relations. However, 
> Flink's translation rules only consider the first two relations because we 
> assumed that Calcite's union is binary. 
> This problem exists for batch and streaming queries.
> The following query can be used to validate the problem. 
> {code:java}
> @Test
>   public void testValuesWithCast() throws Exception {
>   ExecutionEnvironment env = 
> ExecutionEnvironment.getExecutionEnvironment();
>   BatchTableEnvironment tableEnv = 
> TableEnvironment.getTableEnvironment(env, config());
>   String sqlQuery = "VALUES (1, cast(1 as BIGINT) )," +
>   "(2, cast(2 as BIGINT))," +
>   "(3, cast(3 as BIGINT))";
>   String sqlQuery2 = "VALUES (1,1)," +
>   "(2, 2)," +
>   "(3, 3)";
>   Table result = tableEnv.sql(sqlQuery);
>   DataSet<Row> resultSet = tableEnv.toDataSet(result, Row.class);
>   List<Row> results = resultSet.collect();
>   Table result2 = tableEnv.sql(sqlQuery2);
>   DataSet<Row> resultSet2 = tableEnv.toDataSet(result2, Row.class);
>   List<Row> results2 = resultSet2.collect();
>   String expected = "1,1\n2,2\n3,3";
>   compareResultAsText(results2, expected);
>   compareResultAsText(results, expected);
>   }
> {code}
> AR for {{results}} variable
> {noformat}
> java.lang.AssertionError: Different elements in arrays: expected 3 elements 
> and received 2
>  expected: [1,1, 2,2, 3,3]
>  received: [1,1, 2,2] 
> Expected :3
> Actual   :2
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-5750) Incorrect translation of n-ary Union

2018-07-05 Thread Fabian Hueske (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske updated FLINK-5750:
-
Priority: Critical  (was: Minor)

> Incorrect translation of n-ary Union
> 
>
> Key: FLINK-5750
> URL: https://issues.apache.org/jira/browse/FLINK-5750
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.2.0
>Reporter: Anton Mushin
>Assignee: Alexander Koltsov
>Priority: Critical
>  Labels: pull-request-available
>
> {code:java}
> @Test
>   public void testValuesWithCast() throws Exception {
>   ExecutionEnvironment env = 
> ExecutionEnvironment.getExecutionEnvironment();
>   BatchTableEnvironment tableEnv = 
> TableEnvironment.getTableEnvironment(env, config());
>   String sqlQuery = "VALUES (1, cast(1 as BIGINT) )," +
>   "(2, cast(2 as BIGINT))," +
>   "(3, cast(3 as BIGINT))";
>   String sqlQuery2 = "VALUES (1,1)," +
>   "(2, 2)," +
>   "(3, 3)";
>   Table result = tableEnv.sql(sqlQuery);
>   DataSet<Row> resultSet = tableEnv.toDataSet(result, Row.class);
>   List<Row> results = resultSet.collect();
>   Table result2 = tableEnv.sql(sqlQuery2);
>   DataSet<Row> resultSet2 = tableEnv.toDataSet(result2, Row.class);
>   List<Row> results2 = resultSet2.collect();
>   String expected = "1,1\n2,2\n3,3";
>   compareResultAsText(results2, expected);
>   compareResultAsText(results, expected);
>   }
> {code}
> AR for {{results}} variable
> {noformat}
> java.lang.AssertionError: Different elements in arrays: expected 3 elements 
> and received 2
>  expected: [1,1, 2,2, 3,3]
>  received: [1,1, 2,2] 
> Expected :3
> Actual   :2
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-5750) Incorrect translation of n-ary Union

2018-07-05 Thread Fabian Hueske (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske updated FLINK-5750:
-
Summary: Incorrect translation of n-ary Union  (was: Incorrect parse of 
brackets inside VALUES subquery)

> Incorrect translation of n-ary Union
> 
>
> Key: FLINK-5750
> URL: https://issues.apache.org/jira/browse/FLINK-5750
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.2.0
>Reporter: Anton Mushin
>Assignee: Alexander Koltsov
>Priority: Minor
>  Labels: pull-request-available
>
> {code:java}
> @Test
>   public void testValuesWithCast() throws Exception {
>   ExecutionEnvironment env = 
> ExecutionEnvironment.getExecutionEnvironment();
>   BatchTableEnvironment tableEnv = 
> TableEnvironment.getTableEnvironment(env, config());
>   String sqlQuery = "VALUES (1, cast(1 as BIGINT) )," +
>   "(2, cast(2 as BIGINT))," +
>   "(3, cast(3 as BIGINT))";
>   String sqlQuery2 = "VALUES (1,1)," +
>   "(2, 2)," +
>   "(3, 3)";
>   Table result = tableEnv.sql(sqlQuery);
>   DataSet<Row> resultSet = tableEnv.toDataSet(result, Row.class);
>   List<Row> results = resultSet.collect();
>   Table result2 = tableEnv.sql(sqlQuery2);
>   DataSet<Row> resultSet2 = tableEnv.toDataSet(result2, Row.class);
>   List<Row> results2 = resultSet2.collect();
>   String expected = "1,1\n2,2\n3,3";
>   compareResultAsText(results2, expected);
>   compareResultAsText(results, expected);
>   }
> {code}
> AR for {{results}} variable
> {noformat}
> java.lang.AssertionError: Different elements in arrays: expected 3 elements 
> and received 2
>  expected: [1,1, 2,2, 3,3]
>  received: [1,1, 2,2] 
> Expected :3
> Actual   :2
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-9769) Job submission via WebUI broken

2018-07-05 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-9769:
---

 Summary: Job submission via WebUI broken
 Key: FLINK-9769
 URL: https://issues.apache.org/jira/browse/FLINK-9769
 Project: Flink
  Issue Type: Bug
  Components: Job-Submission, REST, Webfrontend
Affects Versions: 1.5.1
Reporter: Chesnay Schepler
 Fix For: 1.5.1


The rework of the {{FileUploadHandler}} apparently broke the Web UI job 
submission.

It would be great if someone could check whether this also occurs on 1.6.
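
The stack traces below show {{FileUploads.getUploadedFiles}} walking a per-request upload directory that was never created. A minimal defensive sketch, purely illustrative (the {{walkUploadDir}} helper is hypothetical, and the actual fix may differ):

{code:scala}
import java.nio.file.{Files, Path}

def getUploadedFiles(uploadDir: Path): java.util.Collection[java.io.File] = {
  // Requests without file uploads never create the directory, so guard the
  // walk instead of letting Files.walkFileTree throw NoSuchFileException.
  if (!Files.exists(uploadDir)) {
    java.util.Collections.emptyList[java.io.File]()
  } else {
    walkUploadDir(uploadDir) // hypothetical: the existing walkFileTree-based traversal
  }
}
{code}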

{code}

2018-07-05 21:55:09,297 ERROR 
org.apache.flink.runtime.rest.handler.job.JobsOverviewHandler  - Request 
processing failed.
java.nio.file.NoSuchFileException: 
C:\Users\Zento\AppData\Local\Temp\flink-web-2c7cae9f-e2d0-4a0e-8696-ef6894238a2e\flink-web-upload\b002df81-2d6f-4727-ae6e-aaa20be22b3b
at 
sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:79)
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
at 
sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:53)
at 
sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:38)
at 
sun.nio.fs.WindowsFileSystemProvider.readAttributes(WindowsFileSystemProvider.java:193)
at java.nio.file.Files.readAttributes(Files.java:1737)
at java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:219)
at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:322)
at java.nio.file.Files.walkFileTree(Files.java:2662)
at java.nio.file.Files.walkFileTree(Files.java:2742)
at 
org.apache.flink.runtime.rest.handler.FileUploads.getUploadedFiles(FileUploads.java:68)
at 
org.apache.flink.runtime.rest.AbstractHandler.respondAsLeader(AbstractHandler.java:107)
at 
org.apache.flink.runtime.rest.handler.RedirectHandler.lambda$null$0(RedirectHandler.java:139)
at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
at 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
at 
org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at 
org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
2018-07-05 21:55:09,485 ERROR 
org.apache.flink.runtime.webmonitor.handlers.JarListHandler   - Request 
processing failed.
java.nio.file.NoSuchFileException: 
C:\Users\Zento\AppData\Local\Temp\flink-web-2c7cae9f-e2d0-4a0e-8696-ef6894238a2e\flink-web-upload\b002df81-2d6f-4727-ae6e-aaa20be22b3b
at 
sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:79)
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
at 
sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:53)
at 
sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:38)
at 
sun.nio.fs.WindowsFileSystemProvider.readAttributes(WindowsFileSystemProvider.java:193)
at java.nio.file.Files.readAttributes(Files.java:1737)
at java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:219)
at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:322)
at java.nio.file.Files.walkFileTree(Files.java:2662)
at java.nio.file.Files.walkFileTree(Files.java:2742)
at 
org.apache.flink.runtime.rest.handler.FileUploads.getUploadedFiles(FileUploads.java:68)
at 
org.apache.flink.runtime.rest.AbstractHandler.respondAsLeader(AbstractHandler.java:107)
at 
org.apache.flink.runtime.rest.handler.RedirectHandler.lambda$null$0(RedirectHandler.java:139)
at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
at 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
at 

[jira] [Commented] (FLINK-8094) Support other types for ExistingField rowtime extractor

2018-07-05 Thread Fabian Hueske (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534085#comment-16534085
 ] 

Fabian Hueske commented on FLINK-8094:
--

Thanks for the contribution [~kabhwan]!
Btw., I've given you Contributor permissions for Jira. You can now assign issues 
to yourself if you want to.

Best, Fabian

> Support other types for ExistingField rowtime extractor
> ---
>
> Key: FLINK-8094
> URL: https://issues.apache.org/jira/browse/FLINK-8094
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Xingcan Cui
>Assignee: Jungtaek Lim
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.0
>
>
> Currently, the {{ExistingField}} rowtime extractor only supports {{Long}} and 
> {{Timestamp}} fields. To enable other data types (e.g., {{String}}), we can 
> provide some system extraction functions and allow users to pass some 
> parameters via the constructor of {{ExistingField}}. There's [a simple 
> demo|https://github.com/xccui/flink/commit/afcc5f1a0ad92db08294199e61be5df72c1514f8]
>  which enables the {{String}} type rowtime by adding a UDF {{str2EventTime}}.
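
For reference, a hedged sketch of how {{ExistingField}} is wired into a custom {{TableSource}}; with this improvement the referenced field could be a {{String}} column (the field and class names here are made up):

{code:scala}
class MyTableSource extends StreamTableSource[Row] with DefinedRowtimeAttributes {

  override def getRowtimeAttributeDescriptors: JList[RowtimeAttributeDescriptor] =
    Collections.singletonList(
      new RowtimeAttributeDescriptor(
        "rowtime",
        new ExistingField("eventTimeStr"), // a STRING column, after this change
        new AscendingTimestamps))

  // getDataStream / getReturnType / getTableSchema omitted for brevity.
}
{code}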



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-9681) Make sure minRetentionTime not equal to maxRetentionTime

2018-07-05 Thread Fabian Hueske (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske closed FLINK-9681.

   Resolution: Implemented
Fix Version/s: 1.6.0

Implemented for 1.6.0 with cfd0206b39b08691b832ea6324e02a5bd3a1533e

> Make sure minRetentionTime not equal to maxRetentionTime
> 
>
> Key: FLINK-9681
> URL: https://issues.apache.org/jira/browse/FLINK-9681
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Hequn Cheng
>Assignee: Hequn Cheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.0
>
>
> Currently, for a group by (or another operator), if minRetentionTime equals 
> maxRetentionTime, the group by operator will register a timer for each record 
> that arrives at a different time, which causes performance problems. The 
> reasoning for having two parameters is that we can avoid registering many 
> timers if we have more freedom about when to discard state. Since min equal 
> to max causes this performance problem, it is better to make sure the two 
> parameters are not the same.
> Any suggestions are welcome.
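
For illustration, the reason min < max coalesces timers: a simplified sketch of the cleanup-timer registration (close to, but not exactly, the helper used by the operators):

{code:scala}
def registerCleanupTimer(timerService: TimerService, currentTime: Long): Unit = {
  val curCleanupTime = cleanupTimeState.value()
  // Re-register only when the pending timer would fire too early. With
  // min < max, many records share one timer; with min == max, the condition
  // holds for every record and a new timer is registered each time.
  if (curCleanupTime == null || currentTime + minRetentionTime > curCleanupTime) {
    val cleanupTime = currentTime + maxRetentionTime
    timerService.registerProcessingTimeTimer(cleanupTime)
    cleanupTimeState.update(cleanupTime)
  }
}
{code}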



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-8094) Support other types for ExistingField rowtime extractor

2018-07-05 Thread Fabian Hueske (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-8094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske closed FLINK-8094.

   Resolution: Implemented
Fix Version/s: 1.6.0

Implemented for 1.6.0 with cc595354e69d4ccb08b5e839095cc50fbe76b0e8

> Support other types for ExistingField rowtime extractor
> ---
>
> Key: FLINK-8094
> URL: https://issues.apache.org/jira/browse/FLINK-8094
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Xingcan Cui
>Assignee: Jungtaek Lim
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.0
>
>
> Currently, the {{ExistingField}} rowtime extractor only supports {{Long}} and 
> {{Timestamp}} fields. To enable other data types (e.g., {{String}}), we can 
> provide some system extraction functions and allow users to pass some 
> parameters via the constructor of {{ExistingField}}. There's [a simple 
> demo|https://github.com/xccui/flink/commit/afcc5f1a0ad92db08294199e61be5df72c1514f8]
>  which enables the {{String}} type rowtime by adding a UDF {{str2EventTime}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-8094) Support other types for ExistingField rowtime extractor

2018-07-05 Thread Fabian Hueske (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-8094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske reassigned FLINK-8094:


Assignee: Jungtaek Lim

> Support other types for ExistingField rowtime extractor
> ---
>
> Key: FLINK-8094
> URL: https://issues.apache.org/jira/browse/FLINK-8094
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Xingcan Cui
>Assignee: Jungtaek Lim
>Priority: Major
>  Labels: pull-request-available
>
> Currently, the {{ExistingField}} rowtime extractor only supports {{Long}} and 
> {{Timestamp}} fields. To enable other data types (e.g., {{String}}), we can 
> provide some system extraction functions and allow users to pass some 
> parameters via the constructor of {{ExistingField}}. There's [a simple 
> demo|https://github.com/xccui/flink/commit/afcc5f1a0ad92db08294199e61be5df72c1514f8]
>  which enables the {{String}} type rowtime by adding a UDF {{str2EventTime}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-9742) Expose Expression.resultType to public

2018-07-05 Thread Fabian Hueske (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske closed FLINK-9742.

   Resolution: Fixed
Fix Version/s: 1.6.0

Fixed for 1.6.0 with 5cb080cd785658fcb817a00f51e12d6fcbc78b33

> Expose Expression.resultType to public
> --
>
> Key: FLINK-9742
> URL: https://issues.apache.org/jira/browse/FLINK-9742
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API  SQL
>Affects Versions: 1.5.0
>Reporter: Jungtaek Lim
>Assignee: Jungtaek Lim
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.0
>
>
> I have a use case of a TableSource which requires a custom implementation of 
> TimestampExtractor. To allow the new TimestampExtractor to cover more general 
> use cases, access to Expression.resultType is necessary, but its scope is 
> currently defined as package private for "org.apache.flink".
> Below is the implementation of a custom TimestampExtractor which leverages 
> Expression.resultType; it therefore had to be placed in the org.apache.flink 
> package (which looks like a hack).
> {code:scala}
> class IsoDateStringAwareExistingField(val field: String) extends 
> TimestampExtractor {
>
>   override def getArgumentFields: Array[String] = Array(field)
>
>   override def validateArgumentFields(
>       argumentFieldTypes: Array[TypeInformation[_]]): Unit = {
>     val fieldType = argumentFieldTypes(0)
>     fieldType match {
>       case Types.LONG => // OK
>       case Types.SQL_TIMESTAMP => // OK
>       case Types.STRING => // OK
>       case _: TypeInformation[_] =>
>         throw ValidationException(
>           s"Field '$field' must be of type Long or Timestamp or String " +
>             s"but is of type $fieldType.")
>     }
>   }
>
>   override def getExpression(
>       fieldAccesses: Array[ResolvedFieldReference]): Expression = {
>     val fieldAccess: Expression = fieldAccesses(0)
>     fieldAccess.resultType match {
>       case Types.LONG =>
>         // access the LONG field directly
>         fieldAccess
>       case Types.SQL_TIMESTAMP =>
>         // cast the SQL timestamp to a long
>         Cast(fieldAccess, Types.LONG)
>       case Types.STRING =>
>         // parse the ISO date string to a timestamp, then cast to a long
>         Cast(Cast(fieldAccess, SqlTimeTypeInfo.TIMESTAMP), Types.LONG)
>     }
>   }
> }
> {code}
> It would be better to just make Expression.resultType public to cover other 
> cases as well. (I'm not sure whether other methods would also be better off 
> public.)
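> For context, a usage sketch of how such an extractor would be plugged into a 
> table source (assuming a TableSource implementing {{DefinedRowtimeAttributes}}; 
> the field name "eventTimeStr" is illustrative only):
> {code:java}
> import java.util.Collections;
> import java.util.List;
> import org.apache.flink.table.sources.RowtimeAttributeDescriptor;
> import org.apache.flink.table.sources.wmstrategies.AscendingTimestamps;
>
> @Override
> public List<RowtimeAttributeDescriptor> getRowtimeAttributeDescriptors() {
>     return Collections.singletonList(
>         new RowtimeAttributeDescriptor(
>             "rowtime",
>             new IsoDateStringAwareExistingField("eventTimeStr"),
>             new AscendingTimestamps()));
> }
> {code}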



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-9742) Expose Expression.resultType to public

2018-07-05 Thread Fabian Hueske (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske reassigned FLINK-9742:


Assignee: Jungtaek Lim

> Expose Expression.resultType to public
> --
>
> Key: FLINK-9742
> URL: https://issues.apache.org/jira/browse/FLINK-9742
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API  SQL
>Affects Versions: 1.5.0
>Reporter: Jungtaek Lim
>Assignee: Jungtaek Lim
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.0
>
>
> I have a use case of a TableSource which requires a custom implementation of 
> TimestampExtractor. To allow the new TimestampExtractor to cover more general 
> use cases, access to Expression.resultType is necessary, but its scope is 
> currently defined as package private for "org.apache.flink".
> Below is the implementation of a custom TimestampExtractor which leverages 
> Expression.resultType; it therefore had to be placed in the org.apache.flink 
> package (which looks like a hack).
> {code:scala}
> class IsoDateStringAwareExistingField(val field: String) extends 
> TimestampExtractor {
>
>   override def getArgumentFields: Array[String] = Array(field)
>
>   override def validateArgumentFields(
>       argumentFieldTypes: Array[TypeInformation[_]]): Unit = {
>     val fieldType = argumentFieldTypes(0)
>     fieldType match {
>       case Types.LONG => // OK
>       case Types.SQL_TIMESTAMP => // OK
>       case Types.STRING => // OK
>       case _: TypeInformation[_] =>
>         throw ValidationException(
>           s"Field '$field' must be of type Long or Timestamp or String " +
>             s"but is of type $fieldType.")
>     }
>   }
>
>   override def getExpression(
>       fieldAccesses: Array[ResolvedFieldReference]): Expression = {
>     val fieldAccess: Expression = fieldAccesses(0)
>     fieldAccess.resultType match {
>       case Types.LONG =>
>         // access the LONG field directly
>         fieldAccess
>       case Types.SQL_TIMESTAMP =>
>         // cast the SQL timestamp to a long
>         Cast(fieldAccess, Types.LONG)
>       case Types.STRING =>
>         // parse the ISO date string to a timestamp, then cast to a long
>         Cast(Cast(fieldAccess, SqlTimeTypeInfo.TIMESTAMP), Types.LONG)
>     }
>   }
> }
> {code}
> It would be better to just make Expression.resultType public to cover other 
> cases as well. (I'm not sure whether other methods would also be better off 
> public.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (FLINK-9742) Expose Expression.resultType to public

2018-07-05 Thread Fabian Hueske (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske reopened FLINK-9742:
--

> Expose Expression.resultType to public
> --
>
> Key: FLINK-9742
> URL: https://issues.apache.org/jira/browse/FLINK-9742
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API  SQL
>Affects Versions: 1.5.0
>Reporter: Jungtaek Lim
>Priority: Major
>  Labels: pull-request-available
>
> I have a use case of a TableSource which requires a custom implementation of 
> TimestampExtractor. To allow the new TimestampExtractor to cover more general 
> use cases, access to Expression.resultType is necessary, but its scope is 
> currently defined as package private for "org.apache.flink".
> Below is the implementation of a custom TimestampExtractor which leverages 
> Expression.resultType; it therefore had to be placed in the org.apache.flink 
> package (which looks like a hack).
> {code:scala}
> class IsoDateStringAwareExistingField(val field: String) extends 
> TimestampExtractor {
>
>   override def getArgumentFields: Array[String] = Array(field)
>
>   override def validateArgumentFields(
>       argumentFieldTypes: Array[TypeInformation[_]]): Unit = {
>     val fieldType = argumentFieldTypes(0)
>     fieldType match {
>       case Types.LONG => // OK
>       case Types.SQL_TIMESTAMP => // OK
>       case Types.STRING => // OK
>       case _: TypeInformation[_] =>
>         throw ValidationException(
>           s"Field '$field' must be of type Long or Timestamp or String " +
>             s"but is of type $fieldType.")
>     }
>   }
>
>   override def getExpression(
>       fieldAccesses: Array[ResolvedFieldReference]): Expression = {
>     val fieldAccess: Expression = fieldAccesses(0)
>     fieldAccess.resultType match {
>       case Types.LONG =>
>         // access the LONG field directly
>         fieldAccess
>       case Types.SQL_TIMESTAMP =>
>         // cast the SQL timestamp to a long
>         Cast(fieldAccess, Types.LONG)
>       case Types.STRING =>
>         // parse the ISO date string to a timestamp, then cast to a long
>         Cast(Cast(fieldAccess, SqlTimeTypeInfo.TIMESTAMP), Types.LONG)
>     }
>   }
> }
> {code}
> It would be better to just make Expression.resultType public to cover other 
> cases as well. (I'm not sure whether other methods would also be better off 
> public.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-9742) Expose Expression.resultType to public

2018-07-05 Thread Fabian Hueske (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske closed FLINK-9742.

Resolution: Fixed

> Expose Expression.resultType to public
> --
>
> Key: FLINK-9742
> URL: https://issues.apache.org/jira/browse/FLINK-9742
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API  SQL
>Affects Versions: 1.5.0
>Reporter: Jungtaek Lim
>Priority: Major
>  Labels: pull-request-available
>
> I have a use case of a TableSource which requires a custom implementation of 
> TimestampExtractor. To allow the new TimestampExtractor to cover more general 
> use cases, access to Expression.resultType is necessary, but its scope is 
> currently defined as package private for "org.apache.flink".
> Below is the implementation of a custom TimestampExtractor which leverages 
> Expression.resultType; it therefore had to be placed in the org.apache.flink 
> package (which looks like a hack).
> {code:scala}
> class IsoDateStringAwareExistingField(val field: String) extends 
> TimestampExtractor {
>
>   override def getArgumentFields: Array[String] = Array(field)
>
>   override def validateArgumentFields(
>       argumentFieldTypes: Array[TypeInformation[_]]): Unit = {
>     val fieldType = argumentFieldTypes(0)
>     fieldType match {
>       case Types.LONG => // OK
>       case Types.SQL_TIMESTAMP => // OK
>       case Types.STRING => // OK
>       case _: TypeInformation[_] =>
>         throw ValidationException(
>           s"Field '$field' must be of type Long or Timestamp or String " +
>             s"but is of type $fieldType.")
>     }
>   }
>
>   override def getExpression(
>       fieldAccesses: Array[ResolvedFieldReference]): Expression = {
>     val fieldAccess: Expression = fieldAccesses(0)
>     fieldAccess.resultType match {
>       case Types.LONG =>
>         // access the LONG field directly
>         fieldAccess
>       case Types.SQL_TIMESTAMP =>
>         // cast the SQL timestamp to a long
>         Cast(fieldAccess, Types.LONG)
>       case Types.STRING =>
>         // parse the ISO date string to a timestamp, then cast to a long
>         Cast(Cast(fieldAccess, SqlTimeTypeInfo.TIMESTAMP), Types.LONG)
>     }
>   }
> }
> {code}
> It would be better to just make Expression.resultType public to cover other 
> cases as well. (I'm not sure whether other methods would also be better off 
> public.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-9581) Redundant spaces for Collect at sql.md

2018-07-05 Thread Fabian Hueske (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske closed FLINK-9581.

   Resolution: Fixed
Fix Version/s: 1.6.0
   1.5.1
   1.4.3

Fixed for 1.4.3 with cb4a8fa136495ec657c875e267db435fa16f479f
Fixed for 1.5.1 with f7997af4368a7b5f424ea8495849647697e1ed28
Fixed for 1.6.0 with 84fbbfe1258c6c9c9aed919946f9652f7198f96b

> Redundant spaces for Collect at sql.md
> --
>
> Key: FLINK-9581
> URL: https://issues.apache.org/jira/browse/FLINK-9581
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Table API  SQL
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 1.4.3, 1.5.1, 1.6.0
>
> Attachments: collect.png
>
>
> Can be seen at 
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sql.html
> and in the attached screenshot.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-8094) Support other types for ExistingField rowtime extractor

2018-07-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-8094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534076#comment-16534076
 ] 

ASF GitHub Bot commented on FLINK-8094:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/6253


> Support other types for ExistingField rowtime extractor
> ---
>
> Key: FLINK-8094
> URL: https://issues.apache.org/jira/browse/FLINK-8094
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API  SQL
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Xingcan Cui
>Priority: Major
>  Labels: pull-request-available
>
> Currently, the {{ExistingField}} rowtime extractor only supports {{Long}} and 
> {{Timestamp}} fields. To enable other data types (e.g., {{String}}), we can 
> provide some system extraction functions and allow users to pass some 
> parameters via the constructor of {{ExistingField}}. There's [a simple 
> demo|https://github.com/xccui/flink/commit/afcc5f1a0ad92db08294199e61be5df72c1514f8]
>  which enables the {{String}} type rowtime by adding a UDF {{str2EventTime}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9681) Make sure minRetentionTime not equal to maxRetentionTime

2018-07-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534073#comment-16534073
 ] 

ASF GitHub Bot commented on FLINK-9681:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/6255


> Make sure minRetentionTime not equal to maxRetentionTime
> 
>
> Key: FLINK-9681
> URL: https://issues.apache.org/jira/browse/FLINK-9681
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API  SQL
>Reporter: Hequn Cheng
>Assignee: Hequn Cheng
>Priority: Major
>  Labels: pull-request-available
>
> Currently, for a group by (or other operators), if minRetentionTime equals 
> maxRetentionTime, the group by operator will register a timer for each record 
> arriving at a different time, which causes a performance problem. The 
> reasoning for having two parameters is that we can avoid registering many 
> timers if we have more freedom about when to discard state. Since min equal 
> to max causes a performance problem, it is better to make sure these two 
> parameters are not the same.
> Any suggestions are welcome.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #6252: [FLINK-9742][Table API & SQL] Expose Expression.re...

2018-07-05 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/6252


---


[jira] [Commented] (FLINK-9581) Redundant spaces for Collect at sql.md

2018-07-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534074#comment-16534074
 ] 

ASF GitHub Bot commented on FLINK-9581:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/6161


> Redundant spaces for Collect at sql.md
> --
>
> Key: FLINK-9581
> URL: https://issues.apache.org/jira/browse/FLINK-9581
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Table API  SQL
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Trivial
>  Labels: pull-request-available
> Attachments: collect.png
>
>
> Can be seen at 
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sql.html
> and in the attached screenshot.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #6255: [FLINK-9681] [table] Make sure difference between ...

2018-07-05 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/6255


---


[jira] [Commented] (FLINK-9742) Expose Expression.resultType to public

2018-07-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534075#comment-16534075
 ] 

ASF GitHub Bot commented on FLINK-9742:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/6252


> Expose Expression.resultType to public
> --
>
> Key: FLINK-9742
> URL: https://issues.apache.org/jira/browse/FLINK-9742
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API  SQL
>Affects Versions: 1.5.0
>Reporter: Jungtaek Lim
>Priority: Major
>  Labels: pull-request-available
>
> I have a use case of a TableSource which requires a custom implementation of 
> TimestampExtractor. To allow the new TimestampExtractor to cover more general 
> use cases, access to Expression.resultType is necessary, but its scope is 
> currently defined as package private for "org.apache.flink".
> Below is the implementation of a custom TimestampExtractor which leverages 
> Expression.resultType; it therefore had to be placed in the org.apache.flink 
> package (which looks like a hack).
> {code:scala}
> class IsoDateStringAwareExistingField(val field: String) extends 
> TimestampExtractor {
>
>   override def getArgumentFields: Array[String] = Array(field)
>
>   override def validateArgumentFields(
>       argumentFieldTypes: Array[TypeInformation[_]]): Unit = {
>     val fieldType = argumentFieldTypes(0)
>     fieldType match {
>       case Types.LONG => // OK
>       case Types.SQL_TIMESTAMP => // OK
>       case Types.STRING => // OK
>       case _: TypeInformation[_] =>
>         throw ValidationException(
>           s"Field '$field' must be of type Long or Timestamp or String " +
>             s"but is of type $fieldType.")
>     }
>   }
>
>   override def getExpression(
>       fieldAccesses: Array[ResolvedFieldReference]): Expression = {
>     val fieldAccess: Expression = fieldAccesses(0)
>     fieldAccess.resultType match {
>       case Types.LONG =>
>         // access the LONG field directly
>         fieldAccess
>       case Types.SQL_TIMESTAMP =>
>         // cast the SQL timestamp to a long
>         Cast(fieldAccess, Types.LONG)
>       case Types.STRING =>
>         // parse the ISO date string to a timestamp, then cast to a long
>         Cast(Cast(fieldAccess, SqlTimeTypeInfo.TIMESTAMP), Types.LONG)
>     }
>   }
> }
> {code}
> It would be better to just make Expression.resultType public to cover other 
> cases as well. (I'm not sure whether other methods would also be better off 
> public.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #6161: [hotfix] [docs][FLINK-9581] Typo: extra spaces rem...

2018-07-05 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/6161


---


[GitHub] flink pull request #6253: [FLINK-8094][Table API & SQL] Support other types ...

2018-07-05 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/6253


---


[GitHub] flink issue #6161: [hotfix] [docs][FLINK-9581] Typo: extra spaces removed to...

2018-07-05 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/6161
  
Documentation fixes are usually not critical to include in a release 
because the docs are always built from the most recent release branch, so 
documentation changes that did not make it into a release will still be 
published shortly after being committed.

I'll merge this PR.

Btw. it is OK to create a hotfix (i.e., a PR without creating a JIRA issue) 
for minor fixes like this.

Thanks, Fabian


---


[jira] [Commented] (FLINK-9581) Redundant spaces for Collect at sql.md

2018-07-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16534070#comment-16534070
 ] 

ASF GitHub Bot commented on FLINK-9581:
---

Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/6161
  
Documentation fixes are usually not critical to include in a release 
because the docs are always built from the most recent release branch, so 
documentation changes that did not make it into a release will still be 
published shortly after being committed.

I'll merge this PR.

Btw. it is OK to create a hotfix (i.e., a PR without creating a JIRA issue) 
for minor fixes like this.

Thanks, Fabian


> Redundant spaces for Collect at sql.md
> --
>
> Key: FLINK-9581
> URL: https://issues.apache.org/jira/browse/FLINK-9581
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Table API  SQL
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Trivial
>  Labels: pull-request-available
> Attachments: collect.png
>
>
> Can be seen at 
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sql.html
> and in the attached screenshot.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9682) Add setDescription to execution environment and display it in the UI

2018-07-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16533980#comment-16533980
 ] 

ASF GitHub Bot commented on FLINK-9682:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/6266
  
Travis failed, please investigate.


> Add setDescription to execution environment and display it in the UI
> 
>
> Key: FLINK-9682
> URL: https://issues.apache.org/jira/browse/FLINK-9682
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API, Webfrontend
>Affects Versions: 1.5.0
>Reporter: Elias Levy
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
>
> Currently you can provide a job name to {{execute}} in the execution 
> environment. In an environment where many versions of a job may be executing, 
> such as a development or test environment, identifying which running job is 
> of a specific version via the UI can be difficult unless the version is 
> embedded into the job name given to {{execute}}. But the job name is used 
> for other purposes, such as for namespacing metrics. Thus, it is not ideal 
> to modify the job name, as that could require modifying metric dashboards and 
> monitors each time versions change.
> I propose a new method be added to the execution environment, 
> {{setDescription}}, that would allow a user to pass in an arbitrary 
> description that would be displayed in the dashboard, allowing users to 
> distinguish jobs.
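> A minimal sketch of the proposed usage (the method does not exist yet; its 
> name comes from this proposal and the signature is an assumption):
> {code:java}
> StreamExecutionEnvironment env =
>     StreamExecutionEnvironment.getExecutionEnvironment();
>
> // proposed: free-form text shown only in the web UI
> env.setDescription("build 2018-07-05, A/B candidate");
>
> // the job name stays stable, so metric namespaces are unaffected
> env.execute("my-job");
> {code}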



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink issue #6266: [FLINK-9682] Add setDescription to execution environment ...

2018-07-05 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/6266
  
Travis failed, please investigate.


---


[jira] [Created] (FLINK-9768) Only build flink-dist for binary releases

2018-07-05 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-9768:
---

 Summary: Only build flink-dist for binary releases
 Key: FLINK-9768
 URL: https://issues.apache.org/jira/browse/FLINK-9768
 Project: Flink
  Issue Type: Improvement
  Components: Release System
Affects Versions: 1.5.0, 1.6.0
Reporter: Chesnay Schepler


To speed up the release process for the convenience binaries, I propose to 
only build flink-dist and its required modules (including 
flink-shaded-hadoop2-uber), as only this module is actually needed.

We can also look into skipping the compilation of tests and disabling the 
checkstyle plugin.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9730) avoid access static via class reference

2018-07-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16533952#comment-16533952
 ] 

ASF GitHub Bot commented on FLINK-9730:
---

Github user nekrassov commented on a diff in the pull request:

https://github.com/apache/flink/pull/6247#discussion_r200434401
  
--- Diff: 
flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/async/AsyncIOExample.java
 ---
@@ -116,8 +116,8 @@ public void cancel() {
private static class SampleAsyncFunction extends 
RichAsyncFunction<Integer, String> {
private static final long serialVersionUID = 
2098635244857937717L;
 
-   private static ExecutorService executorService;
-   private static Random random;
--- End diff --

What makes it a singleton?.. 
Nothing stops me from adding a call in main():
`AsyncFunction<Integer, String> function2 = new 
SampleAsyncFunction(sleepFactor2, failRatio2, shutdownWaitTS2);`

IMHO, we need to put back "static" on executorService and random. And add 
"static" to counter.


> avoid access static via class reference
> ---
>
> Key: FLINK-9730
> URL: https://issues.apache.org/jira/browse/FLINK-9730
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.5.0
>Reporter: lamber-ken
>Assignee: lamber-ken
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.0
>
>
> [code refactor] access static via class reference



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #6247: [FLINK-9730] [code refactor] fix access static via...

2018-07-05 Thread nekrassov
Github user nekrassov commented on a diff in the pull request:

https://github.com/apache/flink/pull/6247#discussion_r200434401
  
--- Diff: 
flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/async/AsyncIOExample.java
 ---
@@ -116,8 +116,8 @@ public void cancel() {
private static class SampleAsyncFunction extends 
RichAsyncFunction<Integer, String> {
private static final long serialVersionUID = 
2098635244857937717L;
 
-   private static ExecutorService executorService;
-   private static Random random;
--- End diff --

What makes it a singleton?.. 
Nothing stops me from adding a call in main():
`AsyncFunction<Integer, String> function2 = new 
SampleAsyncFunction(sleepFactor2, failRatio2, shutdownWaitTS2);`

IMHO, we need to put back "static" on executorService and random. And add 
"static" to counter.


---


[jira] [Updated] (FLINK-3109) Join two streams with two different buffer time

2018-07-05 Thread Chesnay Schepler (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-3109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-3109:

Fix Version/s: (was: 0.10.2)

> Join two streams with two different buffer time
> ---
>
> Key: FLINK-3109
> URL: https://issues.apache.org/jira/browse/FLINK-3109
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API
>Affects Versions: 0.10.1
>Reporter: Wang Yangjun
>Priority: Major
>  Labels: easyfix, patch
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> Currently, Flink streaming only supports joining two streams on the same 
> window. How can this problem be solved?
> For example, there are two streams. One is the advertisements shown to users; 
> its tuples could be described as (id, show timestamp). The other one is the 
> click stream -- (id, click timestamp). We want to get a joined stream which 
> includes all the advertisements that are clicked by a user within 20 minutes 
> after being shown.
> It is possible that after an advertisement is shown, some users click it 
> immediately. It is also possible that the "click" message arrives at the 
> server earlier than the "show" message because of network delay. We assume 
> that the maximum delay is one minute.
> So we should always keep a buffer (20 minutes) of the "show" stream and 
> another buffer (1 minute) of the "click" stream.
> It would be great to have an API like:
> showStream.join(clickStream)
>     .where(keySelector)
>     .buffer(Time.of(20, TimeUnit.MINUTES))
>     .equalTo(keySelector)
>     .buffer(Time.of(1, TimeUnit.MINUTES))
>     .apply(JoinFunction)
> http://stackoverflow.com/questions/33849462/how-to-avoid-repeated-tuples-in-flink-slide-window-join/34024149#34024149
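> Until such an API exists, a hedged workaround sketch using a keyed 
> CoProcessFunction with one buffered element per key and side (recent Flink 
> APIs; the class and field names are illustrative, not Flink API):
> {code:java}
> import org.apache.flink.api.common.state.ValueState;
> import org.apache.flink.api.common.state.ValueStateDescriptor;
> import org.apache.flink.api.java.tuple.Tuple2;
> import org.apache.flink.configuration.Configuration;
> import org.apache.flink.streaming.api.functions.co.CoProcessFunction;
> import org.apache.flink.util.Collector;
>
> // joins (id, showTs) with (id, clickTs), buffering each side for its own TTL
> public class AsymmetricBufferJoin extends
>     CoProcessFunction<Tuple2<String, Long>, Tuple2<String, Long>, Tuple2<Long, Long>> {
>
>   private static final long SHOW_TTL = 20 * 60 * 1000L; // 20 minutes
>   private static final long CLICK_TTL = 60 * 1000L;     // 1 minute
>
>   private transient ValueState<Long> showTime;
>   private transient ValueState<Long> clickTime;
>
>   @Override
>   public void open(Configuration parameters) {
>     showTime = getRuntimeContext().getState(
>         new ValueStateDescriptor<>("show", Long.class));
>     clickTime = getRuntimeContext().getState(
>         new ValueStateDescriptor<>("click", Long.class));
>   }
>
>   @Override
>   public void processElement1(Tuple2<String, Long> show, Context ctx,
>       Collector<Tuple2<Long, Long>> out) throws Exception {
>     Long click = clickTime.value();
>     if (click != null) {
>       out.collect(Tuple2.of(show.f1, click)); // click was buffered first
>       clickTime.clear();
>     } else {
>       showTime.update(show.f1);
>       ctx.timerService().registerEventTimeTimer(show.f1 + SHOW_TTL);
>     }
>   }
>
>   @Override
>   public void processElement2(Tuple2<String, Long> click, Context ctx,
>       Collector<Tuple2<Long, Long>> out) throws Exception {
>     Long show = showTime.value();
>     if (show != null) {
>       out.collect(Tuple2.of(show, click.f1)); // show was buffered first
>       showTime.clear();
>     } else {
>       clickTime.update(click.f1);
>       ctx.timerService().registerEventTimeTimer(click.f1 + CLICK_TTL);
>     }
>   }
>
>   @Override
>   public void onTimer(long timestamp, OnTimerContext ctx,
>       Collector<Tuple2<Long, Long>> out) throws Exception {
>     // drop whichever buffered element has outlived its own TTL
>     Long show = showTime.value();
>     if (show != null && timestamp >= show + SHOW_TTL) {
>       showTime.clear();
>     }
>     Long click = clickTime.value();
>     if (click != null && timestamp >= click + CLICK_TTL) {
>       clickTime.clear();
>     }
>   }
> }
> {code}
> Applied as, e.g.: showStream.connect(clickStream).keyBy(s -> s.f0, 
> c -> c.f0).process(new AsymmetricBufferJoin()).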



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9730) avoid access static via class reference

2018-07-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16533934#comment-16533934
 ] 

ASF GitHub Bot commented on FLINK-9730:
---

Github user lamber-ken commented on a diff in the pull request:

https://github.com/apache/flink/pull/6247#discussion_r200431068
  
--- Diff: 
flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/async/AsyncIOExample.java
 ---
@@ -116,8 +116,8 @@ public void cancel() {
private static class SampleAsyncFunction extends 
RichAsyncFunction<Integer, String> {
private static final long serialVersionUID = 
2098635244857937717L;
 
-   private static ExecutorService executorService;
-   private static Random random;
--- End diff --

SampleAsyncFunction is used as a singleton, so synchronization is needed.


> avoid access static via class reference
> ---
>
> Key: FLINK-9730
> URL: https://issues.apache.org/jira/browse/FLINK-9730
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.5.0
>Reporter: lamber-ken
>Assignee: lamber-ken
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.0
>
>
> [code refactor] access static via class reference



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #6247: [FLINK-9730] [code refactor] fix access static via...

2018-07-05 Thread lamber-ken
Github user lamber-ken commented on a diff in the pull request:

https://github.com/apache/flink/pull/6247#discussion_r200431068
  
--- Diff: 
flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/async/AsyncIOExample.java
 ---
@@ -116,8 +116,8 @@ public void cancel() {
private static class SampleAsyncFunction extends 
RichAsyncFunction<Integer, String> {
private static final long serialVersionUID = 
2098635244857937717L;
 
-   private static ExecutorService executorService;
-   private static Random random;
--- End diff --

SampleAsyncFunction is used as a singleton, so synchronization is needed.


---


[jira] [Created] (FLINK-9767) Add instructions to generate tag to release guide

2018-07-05 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-9767:
---

 Summary: Add instructions to generate tag to release guide
 Key: FLINK-9767
 URL: https://issues.apache.org/jira/browse/FLINK-9767
 Project: Flink
  Issue Type: Improvement
  Components: Release System
Affects Versions: 1.5.0, 1.6.0
Reporter: Chesnay Schepler


The release scripts tell the user to create a git tag, but don't provide 
instructions on doing so.
The release guide instructs the user to create a release tag by copying the tag 
for the last RC, but the guide itself never says to generate a tag.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9730) avoid access static via class reference

2018-07-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16533904#comment-16533904
 ] 

ASF GitHub Bot commented on FLINK-9730:
---

Github user nekrassov commented on a diff in the pull request:

https://github.com/apache/flink/pull/6247#discussion_r200423552
  
--- Diff: 
flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/async/AsyncIOExample.java
 ---
@@ -116,8 +116,8 @@ public void cancel() {
private static class SampleAsyncFunction extends 
RichAsyncFunction<Integer, String> {
private static final long serialVersionUID = 
2098635244857937717L;
 
-   private static ExecutorService executorService;
-   private static Random random;
--- End diff --

No, my point is: if we are making executorService and random 
instance-specific, then we don't need to synchronize on the class (lines 148 
and 163).
If we want to keep executorService and random static, then we need to make 
counter static too (line 122). 
I think having all three (executorService, random, counter) as static is 
best.

Having executorService and random static, but counter instance-specific, 
will not work when someone creates a second instance of SampleAsyncFunction. 
In the second SampleAsyncFunction the counter will be 0 and we will 
re-initialize the static executorService and random, thus interfering with 
the first SampleAsyncFunction object.
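
For reference, a sketch of the reference-counted static initialization being 
discussed (modeled on the pattern in AsyncIOExample.java; the pool size and 
exact field layout here are assumptions):

{code:java}
private static ExecutorService executorService;
private static Random random;
private static int counter; // static too, shared across all instances

@Override
public void open(Configuration parameters) throws Exception {
    super.open(parameters);
    synchronized (SampleAsyncFunction.class) {
        if (counter == 0) {
            // the first instance in this JVM creates the shared resources
            executorService = Executors.newFixedThreadPool(30);
            random = new Random();
        }
        ++counter;
    }
}

@Override
public void close() throws Exception {
    super.close();
    synchronized (SampleAsyncFunction.class) {
        --counter;
        if (counter == 0) {
            // the last instance tears the shared pool down
            executorService.shutdown();
        }
    }
}
{code}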


> avoid access static via class reference
> ---
>
> Key: FLINK-9730
> URL: https://issues.apache.org/jira/browse/FLINK-9730
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.5.0
>Reporter: lamber-ken
>Assignee: lamber-ken
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.0
>
>
> [code refactor] access static via class reference



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9581) Redundant spaces for Collect at sql.md

2018-07-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16533903#comment-16533903
 ] 

ASF GitHub Bot commented on FLINK-9581:
---

Github user snuyanzin commented on the issue:

https://github.com/apache/flink/pull/6161
  
@fhueske thank you for the review
@tillrohrmann since you took #6258 and @fhueske reviewed this one, maybe it 
also makes sense to take it into 1.5.1?


> Redundant spaces for Collect at sql.md
> --
>
> Key: FLINK-9581
> URL: https://issues.apache.org/jira/browse/FLINK-9581
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Table API  SQL
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Trivial
>  Labels: pull-request-available
> Attachments: collect.png
>
>
> could be seen at 
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sql.html
> + attach



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] flink pull request #6247: [FLINK-9730] [code refactor] fix access static via...

2018-07-05 Thread nekrassov
Github user nekrassov commented on a diff in the pull request:

https://github.com/apache/flink/pull/6247#discussion_r200423552
  
--- Diff: 
flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/async/AsyncIOExample.java
 ---
@@ -116,8 +116,8 @@ public void cancel() {
private static class SampleAsyncFunction extends 
RichAsyncFunction<Integer, String> {
private static final long serialVersionUID = 
2098635244857937717L;
 
-   private static ExecutorService executorService;
-   private static Random random;
--- End diff --

No, my point is: if we are making executorService and random 
instance-specific, then we don't need to synchronize on the class (lines 148 
and 163).
If we want to keep executorService and random static, then we need to make 
counter static too (line 122). 
I think having all three (executorService, random, counter) as static is 
best.

Having executorService and random static, but counter instance-specific, 
will not work when someone creates a second instance of SampleAsyncFunction. 
In the second SampleAsyncFunction the counter will be 0 and we will 
re-initialize the static executorService and random, thus interfering with 
the first SampleAsyncFunction object.


---


[GitHub] flink issue #6161: [hotfix] [docs][FLINK-9581] Typo: extra spaces removed to...

2018-07-05 Thread snuyanzin
Github user snuyanzin commented on the issue:

https://github.com/apache/flink/pull/6161
  
@fhueske thank you for the review
@tillrohrmann since you took #6258 and @fhueske reviewed this one, maybe it 
also makes sense to take it into 1.5.1?


---


[jira] [Commented] (FLINK-9730) avoid access static via class reference

2018-07-05 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16533895#comment-16533895
 ] 

ASF GitHub Bot commented on FLINK-9730:
---

Github user lamber-ken commented on a diff in the pull request:

https://github.com/apache/flink/pull/6247#discussion_r200421156
  
--- Diff: 
flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/async/AsyncIOExample.java
 ---
@@ -179,7 +179,7 @@ public void close() throws Exception {
 
@Override
public void asyncInvoke(final Integer input, final 
ResultFuture<String> resultFuture) throws Exception {
-   this.executorService.submit(new Runnable() {
+   executorService.submit(new Runnable() {
--- End diff --

Don't feel sorry, you are welcome.


> avoid access static via class reference
> ---
>
> Key: FLINK-9730
> URL: https://issues.apache.org/jira/browse/FLINK-9730
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.5.0
>Reporter: lamber-ken
>Assignee: lamber-ken
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.6.0
>
>
> [code refactor] access static via class reference



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

