[GitHub] [flink] dianfu commented on a change in pull request #12841: [FLINK-18490][python] Extract the implementation logic of Beam in AbstractPythonFunctionOperator

2020-07-08 Thread GitBox


dianfu commented on a change in pull request #12841:
URL: https://github.com/apache/flink/pull/12841#discussion_r451938393



##
File path: 
flink-python/src/main/java/org/apache/flink/python/env/ProcessEnvironment.java
##
@@ -0,0 +1,45 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.python.env;
+
+import org.apache.flink.annotation.Internal;
+
+import java.util.Map;
+
+/**
+ * A {@link PythonEnvironment} for executing UDFs in Process.
+ */
+@Internal
+public class ProcessEnvironment implements PythonEnvironment {

Review comment:
   rename to ProcessPythonEnvironment?

##
File path: 
flink-python/src/main/java/org/apache/flink/streaming/api/operators/python/AbstractPythonFunctionOperator.java
##
@@ -320,23 +307,20 @@ private void checkInvokeFinishBundleByCount() throws Exception {
	 */
	private void checkInvokeFinishBundleByTime() throws Exception {
		long now = getProcessingTimeService().getCurrentProcessingTime();
-		if (now - lastFinishBundleTime >= maxBundleTimeMills) {
+		if (now - lastFinishBundleTime >= maxBundleTimeMills && elementCount > 0) {
			invokeFinishBundle();
		}
	}

-	private void invokeFinishBundle() throws Exception {
-		if (bundleStarted.compareAndSet(true, false)) {
-			pythonFunctionRunner.finishBundle();
-
-			emitResults();
-			elementCount = 0;
-			lastFinishBundleTime = getProcessingTimeService().getCurrentProcessingTime();
-			// callback only after current bundle was fully finalized
-			if (bundleFinishedCallback != null) {
-				bundleFinishedCallback.run();
-				bundleFinishedCallback = null;
-			}
+	protected void invokeFinishBundle() throws Exception {

Review comment:
   There are many places calling this method without checking the
   elementCount. What about wrapping the logic in an **if** check, so there
   is no need to check it at every call site:
   ```
   if (elementCount > 0) {
       xxx
   }
   ```
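   For illustration, applying the suggested guard to the body removed above would look roughly like this (a sketch, not the final PR code):
   ```java
   protected void invokeFinishBundle() throws Exception {
       // The whole body becomes a no-op when no elements were buffered,
       // so callers no longer need to check elementCount themselves.
       if (elementCount > 0) {
           pythonFunctionRunner.finishBundle();
           emitResults();
           elementCount = 0;
           lastFinishBundleTime = getProcessingTimeService().getCurrentProcessingTime();
           // callback only after the current bundle was fully finalized
           if (bundleFinishedCallback != null) {
               bundleFinishedCallback.run();
               bundleFinishedCallback = null;
           }
       }
   }
   ```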
   

##
File path: 
flink-python/src/main/java/org/apache/flink/table/runtime/operators/python/scalar/AbstractRowPythonScalarFunctionOperator.java
##
@@ -90,4 +90,10 @@ public void bufferInput(CRow input) {
	public Row getFunctionInput(CRow element) {
		return Row.project(element.row(), userDefinedFunctionInputOffsets);
	}
+
+	@Override
+	@SuppressWarnings("unchecked")
+	public TypeSerializer<Row> getInputTypeSerializer() {

Review comment:
   ditto

##
File path: 
flink-python/src/main/java/org/apache/flink/table/runtime/operators/python/scalar/arrow/RowDataArrowPythonScalarFunctionOperator.java
##
@@ -76,53 +70,74 @@ public RowDataArrowPythonScalarFunctionOperator(
	@Override
	public void open() throws Exception {
		super.open();
-		allocator = ArrowUtils.getRootAllocator().newChildAllocator("reader", 0, Long.MAX_VALUE);
-		reader = new ArrowStreamReader(bais, allocator);
+		maxArrowBatchSize = Math.min(getPythonConfig().getMaxArrowBatchSize(), maxBundleSize);
+		arrowSerializer = new RowDataArrowSerializer(userDefinedFunctionInputType, userDefinedFunctionOutputType);
+		arrowSerializer.open(bais, baos);
+		currentBatchCount = 0;
	}

	@Override
-	public void close() throws Exception {
-		try {
-			super.close();
-		} finally {
-			reader.close();
-			allocator.close();
+	public void processElement(StreamRecord<RowData> element) throws Exception {
+		RowData value = element.getValue();
+		bufferInput(value);
+		arrowSerializer.dump(getFunctionInput(value));
+		currentBatchCount++;
+		if (currentBatchCount >= maxArrowBatchSize) {
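For context, the hunk above buffers each input into the Arrow serializer and flushes once `currentBatchCount` reaches `maxArrowBatchSize`. A self-contained sketch of that size-capped batching pattern (names are illustrative, not the operator's actual API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Generic size-capped batch buffer: collect elements, flush when full.
final class BatchBuffer<T> {
    private final int maxBatchSize;
    private final List<T> buffer = new ArrayList<>();
    private final Consumer<List<T>> flusher;

    BatchBuffer(int maxBatchSize, Consumer<List<T>> flusher) {
        this.maxBatchSize = maxBatchSize;
        this.flusher = flusher;
    }

    void add(T element) {
        buffer.add(element);
        // Same check as `currentBatchCount >= maxArrowBatchSize` above.
        if (buffer.size() >= maxBatchSize) {
            flush();
        }
    }

    void flush() {
        if (!buffer.isEmpty()) {
            flusher.accept(new ArrayList<>(buffer));
            buffer.clear();
        }
    }
}
```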

[GitHub] [flink] klion26 commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

2020-07-08 Thread GitBox


klion26 commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r451970665



##
File path: docs/dev/table/streaming/joins.zh.md
##
@@ -300,25 +285,26 @@ FROM
   ON r.currency = o.currency
 {% endhighlight %}
 
-Each record from the probe side will be joined with the current version of the 
build side table. In our example, the query is using the processing-time 
notion, so a newly appended order would always be joined with the most recent 
version of `LatestRates` when executing the operation. Note that the result is 
not deterministic for processing-time.
+探针侧表中的每个记录都将与构建侧表的当前版本所关联。 在此示例中,查询使用 `processing-time` 作为处理时间,因而新增订单将始终与表 
`LatestRates` 的最新汇率执行 Join 操作。 注意,结果对于处理时间来说不是确定的。
+
+与[常规 Join](#regular-joins) 相比,尽管构建侧表的数据发生了变化,但时态表 Join 的变化前结果不会随之变化。而且时态表 Join 
运算非常轻量级且不会保留任何状态。
 
-In contrast to [regular joins](#regular-joins), the previous results of the 
temporal table join will not be affected despite the changes on the build side. 
Also, the temporal table join operator is very lightweight and does not keep 
any state.
+与[时间区间 Join](#interval-joins) 相比,时态表 Join 没有定义决定哪些记录将被 Join 的时间窗口。
+探针侧的记录将总是与构建侧在对应 `processing time` 时间的最新数据执行 Join。因而构建侧的数据可能是任意旧的。
 
-Compared to [interval joins](#interval-joins), temporal table joins do not 
define a time window within which the records will be joined.
-Records from the probe side are always joined with the build side's latest 
version at processing time. Thus, records on the build side might be 
arbitrarily old.
+[时态表函数 Join](#join-with-a-temporal-table-function) 和时态表 Join都有类似的功能,但是有不同的 SQL 
语法和 runtime 实现:

Review comment:
   ```suggestion
   [时态表函数 Join](#join-with-a-temporal-table-function) 和时态表 Join 都有类似的功能,但是有不同的 SQL 语法和 runtime 实现:
   ```

##
File path: docs/dev/table/streaming/joins.zh.md
##
@@ -327,10 +313,10 @@ FROM table1 [AS ]
 ON table1.column-name1 = table2.column-name1
 {% endhighlight %}
 
-Currently, only support INNER JOIN and LEFT JOIN. The `FOR SYSTEM_TIME AS OF 
table1.proctime` should be followed after temporal table. `proctime` is a 
[processing time attribute](time_attributes.html#processing-time) of `table1`.
-This means that it takes a snapshot of the temporal table at processing time 
when joining every record from left table.
+目前只支持 INNER JOIN 和 LEFT JOIN,`FOR SYSTEM_TIME AS OF table1.proctime` 应位于时态表之后. 
`proctime` 是 `table1` 的 [processing time 属性]({%link 
dev/table/streaming/time_attributes.zh.md %}#processing-time)。

Review comment:
   The link `{%link dev/table/streaming/time_attributes.zh.md %}#processing-time` here is broken. It is not an issue with this document; an anchor needs to be added in front of the corresponding heading in `dev/table/streaming/time_attributes.zh.md` (see the [wiki](https://cwiki.apache.org/confluence/display/FLINK/Flink+Translation+Specifications) for details). This can be done in a separate hotfix PR.

##
File path: docs/dev/table/streaming/joins.zh.md
##
@@ -300,25 +285,26 @@ FROM
   ON r.currency = o.currency
 {% endhighlight %}
 
-Each record from the probe side will be joined with the current version of the 
build side table. In our example, the query is using the processing-time 
notion, so a newly appended order would always be joined with the most recent 
version of `LatestRates` when executing the operation. Note that the result is 
not deterministic for processing-time.
+探针侧表中的每个记录都将与构建侧表的当前版本所关联。 在此示例中,查询使用 `processing-time` 作为处理时间,因而新增订单将始终与表 
`LatestRates` 的最新汇率执行 Join 操作。 注意,结果对于处理时间来说不是确定的。

Review comment:
   Could processing-time and event-time be translated here as well? If so, please translate them consistently throughout the whole article, since they are already translated on the [time attributes](http://localhost:4000/zh/dev/table/streaming/time_attributes.html) page.
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] klion26 commented on pull request #12798: [FLINK-16087][docs-zh] Translate "Detecting Patterns" page of "Streaming Concepts" into Chinese

2020-07-08 Thread GitBox


klion26 commented on pull request #12798:
URL: https://github.com/apache/flink/pull/12798#issuecomment-655904944


   @RocMarshal thanks for the quick fix. As the change is quite big, I'll respond as soon as I can.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12856: [FLINK-18529][hive] Query Hive table and filter by timestamp partitio…

2020-07-08 Thread GitBox


flinkbot edited a comment on pull request #12856:
URL: https://github.com/apache/flink/pull/12856#issuecomment-655543024


   
   ## CI report:
   
   * fc30e2ca58b435e43e6b569f0c21347667c28c8f Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4347)
 
   * b45b03a69f6b9b81cee4062e4b293b6892ed0dc2 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4352)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12858: [FLINK-18534][kafka][table] Fix unstable KafkaTableITCase.testKafkaDebeziumChangelogSource

2020-07-08 Thread GitBox


flinkbot edited a comment on pull request #12858:
URL: https://github.com/apache/flink/pull/12858#issuecomment-655892754


   
   ## CI report:
   
   * f0b3ee3b40e8f357d0e02f35e2486865648eda3c Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4353)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12858: [FLINK-18534][kafka][table] Fix unstable KafkaTableITCase.testKafkaDebeziumChangelogSource

2020-07-08 Thread GitBox


flinkbot commented on pull request #12858:
URL: https://github.com/apache/flink/pull/12858#issuecomment-655892754


   
   ## CI report:
   
   * f0b3ee3b40e8f357d0e02f35e2486865648eda3c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12856: [FLINK-18529][hive] Query Hive table and filter by timestamp partitio…

2020-07-08 Thread GitBox


flinkbot edited a comment on pull request #12856:
URL: https://github.com/apache/flink/pull/12856#issuecomment-655543024


   
   ## CI report:
   
   * fc30e2ca58b435e43e6b569f0c21347667c28c8f Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4347)
 
   * b45b03a69f6b9b81cee4062e4b293b6892ed0dc2 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong commented on a change in pull request #12756: [FLINK-18296][json] add support for TIMESTAMP_WITH_LOCAL_ZONE and fix…

2020-07-08 Thread GitBox


wuchong commented on a change in pull request #12756:
URL: https://github.com/apache/flink/pull/12756#discussion_r451954436



##
File path: 
flink-formats/flink-json/src/test/java/org/apache/flink/formats/json/JsonRowDataSerDeSchemaTest.java
##
@@ -502,9 +516,17 @@ private void testParseErrors(TestSpec spec) throws Exception {
			.json("{\"map\":{\"key1\":\"123\", \"key2\":\"abc\"}}")
			.rowType(ROW(FIELD("map", MAP(STRING(), INT()))))
			.expect(Row.of(createHashMap("key1", 123, "key2", null)))
-			.expectErrorMessage("Failed to deserialize JSON '{\"map\":{\"key1\":\"123\", \"key2\":\"abc\"}}'")
+			.expectErrorMessage("Failed to deserialize JSON '{\"map\":{\"key1\":\"123\", \"key2\":\"abc\"}}'"),

+		TestSpec
+			.json("{\"id\":\"2019-11-12T18:00:12\"}")
+			.rowType(ROW(FIELD("id", TIMESTAMP_WITH_LOCAL_TIME_ZONE(0))))
+			.expectErrorMessage("Failed to deserialize JSON '{\"id\":\"2019-11-12T18:00:12\"}'"),

+		TestSpec
+			.json("{\"id\":\"2019-11-12T18:00:12\"}")
+			.rowType(ROW(FIELD("id", TIMESTAMP_WITH_LOCAL_TIME_ZONE(0))))
+			.expectErrorMessage("Failed to deserialize JSON '{\"id\":\"2019-11-12T18:00:12+0800\"}'")

Review comment:
   You are right, we shouldn't use `thrown` here. Could you please remove the `thrown` rule from this test class? We can use `try`/`catch` instead.
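   A minimal sketch of the `try`/`catch` pattern (using `org.junit.Assert`; the `expectedErrorMessage` variable is illustrative):
   ```java
   try {
       testParseErrors(spec);
       Assert.fail("Expected a deserialization failure");
   } catch (Exception e) {
       // Assert on the message directly instead of an ExpectedException rule.
       Assert.assertTrue(e.getMessage(), e.getMessage().contains(expectedErrorMessage));
   }
   ```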





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong commented on pull request #12756: [FLINK-18296][json] add support for TIMESTAMP_WITH_LOCAL_ZONE and fix…

2020-07-08 Thread GitBox


wuchong commented on pull request #12756:
URL: https://github.com/apache/flink/pull/12756#issuecomment-655888642


   The rest looks good to me. Could you please fix the `thrown` problem?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong commented on a change in pull request #12805: [FLINK-15221][table api]support fault-tolerant semantic for kafka table api

2020-07-08 Thread GitBox


wuchong commented on a change in pull request #12805:
URL: https://github.com/apache/flink/pull/12805#discussion_r451951070



##
File path: docs/dev/table/connectors/kafka.md
##
@@ -207,6 +214,19 @@ However, it will cause a lot of network connections 
between all the Flink instan
 
 By default, a Kafka sink ingests data with at-least-once guarantees into a 
Kafka topic if the query is executed with [checkpointing enabled]({% link 
dev/stream/state/checkpointing.md %}#enabling-and-configuring-checkpointing).
 
+With Flink's checkpointing enabled, the `kafka` and `kafka-0.11` connectors 
can provide exactly-once delivery guarantees.
+
+Besides enabling Flink's checkpointing, you can also choose three different 
modes of operating chosen by passing appropriate `sink.semantic` option:
+
+ * `NONE`: Flink will not guarantee anything. Produced records can be lost or 
they can be duplicated.
+ * `AT_LEAST_ONCE` (default setting): This guarantees that no records will be 
lost (although they can be duplicated).

Review comment:
   ```suggestion
 * `at-least-once` (default setting): This guarantees that no records will be lost (although they can be duplicated).
   ```

##
File path: docs/dev/table/connectors/kafka.md
##
@@ -207,6 +214,19 @@ However, it will cause a lot of network connections 
between all the Flink instan
 
 By default, a Kafka sink ingests data with at-least-once guarantees into a 
Kafka topic if the query is executed with [checkpointing enabled]({% link 
dev/stream/state/checkpointing.md %}#enabling-and-configuring-checkpointing).
 
+With Flink's checkpointing enabled, the `kafka` and `kafka-0.11` connectors 
can provide exactly-once delivery guarantees.
+
+Besides enabling Flink's checkpointing, you can also choose three different 
modes of operating chosen by passing appropriate `sink.semantic` option:
+
+ * `NONE`: Flink will not guarantee anything. Produced records can be lost or 
they can be duplicated.
+ * `AT_LEAST_ONCE` (default setting): This guarantees that no records will be 
lost (although they can be duplicated).
+ * `EXACTLY_ONCE`: Kafka transactions will be used to provide exactly-once 
semantic. Whenever you write

Review comment:
   ```suggestion
 * `exactly-once`: Kafka transactions will be used to provide exactly-once semantic. Whenever you write
   ```

##
File path: docs/dev/table/connectors/kafka.md
##
@@ -207,6 +214,19 @@ However, it will cause a lot of network connections 
between all the Flink instan
 
 By default, a Kafka sink ingests data with at-least-once guarantees into a 
Kafka topic if the query is executed with [checkpointing enabled]({% link 
dev/stream/state/checkpointing.md %}#enabling-and-configuring-checkpointing).
 
+With Flink's checkpointing enabled, the `kafka` and `kafka-0.11` connectors 
can provide exactly-once delivery guarantees.
+
+Besides enabling Flink's checkpointing, you can also choose three different 
modes of operating chosen by passing appropriate `sink.semantic` option:
+
+ * `NONE`: Flink will not guarantee anything. Produced records can be lost or 
they can be duplicated.

Review comment:
   ```suggestion
 * `none`: Flink will not guarantee anything. Produced records can be lost or they can be duplicated.
   ```
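   For illustration, with the lowercase values suggested above, the option would be set like this (a sketch; the table schema and topic are made up):
   ```java
   TableEnvironment tEnv = TableEnvironment.create(
           EnvironmentSettings.newInstance().inStreamingMode().build());
   // 'exactly-once' requires Kafka transactions; 'at-least-once' is the
   // default; 'none' gives no delivery guarantees.
   tEnv.executeSql(
           "CREATE TABLE kafka_sink (" +
           "  user_id BIGINT," +
           "  item_id BIGINT" +
           ") WITH (" +
           "  'connector' = 'kafka'," +
           "  'topic' = 'user_behavior'," +
           "  'properties.bootstrap.servers' = 'localhost:9092'," +
           "  'format' = 'json'," +
           "  'sink.semantic' = 'exactly-once'" +
           ")");
   ```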

##
File path: 
flink-connectors/flink-connector-kafka-0.10/src/main/java/org/apache/flink/streaming/connectors/kafka/table/Kafka010DynamicTableFactory.java
##
@@ -65,18 +69,27 @@ protected KafkaDynamicSinkBase createKafkaTableSink(
			String topic,
			Properties properties,
			Optional<FlinkKafkaPartitioner<RowData>> partitioner,
-			EncodingFormat<SerializationSchema<RowData>> encodingFormat) {
+			EncodingFormat<SerializationSchema<RowData>> encodingFormat,
+			KafkaSemantic semantic) {

		return new Kafka010DynamicSink(
			consumedDataType,
			topic,
			properties,
			partitioner,
-			encodingFormat);
+			encodingFormat,
+			semantic);
	}

	@Override
	public String factoryIdentifier() {
		return IDENTIFIER;
	}
+
+	@Override
+	public Set<ConfigOption<?>> optionalOptions() {
+		final Set<ConfigOption<?>> options = super.optionalOptions();
+		options.remove(SINK_SEMANTIC);

Review comment:
   Add a comment here explaining why we remove the sink semantic option in 0.10.
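   The requested comment might look like this (wording is a guess, not taken from the PR):
   ```java
   @Override
   public Set<ConfigOption<?>> optionalOptions() {
       final Set<ConfigOption<?>> options = super.optionalOptions();
       // Kafka 0.10 does not support transactions, so exactly-once delivery
       // cannot be offered; drop 'sink.semantic' for this connector version.
       options.remove(SINK_SEMANTIC);
       return options;
   }
   ```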





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong commented on a change in pull request #12805: [FLINK-15221][table api]support fault-tolerant semantic for kafka table api

2020-07-08 Thread GitBox


wuchong commented on a change in pull request #12805:
URL: https://github.com/apache/flink/pull/12805#discussion_r451950734



##
File path: 
flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/streaming/connectors/kafka/table/KafkaOptions.java
##
@@ -222,8 +226,17 @@ private static void validateSinkSemantic(ReadableConfig tableOptions) {
	// Utilities
	// --------------------------------------------------------------------

-	public static String transformSemantic(String semantic){
-		return semantic.toUpperCase().replace('-', '_');
+	public static KafkaSemantic getSinkSemantic(String semantic){
+   public static KafkaSemantic getSinkSemantic(String semantic){

Review comment:
   `SinkSemantic` or `KafkaSinkSemantic`? It maybe confusing that it also 
works for source.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12858: [FLINK-18534][kafka][table] Fix unstable KafkaTableITCase.testKafkaDebeziumChangelogSource

2020-07-08 Thread GitBox


flinkbot commented on pull request #12858:
URL: https://github.com/apache/flink/pull/12858#issuecomment-655884468


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit f0b3ee3b40e8f357d0e02f35e2486865648eda3c (Thu Jul 09 
04:00:46 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
 Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18534) KafkaTableITCase.testKafkaDebeziumChangelogSource failed with "Topic 'changelog_topic' already exists"

2020-07-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-18534:
---
Labels: pull-request-available test-stability  (was: test-stability)

> KafkaTableITCase.testKafkaDebeziumChangelogSource failed with "Topic 
> 'changelog_topic' already exists"
> --
>
> Key: FLINK-18534
> URL: https://issues.apache.org/jira/browse/FLINK-18534
> Project: Flink
>  Issue Type: Test
>  Components: Connectors / Kafka, Table SQL / API, Tests
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Assignee: Jark Wu
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4350=logs=4be4ed2b-549a-533d-aa33-09e28e360cc8=0db94045-2aa0-53fa-f444-0130d6933518
> {code}
> 2020-07-08T21:14:04.1626423Z [ERROR] Failures: 
> 2020-07-08T21:14:04.1629804Z [ERROR]   
> KafkaTableITCase.testKafkaDebeziumChangelogSource:66->KafkaTestBase.createTestTopic:197
>  Create test topic : changelog_topic failed, 
> org.apache.kafka.common.errors.TopicExistsException: Topic 'changelog_topic' 
> already exists.
> 2020-07-08T21:14:04.1630642Z [ERROR] Errors: 
> 2020-07-08T21:14:04.1630986Z [ERROR]   
> KafkaTableITCase.testKafkaDebeziumChangelogSource:83  Failed to write 
> debezium...
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong opened a new pull request #12858: [FLINK-18534][kafka][table] Fix unstable KafkaTableITCase.testKafkaDebeziumChangelogSource

2020-07-08 Thread GitBox


wuchong opened a new pull request #12858:
URL: https://github.com/apache/flink/pull/12858


   
   
   
   
   ## What is the purpose of the change
   
   This fixes the unstable test that failed with a "Topic 'changelog_topic' already exists" exception.
   
   ## Brief change log
   
   Currently `testKafkaDebeziumChangelogSource` is executed multiple times 
because the super class is `Parameterized` over different formats. The Kafka 
cluster has a known issue that topic cleanup is not guaranteed to take 
effect immediately.
   
   However, this case only needs to be executed once. Thus, we update the test 
to execute only for the json format.
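   A minimal sketch of the guard (the `format` field name is an assumption, not taken from the PR):
   ```java
   @Test
   public void testKafkaDebeziumChangelogSource() throws Exception {
       // The changelog test is format-independent; run it for one format only.
       if (!"json".equals(format)) {
           return;
       }
       // ... original test body ...
   }
   ```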
   
   ## Verifying this change
   
   Ran it manually on my local machine hundreds of times. 
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong commented on pull request #12858: [FLINK-18534][kafka][table] Fix unstable KafkaTableITCase.testKafkaDebeziumChangelogSource

2020-07-08 Thread GitBox


wuchong commented on pull request #12858:
URL: https://github.com/apache/flink/pull/12858#issuecomment-655884001


   Could you help review this, @leonardBang?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (FLINK-18534) KafkaTableITCase.testKafkaDebeziumChangelogSource failed with "Topic 'changelog_topic' already exists"

2020-07-08 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-18534:
---

Assignee: Jark Wu

> KafkaTableITCase.testKafkaDebeziumChangelogSource failed with "Topic 
> 'changelog_topic' already exists"
> --
>
> Key: FLINK-18534
> URL: https://issues.apache.org/jira/browse/FLINK-18534
> Project: Flink
>  Issue Type: Test
>  Components: Connectors / Kafka, Table SQL / API, Tests
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Assignee: Jark Wu
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4350=logs=4be4ed2b-549a-533d-aa33-09e28e360cc8=0db94045-2aa0-53fa-f444-0130d6933518
> {code}
> 2020-07-08T21:14:04.1626423Z [ERROR] Failures: 
> 2020-07-08T21:14:04.1629804Z [ERROR]   
> KafkaTableITCase.testKafkaDebeziumChangelogSource:66->KafkaTestBase.createTestTopic:197
>  Create test topic : changelog_topic failed, 
> org.apache.kafka.common.errors.TopicExistsException: Topic 'changelog_topic' 
> already exists.
> 2020-07-08T21:14:04.1630642Z [ERROR] Errors: 
> 2020-07-08T21:14:04.1630986Z [ERROR]   
> KafkaTableITCase.testKafkaDebeziumChangelogSource:83  Failed to write 
> debezium...
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] godfreyhe commented on pull request #12851: [FLINK-17425][blink-planner] supportsFilterPushDown rule in DynamicSource.

2020-07-08 Thread GitBox


godfreyhe commented on pull request #12851:
URL: https://github.com/apache/flink/pull/12851#issuecomment-655857964


   @liuyongvs @wuchong I will review this today



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18533) AccumulatorLiveITCase.testStreaming hangs

2020-07-08 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17154137#comment-17154137
 ] 

Dian Fu commented on FLINK-18533:
-

Another instance: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4350=logs=219e462f-e75e-506c-3671-5017d866ccf6=4c5dc768-5c82-5ab0-660d-086cb90b76a0

> AccumulatorLiveITCase.testStreaming hangs
> -
>
> Key: FLINK-18533
> URL: https://issues.apache.org/jira/browse/FLINK-18533
> Project: Flink
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4350=logs=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3=a99e99c7-21cd-5a1f-7274-585e62b72f56]
> {code}
> 2020-07-08T21:46:15.4438026Z Printing stack trace of Java process 40159
> 2020-07-08T21:46:15.4442864Z 
> ==
> 2020-07-08T21:46:15.4475676Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2020-07-08T21:46:15.7672746Z 2020-07-08 21:46:15
> 2020-07-08T21:46:15.7673349Z Full thread dump OpenJDK 64-Bit Server VM 
> (25.242-b08 mixed mode):
> 2020-07-08T21:46:15.7673590Z 
> 2020-07-08T21:46:15.7673893Z "Attach Listener" #86 daemon prio=9 os_prio=0 
> tid=0x7fef8c025800 nid=0x1b231 runnable [0x]
> 2020-07-08T21:46:15.7674242Zjava.lang.Thread.State: RUNNABLE
> 2020-07-08T21:46:15.7674419Z 
> 2020-07-08T21:46:15.7675150Z "flink-taskexecutor-io-thread-2" #85 daemon 
> prio=5 os_prio=0 tid=0x7fef9c02 nid=0xb03a waiting on condition 
> [0x7fefac1f3000]
> 2020-07-08T21:46:15.7675964Zjava.lang.Thread.State: WAITING (parking)
> 2020-07-08T21:46:15.7676249Z  at sun.misc.Unsafe.park(Native Method)
> 2020-07-08T21:46:15.7680997Z  - parking to wait for  <0x87180a20> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> 2020-07-08T21:46:15.7681506Z  at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> 2020-07-08T21:46:15.7682009Z  at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> 2020-07-08T21:46:15.7682666Z  at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> 2020-07-08T21:46:15.7683100Z  at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> 2020-07-08T21:46:15.7683554Z  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> 2020-07-08T21:46:15.7684013Z  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 2020-07-08T21:46:15.7684371Z  at java.lang.Thread.run(Thread.java:748)
> 2020-07-08T21:46:15.7684559Z 
> 2020-07-08T21:46:15.7685213Z "Flink-DispatcherRestEndpoint-thread-4" #84 
> daemon prio=5 os_prio=0 tid=0x7fef90431800 nid=0x9e49 waiting on 
> condition [0x7fef7df4a000]
> 2020-07-08T21:46:15.7685665Zjava.lang.Thread.State: WAITING (parking)
> 2020-07-08T21:46:15.7686052Z  at sun.misc.Unsafe.park(Native Method)
> 2020-07-08T21:46:15.7686707Z  - parking to wait for  <0x87180cc0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> 2020-07-08T21:46:15.7687184Z  at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> 2020-07-08T21:46:15.7687721Z  at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> 2020-07-08T21:46:15.7688342Z  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1088)
> 2020-07-08T21:46:15.7688935Z  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
> 2020-07-08T21:46:15.7689579Z  at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> 2020-07-08T21:46:15.7690451Z  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> 2020-07-08T21:46:15.7690928Z  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 2020-07-08T21:46:15.7691317Z  at java.lang.Thread.run(Thread.java:748)
> 2020-07-08T21:46:15.7691502Z 
> 2020-07-08T21:46:15.7692183Z "Flink-DispatcherRestEndpoint-thread-3" #83 
> daemon prio=5 os_prio=0 tid=0x7fefa01e2800 nid=0x9dc9 waiting on 
> condition [0x7fef7f1f4000]
> 2020-07-08T21:46:15.7692636Zjava.lang.Thread.State: WAITING (parking)
> 2020-07-08T21:46:15.7692920Z  at sun.misc.Unsafe.park(Native Method)
> 2020-07-08T21:46:15.7693647Z  - parking to wait for  <0x87180cc0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> 2020-07-08T21:46:15.7694105Z  

[jira] [Updated] (FLINK-18534) KafkaTableITCase.testKafkaDebeziumChangelogSource failed with "Topic 'changelog_topic' already exists"

2020-07-08 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-18534:

Labels: test-stability  (was: )

> KafkaTableITCase.testKafkaDebeziumChangelogSource failed with "Topic 
> 'changelog_topic' already exists"
> --
>
> Key: FLINK-18534
> URL: https://issues.apache.org/jira/browse/FLINK-18534
> Project: Flink
>  Issue Type: Test
>  Components: Connectors / Kafka, Table SQL / API, Tests
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4350=logs=4be4ed2b-549a-533d-aa33-09e28e360cc8=0db94045-2aa0-53fa-f444-0130d6933518
> {code}
> 2020-07-08T21:14:04.1626423Z [ERROR] Failures: 
> 2020-07-08T21:14:04.1629804Z [ERROR]   
> KafkaTableITCase.testKafkaDebeziumChangelogSource:66->KafkaTestBase.createTestTopic:197
>  Create test topic : changelog_topic failed, 
> org.apache.kafka.common.errors.TopicExistsException: Topic 'changelog_topic' 
> already exists.
> 2020-07-08T21:14:04.1630642Z [ERROR] Errors: 
> 2020-07-08T21:14:04.1630986Z [ERROR]   
> KafkaTableITCase.testKafkaDebeziumChangelogSource:83  Failed to write 
> debezium...
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18534) KafkaTableITCase.testKafkaDebeziumChangelogSource failed with "Topic 'changelog_topic' already exists"

2020-07-08 Thread Dian Fu (Jira)
Dian Fu created FLINK-18534:
---

 Summary: KafkaTableITCase.testKafkaDebeziumChangelogSource failed 
with "Topic 'changelog_topic' already exists"
 Key: FLINK-18534
 URL: https://issues.apache.org/jira/browse/FLINK-18534
 Project: Flink
  Issue Type: Test
  Components: Connectors / Kafka, Table SQL / API, Tests
Affects Versions: 1.12.0
Reporter: Dian Fu


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4350=logs=4be4ed2b-549a-533d-aa33-09e28e360cc8=0db94045-2aa0-53fa-f444-0130d6933518

{code}
2020-07-08T21:14:04.1626423Z [ERROR] Failures: 
2020-07-08T21:14:04.1629804Z [ERROR]   
KafkaTableITCase.testKafkaDebeziumChangelogSource:66->KafkaTestBase.createTestTopic:197
 Create test topic : changelog_topic failed, 
org.apache.kafka.common.errors.TopicExistsException: Topic 'changelog_topic' 
already exists.
2020-07-08T21:14:04.1630642Z [ERROR] Errors: 
2020-07-08T21:14:04.1630986Z [ERROR]   
KafkaTableITCase.testKafkaDebeziumChangelogSource:83  Failed to write 
debezium...
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong commented on pull request #12851: [FLINK-17425][blink-planner] supportsFilterPushDown rule in DynamicSource.

2020-07-08 Thread GitBox


wuchong commented on pull request #12851:
URL: https://github.com/apache/flink/pull/12851#issuecomment-655853748


   Hi @godfreyhe , do you have time to review this?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18533) AccumulatorLiveITCase.testStreaming hangs

2020-07-08 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-18533:

Labels: test-stability  (was: )

> AccumulatorLiveITCase.testStreaming hangs
> -
>
> Key: FLINK-18533
> URL: https://issues.apache.org/jira/browse/FLINK-18533
> Project: Flink
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4350=logs=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3=a99e99c7-21cd-5a1f-7274-585e62b72f56]
> {code}
> 2020-07-08T21:46:15.4438026Z Printing stack trace of Java process 40159
> 2020-07-08T21:46:15.4442864Z 
> ==
> 2020-07-08T21:46:15.4475676Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2020-07-08T21:46:15.7672746Z 2020-07-08 21:46:15
> 2020-07-08T21:46:15.7673349Z Full thread dump OpenJDK 64-Bit Server VM 
> (25.242-b08 mixed mode):
> 2020-07-08T21:46:15.7673590Z 
> 2020-07-08T21:46:15.7673893Z "Attach Listener" #86 daemon prio=9 os_prio=0 
> tid=0x7fef8c025800 nid=0x1b231 runnable [0x]
> 2020-07-08T21:46:15.7674242Zjava.lang.Thread.State: RUNNABLE
> 2020-07-08T21:46:15.7674419Z 
> 2020-07-08T21:46:15.7675150Z "flink-taskexecutor-io-thread-2" #85 daemon 
> prio=5 os_prio=0 tid=0x7fef9c02 nid=0xb03a waiting on condition 
> [0x7fefac1f3000]
> 2020-07-08T21:46:15.7675964Zjava.lang.Thread.State: WAITING (parking)
> 2020-07-08T21:46:15.7676249Z  at sun.misc.Unsafe.park(Native Method)
> 2020-07-08T21:46:15.7680997Z  - parking to wait for  <0x87180a20> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> 2020-07-08T21:46:15.7681506Z  at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> 2020-07-08T21:46:15.7682009Z  at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> 2020-07-08T21:46:15.7682666Z  at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> 2020-07-08T21:46:15.7683100Z  at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> 2020-07-08T21:46:15.7683554Z  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> 2020-07-08T21:46:15.7684013Z  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 2020-07-08T21:46:15.7684371Z  at java.lang.Thread.run(Thread.java:748)
> 2020-07-08T21:46:15.7684559Z 
> 2020-07-08T21:46:15.7685213Z "Flink-DispatcherRestEndpoint-thread-4" #84 
> daemon prio=5 os_prio=0 tid=0x7fef90431800 nid=0x9e49 waiting on 
> condition [0x7fef7df4a000]
> 2020-07-08T21:46:15.7685665Zjava.lang.Thread.State: WAITING (parking)
> 2020-07-08T21:46:15.7686052Z  at sun.misc.Unsafe.park(Native Method)
> 2020-07-08T21:46:15.7686707Z  - parking to wait for  <0x87180cc0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> 2020-07-08T21:46:15.7687184Z  at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> 2020-07-08T21:46:15.7687721Z  at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> 2020-07-08T21:46:15.7688342Z  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1088)
> 2020-07-08T21:46:15.7688935Z  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
> 2020-07-08T21:46:15.7689579Z  at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
> 2020-07-08T21:46:15.7690451Z  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
> 2020-07-08T21:46:15.7690928Z  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 2020-07-08T21:46:15.7691317Z  at java.lang.Thread.run(Thread.java:748)
> 2020-07-08T21:46:15.7691502Z 
> 2020-07-08T21:46:15.7692183Z "Flink-DispatcherRestEndpoint-thread-3" #83 
> daemon prio=5 os_prio=0 tid=0x7fefa01e2800 nid=0x9dc9 waiting on 
> condition [0x7fef7f1f4000]
> 2020-07-08T21:46:15.7692636Zjava.lang.Thread.State: WAITING (parking)
> 2020-07-08T21:46:15.7692920Z  at sun.misc.Unsafe.park(Native Method)
> 2020-07-08T21:46:15.7693647Z  - parking to wait for  <0x87180cc0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> 2020-07-08T21:46:15.7694105Z  at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> 2020-07-08T21:46:15.7694595Z  at 
> 

[jira] [Created] (FLINK-18533) AccumulatorLiveITCase.testStreaming hangs

2020-07-08 Thread Dian Fu (Jira)
Dian Fu created FLINK-18533:
---

 Summary: AccumulatorLiveITCase.testStreaming hangs
 Key: FLINK-18533
 URL: https://issues.apache.org/jira/browse/FLINK-18533
 Project: Flink
  Issue Type: Test
  Components: Tests
Affects Versions: 1.12.0
Reporter: Dian Fu


[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4350=logs=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3=a99e99c7-21cd-5a1f-7274-585e62b72f56]

{code}
2020-07-08T21:46:15.4438026Z Printing stack trace of Java process 40159
2020-07-08T21:46:15.4442864Z 
==
2020-07-08T21:46:15.4475676Z Picked up JAVA_TOOL_OPTIONS: 
-XX:+HeapDumpOnOutOfMemoryError
2020-07-08T21:46:15.7672746Z 2020-07-08 21:46:15
2020-07-08T21:46:15.7673349Z Full thread dump OpenJDK 64-Bit Server VM 
(25.242-b08 mixed mode):
2020-07-08T21:46:15.7673590Z 
2020-07-08T21:46:15.7673893Z "Attach Listener" #86 daemon prio=9 os_prio=0 
tid=0x7fef8c025800 nid=0x1b231 runnable [0x]
2020-07-08T21:46:15.7674242Zjava.lang.Thread.State: RUNNABLE
2020-07-08T21:46:15.7674419Z 
2020-07-08T21:46:15.7675150Z "flink-taskexecutor-io-thread-2" #85 daemon prio=5 
os_prio=0 tid=0x7fef9c02 nid=0xb03a waiting on condition 
[0x7fefac1f3000]
2020-07-08T21:46:15.7675964Zjava.lang.Thread.State: WAITING (parking)
2020-07-08T21:46:15.7676249Zat sun.misc.Unsafe.park(Native Method)
2020-07-08T21:46:15.7680997Z- parking to wait for  <0x87180a20> (a 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
2020-07-08T21:46:15.7681506Zat 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
2020-07-08T21:46:15.7682009Zat 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
2020-07-08T21:46:15.7682666Zat 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
2020-07-08T21:46:15.7683100Zat 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
2020-07-08T21:46:15.7683554Zat 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
2020-07-08T21:46:15.7684013Zat 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
2020-07-08T21:46:15.7684371Zat java.lang.Thread.run(Thread.java:748)
2020-07-08T21:46:15.7684559Z 
2020-07-08T21:46:15.7685213Z "Flink-DispatcherRestEndpoint-thread-4" #84 daemon 
prio=5 os_prio=0 tid=0x7fef90431800 nid=0x9e49 waiting on condition 
[0x7fef7df4a000]
2020-07-08T21:46:15.7685665Zjava.lang.Thread.State: WAITING (parking)
2020-07-08T21:46:15.7686052Zat sun.misc.Unsafe.park(Native Method)
2020-07-08T21:46:15.7686707Z- parking to wait for  <0x87180cc0> (a 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
2020-07-08T21:46:15.7687184Zat 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
2020-07-08T21:46:15.7687721Zat 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
2020-07-08T21:46:15.7688342Zat 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1088)
2020-07-08T21:46:15.7688935Zat 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
2020-07-08T21:46:15.7689579Zat 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
2020-07-08T21:46:15.7690451Zat 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
2020-07-08T21:46:15.7690928Zat 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
2020-07-08T21:46:15.7691317Zat java.lang.Thread.run(Thread.java:748)
2020-07-08T21:46:15.7691502Z 
2020-07-08T21:46:15.7692183Z "Flink-DispatcherRestEndpoint-thread-3" #83 daemon 
prio=5 os_prio=0 tid=0x7fefa01e2800 nid=0x9dc9 waiting on condition 
[0x7fef7f1f4000]
2020-07-08T21:46:15.7692636Zjava.lang.Thread.State: WAITING (parking)
2020-07-08T21:46:15.7692920Zat sun.misc.Unsafe.park(Native Method)
2020-07-08T21:46:15.7693647Z- parking to wait for  <0x87180cc0> (a 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
2020-07-08T21:46:15.7694105Zat 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
2020-07-08T21:46:15.7694595Zat 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
2020-07-08T21:46:15.7695178Zat 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1088)
2020-07-08T21:46:15.7695746Zat 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)

[jira] [Resolved] (FLINK-18525) Running yarn-session.sh occurs error

2020-07-08 Thread zhangyunyun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangyunyun resolved FLINK-18525.
-
Resolution: Done

> Running yarn-session.sh occurs error
> 
>
> Key: FLINK-18525
> URL: https://issues.apache.org/jira/browse/FLINK-18525
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.11.0
> Environment: hadoop-2.8.5
>  
> When using flink-1.10.1/bin/yarn-session.sh, it started successfully.
>Reporter: zhangyunyun
>Priority: Major
>
> org.apache.flink.client.deployment.ClusterDeploymentException: Couldn't 
> deploy Yarn session cluster
>  at 
> org.apache.flink.yarn.YarnClusterDescriptor.deploySessionCluster(YarnClusterDescriptor.java:382)
>  at 
> org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:514)
>  at 
> org.apache.flink.yarn.cli.FlinkYarnSessionCli.lambda$main$4(FlinkYarnSessionCli.java:751)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844)
>  at 
> org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>  at 
> org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:751)
> Caused by: java.io.FileNotFoundException: File does not exist: 
> /tmp/application_1594196612035_0008-flink-conf.yaml3951184480005887817.tmp
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1444)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1437)
>  at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1437)
>  at 
> org.apache.flink.yarn.YarnApplicationFileUploader.registerSingleLocalResource(YarnApplicationFileUploader.java:163)
>  at 
> org.apache.flink.yarn.YarnClusterDescriptor.startAppMaster(YarnClusterDescriptor.java:839)
>  at 
> org.apache.flink.yarn.YarnClusterDescriptor.deployInternal(YarnClusterDescriptor.java:524)
>  at 
> org.apache.flink.yarn.YarnClusterDescriptor.deploySessionCluster(YarnClusterDescriptor.java:375)
>  ... 7 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18525) Running yarn-session.sh occurs error

2020-07-08 Thread zhangyunyun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17154124#comment-17154124
 ] 

zhangyunyun commented on FLINK-18525:
-

It's resolved! Thanks [~fly_in_gis] 

> Running yarn-session.sh occurs error
> 
>
> Key: FLINK-18525
> URL: https://issues.apache.org/jira/browse/FLINK-18525
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.11.0
> Environment: hadoop-2.8.5
>  
> When using flink-1.10.1/bin/yarn-session.sh, it started successfully.
>Reporter: zhangyunyun
>Priority: Major
>
> org.apache.flink.client.deployment.ClusterDeploymentException: Couldn't 
> deploy Yarn session cluster
>  at 
> org.apache.flink.yarn.YarnClusterDescriptor.deploySessionCluster(YarnClusterDescriptor.java:382)
>  at 
> org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:514)
>  at 
> org.apache.flink.yarn.cli.FlinkYarnSessionCli.lambda$main$4(FlinkYarnSessionCli.java:751)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844)
>  at 
> org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>  at 
> org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:751)
> Caused by: java.io.FileNotFoundException: File does not exist: 
> /tmp/application_1594196612035_0008-flink-conf.yaml3951184480005887817.tmp
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1444)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1437)
>  at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1437)
>  at 
> org.apache.flink.yarn.YarnApplicationFileUploader.registerSingleLocalResource(YarnApplicationFileUploader.java:163)
>  at 
> org.apache.flink.yarn.YarnClusterDescriptor.startAppMaster(YarnClusterDescriptor.java:839)
>  at 
> org.apache.flink.yarn.YarnClusterDescriptor.deployInternal(YarnClusterDescriptor.java:524)
>  at 
> org.apache.flink.yarn.YarnClusterDescriptor.deploySessionCluster(YarnClusterDescriptor.java:375)
>  ... 7 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18525) Running yarn-session.sh occurs error

2020-07-08 Thread zhangyunyun (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17154121#comment-17154121
 ] 

zhangyunyun commented on FLINK-18525:
-

I configured {{fs.default-scheme}} to an HDFS URL. Let me change it and try 
again, thanks!

 

 

> Running yarn-session.sh occurs error
> 
>
> Key: FLINK-18525
> URL: https://issues.apache.org/jira/browse/FLINK-18525
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.11.0
> Environment: hadoop-2.8.5
>  
> When using flink-1.10.1/bin/yarn-session.sh, it started successfully.
>Reporter: zhangyunyun
>Priority: Major
>
> org.apache.flink.client.deployment.ClusterDeploymentException: Couldn't 
> deploy Yarn session cluster
>  at 
> org.apache.flink.yarn.YarnClusterDescriptor.deploySessionCluster(YarnClusterDescriptor.java:382)
>  at 
> org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:514)
>  at 
> org.apache.flink.yarn.cli.FlinkYarnSessionCli.lambda$main$4(FlinkYarnSessionCli.java:751)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844)
>  at 
> org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>  at 
> org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:751)
> Caused by: java.io.FileNotFoundException: File does not exist: 
> /tmp/application_1594196612035_0008-flink-conf.yaml3951184480005887817.tmp
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1444)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1437)
>  at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1437)
>  at 
> org.apache.flink.yarn.YarnApplicationFileUploader.registerSingleLocalResource(YarnApplicationFileUploader.java:163)
>  at 
> org.apache.flink.yarn.YarnClusterDescriptor.startAppMaster(YarnClusterDescriptor.java:839)
>  at 
> org.apache.flink.yarn.YarnClusterDescriptor.deployInternal(YarnClusterDescriptor.java:524)
>  at 
> org.apache.flink.yarn.YarnClusterDescriptor.deploySessionCluster(YarnClusterDescriptor.java:375)
>  ... 7 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-16795) End to end tests timeout on Azure

2020-07-08 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17154117#comment-17154117
 ] 

Dian Fu edited comment on FLINK-16795 at 7/9/20, 1:51 AM:
--

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4350=logs=91bf6583-3fb2-592f-e4d4-d79d79c3230a
]
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4350=logs=68a897ab-3047-5660-245a-cce8f83859f6
]
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4350=logs=08866332-78f7-59e4-4f7e-49a56faa3179

] 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4350=logs=6caf31d6-847a-526e-9624-468e053467d6]


was (Author: dian.fu):
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4350=logs=68a897ab-3047-5660-245a-cce8f83859f6=16ca2cca-2f63-5cce-12d2-d519b930a729]

> End to end tests timeout on Azure
> -
>
> Key: FLINK-16795
> URL: https://issues.apache.org/jira/browse/FLINK-16795
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Tests
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.12.0
>
> Attachments: image.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Example: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6650=logs=08866332-78f7-59e4-4f7e-49a56faa3179
>  or 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6637=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5
> {code}##[error]The job running on agent Azure Pipelines 6 ran longer than the 
> maximum time of 200 minutes. For more information, see 
> https://go.microsoft.com/fwlink/?linkid=2077134
> {code}
> and {code}##[error]The operation was canceled.{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-16795) End to end tests timeout on Azure

2020-07-08 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17154117#comment-17154117
 ] 

Dian Fu edited comment on FLINK-16795 at 7/9/20, 1:51 AM:
--

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4350&view=logs&j=91bf6583-3fb2-592f-e4d4-d79d79c3230a]

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4350&view=logs&j=68a897ab-3047-5660-245a-cce8f83859f6]

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4350&view=logs&j=08866332-78f7-59e4-4f7e-49a56faa3179]

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4350&view=logs&j=6caf31d6-847a-526e-9624-468e053467d6]


was (Author: dian.fu):
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4350&view=logs&j=91bf6583-3fb2-592f-e4d4-d79d79c3230a]
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4350&view=logs&j=68a897ab-3047-5660-245a-cce8f83859f6]
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4350&view=logs&j=08866332-78f7-59e4-4f7e-49a56faa3179]
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4350&view=logs&j=6caf31d6-847a-526e-9624-468e053467d6]

> End to end tests timeout on Azure
> -
>
> Key: FLINK-16795
> URL: https://issues.apache.org/jira/browse/FLINK-16795
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Tests
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.12.0
>
> Attachments: image.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Example: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6650&view=logs&j=08866332-78f7-59e4-4f7e-49a56faa3179
>  or 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6637&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5
> {code}##[error]The job running on agent Azure Pipelines 6 ran longer than the 
> maximum time of 200 minutes. For more information, see 
> https://go.microsoft.com/fwlink/?linkid=2077134
> {code}
> and {code}##[error]The operation was canceled.{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16795) End to end tests timeout on Azure

2020-07-08 Thread Dian Fu (Jira)


[ https://issues.apache.org/jira/browse/FLINK-16795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17154117#comment-17154117 ]

Dian Fu commented on FLINK-16795:
-

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4350&view=logs&j=68a897ab-3047-5660-245a-cce8f83859f6&t=16ca2cca-2f63-5cce-12d2-d519b930a729]

> End to end tests timeout on Azure
> -
>
> Key: FLINK-16795
> URL: https://issues.apache.org/jira/browse/FLINK-16795
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Tests
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.12.0
>
> Attachments: image.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Example: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6650&view=logs&j=08866332-78f7-59e4-4f7e-49a56faa3179
>  or 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6637&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5
> {code}##[error]The job running on agent Azure Pipelines 6 ran longer than the 
> maximum time of 200 minutes. For more information, see 
> https://go.microsoft.com/fwlink/?linkid=2077134
> {code}
> and {code}##[error]The operation was canceled.{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16768) HadoopS3RecoverableWriterITCase.testRecoverWithStateWithMultiPart hangs

2020-07-08 Thread Dian Fu (Jira)


[ https://issues.apache.org/jira/browse/FLINK-16768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17154115#comment-17154115 ]

Dian Fu commented on FLINK-16768:
-

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4351&view=logs&j=3d12d40f-c62d-5ec4-6acc-0efe94cc3e89&t=e4f347ab-2a29-5d7c-3685-b0fcd2b6b051]

> HadoopS3RecoverableWriterITCase.testRecoverWithStateWithMultiPart hangs
> ---
>
> Key: FLINK-16768
> URL: https://issues.apache.org/jira/browse/FLINK-16768
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems, Tests
>Affects Versions: 1.10.0, 1.11.0, 1.12.0
>Reporter: Zhijiang
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> Logs: 
> [https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6584&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=d26b3528-38b0-53d2-05f7-37557c2405e4]
> {code:java}
> 2020-03-24T15:52:18.9196862Z "main" #1 prio=5 os_prio=0 
> tid=0x7fd36c00b800 nid=0xc21 runnable [0x7fd3743ce000]
> 2020-03-24T15:52:18.9197235Zjava.lang.Thread.State: RUNNABLE
> 2020-03-24T15:52:18.9197536Z  at 
> java.net.SocketInputStream.socketRead0(Native Method)
> 2020-03-24T15:52:18.9197931Z  at 
> java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
> 2020-03-24T15:52:18.9198340Z  at 
> java.net.SocketInputStream.read(SocketInputStream.java:171)
> 2020-03-24T15:52:18.9198749Z  at 
> java.net.SocketInputStream.read(SocketInputStream.java:141)
> 2020-03-24T15:52:18.9199171Z  at 
> sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
> 2020-03-24T15:52:18.9199840Z  at 
> sun.security.ssl.InputRecord.readV3Record(InputRecord.java:593)
> 2020-03-24T15:52:18.9200265Z  at 
> sun.security.ssl.InputRecord.read(InputRecord.java:532)
> 2020-03-24T15:52:18.9200663Z  at 
> sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:975)
> 2020-03-24T15:52:18.9201213Z  - locked <0x927583d8> (a 
> java.lang.Object)
> 2020-03-24T15:52:18.9201589Z  at 
> sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:933)
> 2020-03-24T15:52:18.9202026Z  at 
> sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
> 2020-03-24T15:52:18.9202583Z  - locked <0x92758c00> (a 
> sun.security.ssl.AppInputStream)
> 2020-03-24T15:52:18.9203029Z  at 
> org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
> 2020-03-24T15:52:18.9203558Z  at 
> org.apache.http.impl.io.SessionInputBufferImpl.read(SessionInputBufferImpl.java:198)
> 2020-03-24T15:52:18.9204121Z  at 
> org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:176)
> 2020-03-24T15:52:18.9204626Z  at 
> org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135)
> 2020-03-24T15:52:18.9205121Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> 2020-03-24T15:52:18.9205679Z  at 
> com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> 2020-03-24T15:52:18.9206164Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> 2020-03-24T15:52:18.9206786Z  at 
> com.amazonaws.services.s3.internal.S3AbortableInputStream.read(S3AbortableInputStream.java:125)
> 2020-03-24T15:52:18.9207361Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> 2020-03-24T15:52:18.9207839Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> 2020-03-24T15:52:18.9208327Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> 2020-03-24T15:52:18.9208809Z  at 
> com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> 2020-03-24T15:52:18.9209273Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> 2020-03-24T15:52:18.9210003Z  at 
> com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:107)
> 2020-03-24T15:52:18.9210658Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> 2020-03-24T15:52:18.9211154Z  at 
> org.apache.hadoop.fs.s3a.S3AInputStream.lambda$read$3(S3AInputStream.java:445)
> 2020-03-24T15:52:18.9211631Z  at 
> org.apache.hadoop.fs.s3a.S3AInputStream$$Lambda$42/1936375962.execute(Unknown 
> Source)
> 2020-03-24T15:52:18.9212044Z  at 
> org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
> 2020-03-24T15:52:18.9212553Z  at 
> org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:260)
> 2020-03-24T15:52:18.9212972Z  at 
> org.apache.hadoop.fs.s3a.Invoker$$Lambda$23/1457226878.execute(Unknown Source)
> 2020-03-24T15:52:18.9213408Z  at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:317)
> 2020-03-24T15:52:18.9213866Z  at 
> 

[GitHub] [flink] liuyongvs commented on pull request #12851: [FLINK-17425][blink-planner] supportsFilterPushDown rule in DynamicSource.

2020-07-08 Thread GitBox


liuyongvs commented on pull request #12851:
URL: https://github.com/apache/flink/pull/12851#issuecomment-655842803


   hi @wuchong , azure passed, could you spare some time to review this PR?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18525) Running yarn-session.sh occurs error

2020-07-08 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17154111#comment-17154111
 ] 

Yang Wang commented on FLINK-18525:
---

It is strange that 
{{YarnApplicationFileUploader.registerSingleLocalResource}} ends up getting the 
file status from HDFS, since it is actually a local file.

 

Have you configured {{fs.default-scheme}} to an HDFS scheme? By default, it 
is the local scheme.
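
For reference, a minimal flink-conf.yaml sketch of the setting in question (the 
namenode address below is a placeholder, not taken from this report):

{code}
# If this is set to an HDFS scheme, unqualified paths are resolved against
# HDFS instead of the local file system, which would explain the
# FileNotFoundException above for a file that only exists locally.
fs.default-scheme: hdfs://namenode-host:8020
{code}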

> Running yarn-session.sh occurs error
> 
>
> Key: FLINK-18525
> URL: https://issues.apache.org/jira/browse/FLINK-18525
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.11.0
> Environment: hadoop-2.8.5
>  
> When using flink-1.10.1/bin/yarn-session.sh, it started successfully.
>Reporter: zhangyunyun
>Priority: Major
>
> org.apache.flink.client.deployment.ClusterDeploymentException: Couldn't 
> deploy Yarn session cluster
>  at 
> org.apache.flink.yarn.YarnClusterDescriptor.deploySessionCluster(YarnClusterDescriptor.java:382)
>  at 
> org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:514)
>  at 
> org.apache.flink.yarn.cli.FlinkYarnSessionCli.lambda$main$4(FlinkYarnSessionCli.java:751)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844)
>  at 
> org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>  at 
> org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:751)
> Caused by: java.io.FileNotFoundException: File does not exist: 
> /tmp/application_1594196612035_0008-flink-conf.yaml3951184480005887817.tmp
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1444)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1437)
>  at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1437)
>  at 
> org.apache.flink.yarn.YarnApplicationFileUploader.registerSingleLocalResource(YarnApplicationFileUploader.java:163)
>  at 
> org.apache.flink.yarn.YarnClusterDescriptor.startAppMaster(YarnClusterDescriptor.java:839)
>  at 
> org.apache.flink.yarn.YarnClusterDescriptor.deployInternal(YarnClusterDescriptor.java:524)
>  at 
> org.apache.flink.yarn.YarnClusterDescriptor.deploySessionCluster(YarnClusterDescriptor.java:375)
>  ... 7 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] miaogr86 commented on pull request #12848: Release 1.11 java.lang.IllegalStateException: No ExecutorFactory found to execute the application.

2020-07-08 Thread GitBox


miaogr86 commented on pull request #12848:
URL: https://github.com/apache/flink/pull/12848#issuecomment-655831527


   @haijohn thx



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18498) Update Flink Playgrounds to 1.11

2020-07-08 Thread Seth Wiesman (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17154015#comment-17154015
 ] 

Seth Wiesman commented on FLINK-18498:
--

Fixed in master: a18385d7202f39cd3eb5e4ae7c0c91fdb6243c2c
release-1.11: a18385d7202f39cd3eb5e4ae7c0c91fdb6243c2c

> Update Flink Playgrounds to 1.11
> 
>
> Key: FLINK-18498
> URL: https://issues.apache.org/jira/browse/FLINK-18498
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation / Training / Exercises
>Affects Versions: 1.11.0
>Reporter: David Anderson
>Assignee: David Anderson
>Priority: Major
>  Labels: pull-request-available
>
> The Flink Playgrounds need to be updated to Flink 1.11.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (FLINK-18498) Update Flink Playgrounds to 1.11

2020-07-08 Thread Seth Wiesman (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seth Wiesman resolved FLINK-18498.
--
Resolution: Fixed

> Update Flink Playgrounds to 1.11
> 
>
> Key: FLINK-18498
> URL: https://issues.apache.org/jira/browse/FLINK-18498
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation / Training / Exercises
>Affects Versions: 1.11.0
>Reporter: David Anderson
>Assignee: David Anderson
>Priority: Major
>  Labels: pull-request-available
>
> The Flink Playgrounds need to be updated to Flink 1.11.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink-playgrounds] sjwiesman closed pull request #15: [FLINK-18498] Update playgrounds for 1.11 release

2020-07-08 Thread GitBox


sjwiesman closed pull request #15:
URL: https://github.com/apache/flink-playgrounds/pull/15


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18510) Job Listener Interface failed

2020-07-08 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-18510:
-
Priority: Major  (was: Blocker)

> Job Listener Interface  failed
> --
>
> Key: FLINK-18510
> URL: https://issues.apache.org/jira/browse/FLINK-18510
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.10.0
> Environment: An implementation class of this interface reports the start and 
> end of the task in the development environment, but nothing is reported in 
> the cluster environment.
>Reporter: Edsion_Lin
>Priority: Major
>  Labels: JobListener
>
>  
>  After implementing the JobListener interface and registering the listener 
> with the current job, the two methods in the listener are not called.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-18510) Job Listener Interface failed

2020-07-08 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-18510.

Resolution: Duplicate

> Job Listener Interface  failed
> --
>
> Key: FLINK-18510
> URL: https://issues.apache.org/jira/browse/FLINK-18510
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.10.0
> Environment: An implementation class of this interface reports the start and 
> end of the task in the development environment, but nothing is reported in 
> the cluster environment.
>Reporter: Edsion_Lin
>Priority: Blocker
>  Labels: JobListener
>
>  
>  After implementing the JobListener interface and registering the listener 
> with the current job, the two methods in the listener are not called.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-18097) History server doesn't clean all job json files

2020-07-08 Thread Chesnay Schepler (Jira)


[ https://issues.apache.org/jira/browse/FLINK-18097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17153865#comment-17153865 ]

Chesnay Schepler edited comment on FLINK-18097 at 7/8/20, 7:11 PM:
---

master: 78d6ee1cef6d9c0d5d28e4e228f7050de1c6c7a2
1.11: d64a5a0c2566416158c67719dc08d7623e14fad8
1.10: 0e7278929bcf08e4e51f791a34c0220709c5e6ad 


was (Author: zentol):
master: 78d6ee1cef6d9c0d5d28e4e228f7050de1c6c7a2
 1.11: d64a5a0c2566416158c67719dc08d7623e14fad8

1.10: 0e7278929bcf08e4e51f791a34c0220709c5e6ad 

> History server doesn't clean all job json files
> ---
>
> Key: FLINK-18097
> URL: https://issues.apache.org/jira/browse/FLINK-18097
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / REST
>Affects Versions: 1.10.1
>Reporter: Milan Nikl
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.10.2, 1.12.0, 1.11.1
>
>
> The improvement introduced in https://issues.apache.org/jira/browse/FLINK-14169 
> does not delete all files in the history server's folders.
> There is a [json file created for each 
> job|https://github.com/apache/flink/blob/master/flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServerArchiveFetcher.java#L237-L238]
>  in the history server's {{webDir/jobs/}} directory. Such a file is not deleted by 
> {{cleanupExpiredJobs}}.
> And while the cleaned-up job is no longer visible in the History Server's 
> {{Completed Jobs List}} in the web UI, it can still be accessed at 
> {{/#/job//overview}}.
> While this bug probably won't lead to any serious issues, the history 
> server's folders should be cleaned up thoroughly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-18097) History server doesn't clean all job json files

2020-07-08 Thread Chesnay Schepler (Jira)


[ https://issues.apache.org/jira/browse/FLINK-18097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17153865#comment-17153865 ]

Chesnay Schepler edited comment on FLINK-18097 at 7/8/20, 7:11 PM:
---

master: 78d6ee1cef6d9c0d5d28e4e228f7050de1c6c7a2
 1.11: d64a5a0c2566416158c67719dc08d7623e14fad8

1.10: 0e7278929bcf08e4e51f791a34c0220709c5e6ad 


was (Author: zentol):
master: 78d6ee1cef6d9c0d5d28e4e228f7050de1c6c7a2
1.11:d64a5a0c2566416158c67719dc08d7623e14fad8

1.10: 0e7278929bcf08e4e51f791a34c0220709c5e6ad 

> History server doesn't clean all job json files
> ---
>
> Key: FLINK-18097
> URL: https://issues.apache.org/jira/browse/FLINK-18097
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / REST
>Affects Versions: 1.10.1
>Reporter: Milan Nikl
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.10.2, 1.12.0, 1.11.1
>
>
> The improvement introduced in https://issues.apache.org/jira/browse/FLINK-14169 
> does not delete all files in the history server's folders.
> There is a [json file created for each 
> job|https://github.com/apache/flink/blob/master/flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServerArchiveFetcher.java#L237-L238]
>  in the history server's {{webDir/jobs/}} directory. Such a file is not deleted by 
> {{cleanupExpiredJobs}}.
> And while the cleaned-up job is no longer visible in the History Server's 
> {{Completed Jobs List}} in the web UI, it can still be accessed at 
> {{/#/job//overview}}.
> While this bug probably won't lead to any serious issues, the history 
> server's folders should be cleaned up thoroughly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-18097) History server doesn't clean all job json files

2020-07-08 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-18097.

Fix Version/s: 1.11.1
   1.12.0
   1.10.2
   Resolution: Fixed

master: 78d6ee1cef6d9c0d5d28e4e228f7050de1c6c7a2
1.11:d64a5a0c2566416158c67719dc08d7623e14fad8

1.10: 0e7278929bcf08e4e51f791a34c0220709c5e6ad 

> History server doesn't clean all job json files
> ---
>
> Key: FLINK-18097
> URL: https://issues.apache.org/jira/browse/FLINK-18097
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / REST
>Affects Versions: 1.10.1
>Reporter: Milan Nikl
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.10.2, 1.12.0, 1.11.1
>
>
> The improvement introduced in https://issues.apache.org/jira/browse/FLINK-14169 
> does not delete all files in the history server's folders.
> There is a [json file created for each 
> job|https://github.com/apache/flink/blob/master/flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServerArchiveFetcher.java#L237-L238]
>  in the history server's {{webDir/jobs/}} directory. Such a file is not deleted by 
> {{cleanupExpiredJobs}}.
> And while the cleaned-up job is no longer visible in the History Server's 
> {{Completed Jobs List}} in the web UI, it can still be accessed at 
> {{/#/job//overview}}.
> While this bug probably won't lead to any serious issues, the history 
> server's folders should be cleaned up thoroughly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16463) CodeGenUtils generates code that has two semicolons for GroupingWindowAggsHandler in blink

2020-07-08 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-16463:
-
Issue Type: Bug  (was: Wish)

> CodeGenUtils generates code that has two semicolons for 
> GroupingWindowAggsHandler in blink 
> ---
>
> Key: FLINK-16463
> URL: https://issues.apache.org/jira/browse/FLINK-16463
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: hehuiyuan
>Assignee: hehuiyuan
>Priority: Minor
> Fix For: 1.11.0
>
> Attachments: image-2020-03-06-20-43-20-300.png, 
> image-2020-03-06-20-44-16-446.png
>
>
> !image-2020-03-06-20-43-20-300.png|width=452,height=297!
>  
> !image-2020-03-06-20-44-16-446.png|width=513,height=282!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #12857: [FLINK-18520][table] Fix unresolvable catalog table functions

2020-07-08 Thread GitBox


flinkbot edited a comment on pull request #12857:
URL: https://github.com/apache/flink/pull/12857#issuecomment-655543169


   
   ## CI report:
   
   * d52c63e8e43456629c23a571eefcb60bd26a11be Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4348)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-15617) Remove useless JobRetrievalException

2020-07-08 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-15617:
-
Issue Type: Improvement  (was: Wish)

> Remove useless JobRetrievalException
> 
>
> Key: FLINK-15617
> URL: https://issues.apache.org/jira/browse/FLINK-15617
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, the exception class {{JobRetrievalException}} is not used 
> anywhere in the Flink codebase. IMO, we can remove it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15601) Remove useless constant field NUM_STOP_CALL_TRIES in Execution

2020-07-08 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-15601:
-
Issue Type: Improvement  (was: Wish)

> Remove useless constant field NUM_STOP_CALL_TRIES in Execution
> --
>
> Key: FLINK-15601
> URL: https://issues.apache.org/jira/browse/FLINK-15601
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Task
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, the constant field {{NUM_STOP_CALL_TRIES}} in {{Execution}} is not 
> being used. IMO, we can remove it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15558) Bump Elasticsearch version from 7.3.2 to 7.5.1 for es7 connector

2020-07-08 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-15558:
-
Issue Type: Improvement  (was: Wish)

> Bump Elasticsearch version from 7.3.2 to 7.5.1 for es7 connector
> 
>
> Key: FLINK-15558
> URL: https://issues.apache.org/jira/browse/FLINK-15558
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch
>Reporter: vinoyang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> It would be better to track the newest ES 7.x client version just like we 
> have done for the Kafka universal connector.
> Currently, the ES7 connector tracks version 7.3.2 and the latest ES 7.x 
> version is 7.5.1. We can upgrade it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14351) Refactor MetricRegistry delimiter retrieval into separate interface

2020-07-08 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-14351:
-
Fix Version/s: (was: 1.11.0)
   1.12.0

> Refactor MetricRegistry delimiter retrieval into separate interface
> ---
>
> Key: FLINK-14351
> URL: https://issues.apache.org/jira/browse/FLINK-14351
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Metrics
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The MetricRegistry offers a few methods for retrieving configured delimiters, 
> which are used a fair bit during scope operations; however, other methods 
> aren't being used in these contexts.
> Hence we could reduce access and simplify testing by introducing a dedicated 
> interface for these methods that the registry extends.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-12336) Add HTTPS support to InfluxDB reporter

2020-07-08 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-12336:
-
Fix Version/s: (was: 1.11.0)
   1.12.0

> Add HTTPS support to InfluxDB reporter
> --
>
> Key: FLINK-12336
> URL: https://issues.apache.org/jira/browse/FLINK-12336
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Metrics
>Affects Versions: 1.8.0
>Reporter: Etienne Carriere
>Assignee: Etienne Carriere
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, the flink-metrics-influxdb connector works only with HTTP InfluxDB 
> endpoints. 
> Proposal: support HTTPS InfluxDB endpoints



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-16795) End to end tests timeout on Azure

2020-07-08 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17153821#comment-17153821
 ] 

Dawid Wysakowicz edited comment on FLINK-16795 at 7/8/20, 6:03 PM:
---

New instances:
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4334&view=logs&j=ae4f8708-9994-57d3-c2d7-b892156e7812&t=c88eea3b-64a0-564d-0031-9fdcd7b8abee
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4333&view=logs&j=ae4f8708-9994-57d3-c2d7-b892156e7812&t=c88eea3b-64a0-564d-0031-9fdcd7b8abee
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4340&view=results

Just an observation: when this happens I cannot see any logs in the e2e 
view. (Maybe it's just me?)


was (Author: dawidwys):
New instances:
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4334&view=logs&j=ae4f8708-9994-57d3-c2d7-b892156e7812&t=c88eea3b-64a0-564d-0031-9fdcd7b8abee
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4333&view=logs&j=ae4f8708-9994-57d3-c2d7-b892156e7812&t=c88eea3b-64a0-564d-0031-9fdcd7b8abee

Just an observation: when this happens I cannot see any logs in the e2e 
view. (Maybe it's just me?)

> End to end tests timeout on Azure
> -
>
> Key: FLINK-16795
> URL: https://issues.apache.org/jira/browse/FLINK-16795
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Tests
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.11.0
>
> Attachments: image.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Example: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6650&view=logs&j=08866332-78f7-59e4-4f7e-49a56faa3179
>  or 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6637&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5
> {code}##[error]The job running on agent Azure Pipelines 6 ran longer than the 
> maximum time of 200 minutes. For more information, see 
> https://go.microsoft.com/fwlink/?linkid=2077134
> {code}
> and {code}##[error]The operation was canceled.{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (FLINK-16795) End to end tests timeout on Azure

2020-07-08 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz reopened FLINK-16795:
--

> End to end tests timeout on Azure
> -
>
> Key: FLINK-16795
> URL: https://issues.apache.org/jira/browse/FLINK-16795
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.11.0
>
> Attachments: image.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Example: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6650&view=logs&j=08866332-78f7-59e4-4f7e-49a56faa3179
>  or 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6637&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5
> {code}##[error]The job running on agent Azure Pipelines 6 ran longer than the 
> maximum time of 200 minutes. For more information, see 
> https://go.microsoft.com/fwlink/?linkid=2077134
> {code}
> and {code}##[error]The operation was canceled.{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16795) End to end tests timeout on Azure

2020-07-08 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-16795:
-
Affects Version/s: 1.12.0

> End to end tests timeout on Azure
> -
>
> Key: FLINK-16795
> URL: https://issues.apache.org/jira/browse/FLINK-16795
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Tests
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.11.0
>
> Attachments: image.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Example: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6650&view=logs&j=08866332-78f7-59e4-4f7e-49a56faa3179
>  or 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6637&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5
> {code}##[error]The job running on agent Azure Pipelines 6 ran longer than the 
> maximum time of 200 minutes. For more information, see 
> https://go.microsoft.com/fwlink/?linkid=2077134
> {code}
> and {code}##[error]The operation was canceled.{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16795) End to end tests timeout on Azure

2020-07-08 Thread Dawid Wysakowicz (Jira)


[ https://issues.apache.org/jira/browse/FLINK-16795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17153821#comment-17153821 ]

Dawid Wysakowicz commented on FLINK-16795:
--

New instances:
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4334&view=logs&j=ae4f8708-9994-57d3-c2d7-b892156e7812&t=c88eea3b-64a0-564d-0031-9fdcd7b8abee
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4333&view=logs&j=ae4f8708-9994-57d3-c2d7-b892156e7812&t=c88eea3b-64a0-564d-0031-9fdcd7b8abee

Just an observation: when this happens I cannot see any logs in the e2e 
view. (Maybe it's just me?)

> End to end tests timeout on Azure
> -
>
> Key: FLINK-16795
> URL: https://issues.apache.org/jira/browse/FLINK-16795
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.11.0
>
> Attachments: image.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Example: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6650&view=logs&j=08866332-78f7-59e4-4f7e-49a56faa3179
>  or 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6637&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5
> {code}##[error]The job running on agent Azure Pipelines 6 ran longer than the 
> maximum time of 200 minutes. For more information, see 
> https://go.microsoft.com/fwlink/?linkid=2077134
> {code}
> and {code}##[error]The operation was canceled.{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16510) Task manager safeguard shutdown may not be reliable

2020-07-08 Thread Thomas Weise (Jira)


[ https://issues.apache.org/jira/browse/FLINK-16510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17153813#comment-17153813 ]

Thomas Weise commented on FLINK-16510:
--

We are not able to reliably run our applications on k8s when pods get stuck 
during termination on a fatal task manager error. When pods don't exit, our 
infrastructure cannot replace the task manager and applications cannot recover. 
We have seen this issue many times and we were able to reproduce it with 
benchmarks that produce intermittent OOMs. Based on the analysis from [~mxm], we 
have applied this change to our fork:

[https://github.com/lyft/flink/commit/4787e4d638c5b299164b85e7e492967bf573c400]

We would like to address this issue upstream though. When a fatal error occurs, 
the process should safely terminate. Triggering shutdown hooks is unlikely to 
succeed. It is important that we get a fresh TM deployed to allow for job 
recovery and forward progress (avoid extended downtime and need for manual 
intervention).

Do you see any downside to using the hard stop instead of System.exit?

Currently, there are multiple occurrences of System.exit - for everything that 
aims to "exitOnFatalError", it would be nice to centralize this. 
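
As a self-contained illustration of the failure mode (plain Java, not Flink 
code): System.exit() runs shutdown hooks and can block forever on a stuck hook, 
while Runtime.halt() terminates the JVM immediately without running them.

{code:java}
public final class HaltVsExitDemo {

    public static void main(String[] args) {
        // Simulate a shutdown hook that never completes (e.g. blocked on I/O).
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            while (true) {
                try {
                    Thread.sleep(1_000L);
                } catch (InterruptedException ignored) {
                    // keep hanging to simulate a stuck hook
                }
            }
        }));

        // System.exit(1) would trigger the hook above and never return.
        // Runtime.halt() bypasses shutdown hooks entirely, so the process
        // terminates even while a hook is stuck.
        Runtime.getRuntime().halt(1);
    }
}
{code}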

> Task manager safeguard shutdown may not be reliable
> ---
>
> Key: FLINK-16510
> URL: https://issues.apache.org/jira/browse/FLINK-16510
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Reporter: Maximilian Michels
>Assignee: Maximilian Michels
>Priority: Major
>
> The {{JvmShutdownSafeguard}} does not always succeed but can hang when 
> multiple threads attempt to shutdown the JVM. Apparently mixing 
> {{System.exit()}} with ShutdownHooks and forcefully terminating the JVM via 
> {{Runtime.halt()}} does not play together well:
> {noformat}
> "Jvm Terminator" #22 daemon prio=5 os_prio=0 tid=0x7fb8e82f2800 
> nid=0x5a96 runnable [0x7fb35cffb000]
>java.lang.Thread.State: RUNNABLE
>   at java.lang.Shutdown.$$YJP$$halt0(Native Method)
>   at java.lang.Shutdown.halt0(Shutdown.java)
>   at java.lang.Shutdown.halt(Shutdown.java:139)
>   - locked <0x00047ed67638> (a java.lang.Shutdown$Lock)
>   at java.lang.Runtime.halt(Runtime.java:276)
>   at 
> org.apache.flink.runtime.util.JvmShutdownSafeguard$DelayedTerminator.run(JvmShutdownSafeguard.java:86)
>   at java.lang.Thread.run(Thread.java:748)
>Locked ownable synchronizers:
>   - None
> "FlinkCompletableFutureDelayScheduler-thread-1" #18154 daemon prio=5 
> os_prio=0 tid=0x7fb708a7d000 nid=0x5a8a waiting for monitor entry 
> [0x7fb289d49000]
>java.lang.Thread.State: BLOCKED (on object monitor)
>   at java.lang.Shutdown.halt(Shutdown.java:139)
>   - waiting to lock <0x00047ed67638> (a java.lang.Shutdown$Lock)
>   at java.lang.Shutdown.exit(Shutdown.java:213)
>   - locked <0x00047edb7348> (a java.lang.Class for java.lang.Shutdown)
>   at java.lang.Runtime.exit(Runtime.java:110)
>   at java.lang.System.exit(System.java:973)
>   at 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner.terminateJVM(TaskManagerRunner.java:266)
>   at 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner.lambda$onFatalError$1(TaskManagerRunner.java:260)
>   at 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunner$$Lambda$27464/1464672548.accept(Unknown
>  Source)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990)
>   at 
> org.apache.flink.runtime.concurrent.FutureUtils$Timeout.run(FutureUtils.java:943)
>   at 
> org.apache.flink.runtime.concurrent.DirectExecutorService.execute(DirectExecutorService.java:211)
>   at 
> org.apache.flink.runtime.concurrent.FutureUtils.lambda$orTimeout$11(FutureUtils.java:361)
>   at 
> org.apache.flink.runtime.concurrent.FutureUtils$$Lambda$27435/159015392.run(Unknown
>  Source)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> 

[GitHub] [flink] flinkbot edited a comment on pull request #12856: [FLINK-18529][hive] Query Hive table and filter by timestamp partitio…

2020-07-08 Thread GitBox


flinkbot edited a comment on pull request #12856:
URL: https://github.com/apache/flink/pull/12856#issuecomment-655543024


   
   ## CI report:
   
   * fc30e2ca58b435e43e6b569f0c21347667c28c8f Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4347)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-18532) Remove Beta tag from MATCH_RECOGNIZE docs

2020-07-08 Thread Seth Wiesman (Jira)
Seth Wiesman created FLINK-18532:


 Summary: Remove Beta tag from MATCH_RECOGNIZE docs
 Key: FLINK-18532
 URL: https://issues.apache.org/jira/browse/FLINK-18532
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 1.12.0
Reporter: Seth Wiesman
Assignee: Seth Wiesman






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18531) Minicluster option to reuse same port if 0 is applied

2020-07-08 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-18531:
-
Component/s: Runtime / REST
 Runtime / Configuration

> Minicluster option to reuse same port if 0 is applied
> -
>
> Key: FLINK-18531
> URL: https://issues.apache.org/jira/browse/FLINK-18531
> Project: Flink
>  Issue Type: Wish
>  Components: Runtime / Configuration, Runtime / REST
>Reporter: David Chen
>Priority: Minor
>
> If the minicluster port is set to 0, could it be made to reuse the same 
> port after it's been closed and started again? The reason for this use case 
> is simply that we have the port set to 0, but we'd like to restart the 
> minicluster in order to get rid of completed/expired jobs. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-18531) Minicluster option to reuse same port if 0 is applied

2020-07-08 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-18531.

Resolution: Won't Fix

This is not possible.

If you want the port to remain stable, you will have to configure it explicitly.

Note that port ranges can also be configured, and the ports in the range will 
be used in order; i.e., if you configure 10000-10005 and no other process ever 
uses port 10000, Flink will always use it.
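
For example, a sketch of such a range in flink-conf.yaml (the concrete values 
are illustrative, and this assumes the MiniCluster picks up rest.bind-port from 
its configuration):

{code}
# Ports in the range are tried in order, so port 10000 is reused as long as
# no other process has grabbed it in the meantime.
rest.bind-port: 10000-10005
{code}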

> Minicluster option to reuse same port if 0 is applied
> -
>
> Key: FLINK-18531
> URL: https://issues.apache.org/jira/browse/FLINK-18531
> Project: Flink
>  Issue Type: Wish
>Reporter: David Chen
>Priority: Minor
>
> If the minicluster port is set to 0, could it be made to reuse the same 
> port after it's been closed and started again? The reason for this use case 
> is simply that we have the port set to 0, but we'd like to restart the 
> minicluster in order to get rid of completed/expired jobs. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #12851: [FLINK-17425][blink-planner] supportsFilterPushDown rule in DynamicSource.

2020-07-08 Thread GitBox


flinkbot edited a comment on pull request #12851:
URL: https://github.com/apache/flink/pull/12851#issuecomment-655384097


   
   ## CI report:
   
   * 7a1f2dd7590a3b1816e619db2c80d543cbdcf7d2 UNKNOWN
   * b79337059dea45b43d88cf3e4b07309dcd711d2b Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4342)
 
   * 423bccc86580b6a811598fb1dc2f27fce607a1e2 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4336)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18531) Minicluster option to reuse same port if 0 is applied

2020-07-08 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-18531:
-
Issue Type: Wish  (was: Test)

> Minicluster option to reuse same port if 0 is applied
> -
>
> Key: FLINK-18531
> URL: https://issues.apache.org/jira/browse/FLINK-18531
> Project: Flink
>  Issue Type: Wish
>Reporter: David Chen
>Priority: Minor
>
> If the minicluster port is set to 0, could it be made to reuse the same 
> port after it's been closed and started again? The reason for this use case 
> is simply that we have the port set to 0, but we'd like to restart the 
> minicluster in order to get rid of completed/expired jobs. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18531) Minicluster option to reuse same port if 0 is applied

2020-07-08 Thread David Chen (Jira)
David Chen created FLINK-18531:
--

 Summary: Minicluster option to reuse same port if 0 is applied
 Key: FLINK-18531
 URL: https://issues.apache.org/jira/browse/FLINK-18531
 Project: Flink
  Issue Type: Test
Reporter: David Chen


If the minicluster port is set to 0, could it be made to reuse the same port 
after it's been closed and started again? The reason for this use case is 
simply that we have the port set to 0, but we'd like to restart the minicluster 
in order to get rid of completed/expired jobs. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #12846: [FLINK-18448][pubsub] Update Google Cloud PubSub dependencies

2020-07-08 Thread GitBox


flinkbot edited a comment on pull request #12846:
URL: https://github.com/apache/flink/pull/12846#issuecomment-654978676


   
   ## CI report:
   
   * 5b3a4da61a337a80a70a5125ed135d3095def41e Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4337)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12852: [FLINK-17000][table] Ensure that every logical type can be represented as TypeInformation

2020-07-08 Thread GitBox


flinkbot edited a comment on pull request #12852:
URL: https://github.com/apache/flink/pull/12852#issuecomment-655468923


   
   ## CI report:
   
   * 5b8d05081345063348bad6375c2567a0ebf59bed Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4338)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18520) New Table Function type inference fails

2020-07-08 Thread Benchao Li (Jira)


[ https://issues.apache.org/jira/browse/FLINK-18520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17153698#comment-17153698 ]

Benchao Li commented on FLINK-18520:


[~twalthr] Thanks for your quick fix, I've reviewed the PR.

> New Table Function type inference fails
> ---
>
> Key: FLINK-18520
> URL: https://issues.apache.org/jira/browse/FLINK-18520
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0
>Reporter: Benchao Li
>Assignee: Timo Walther
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0, 1.11.1
>
>
> For a simple UDTF like 
> {code:java}
> public class Split extends TableFunction<String> {
>   public Split(){}
>   public void eval(String str, String ch) {
>   if (str == null || str.isEmpty()) {
>   return;
>   } else {
>   String[] ss = str.split(ch);
>   for (String s : ss) {
>   collect(s);
>   }
>   }
>   }
>   }
> {code}
> register it using new function type inference 
> {{tableEnv.createFunction("my_split", Split.class);}} and using it in a 
> simple query will fail with following exception:
> {code:java}
> Exception in thread "main" org.apache.flink.table.api.ValidationException: 
> SQL validation failed. From line 1, column 93 to line 1, column 115: No match 
> found for function signature my_split(<CHARACTER>, <CHARACTER>)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:146)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:108)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:187)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlInsert(SqlToOperationConverter.java:527)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:204)
>   at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:78)
>   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlUpdate(TableEnvironmentImpl.java:716)
>   at com.bytedance.demo.SqlTest.main(SqlTest.java:64)
> Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
> column 93 to line 1, column 115: No match found for function signature 
> my_split(<CHARACTER>, <CHARACTER>)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:457)
>   at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:839)
>   at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:824)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.newValidationError(SqlValidatorImpl.java:5089)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.handleUnresolvedFunction(SqlValidatorImpl.java:1882)
>   at org.apache.calcite.sql.SqlFunction.deriveType(SqlFunction.java:305)
>   at org.apache.calcite.sql.SqlFunction.deriveType(SqlFunction.java:218)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5858)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5845)
>   at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:139)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.deriveTypeImpl(SqlValidatorImpl.java:1800)
>   at 
> org.apache.calcite.sql.validate.ProcedureNamespace.validateImpl(ProcedureNamespace.java:57)
>   at 
> org.apache.calcite.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:84)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:1110)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:1084)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateFrom(SqlValidatorImpl.java:3256)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateFrom(SqlValidatorImpl.java:3238)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateJoin(SqlValidatorImpl.java:3303)
> 

[GitHub] [flink] libenchao commented on a change in pull request #12857: [FLINK-18520][table] Fix unresolvable catalog table functions

2020-07-08 Thread GitBox


libenchao commented on a change in pull request #12857:
URL: https://github.com/apache/flink/pull/12857#discussion_r451634594



##
File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/catalog/FunctionCatalogOperatorTable.java
##
@@ -208,11 +209,17 @@ private boolean verifyFunctionKind(
// it would be nice to give a more meaningful exception when a scalar function is used instead
// of a table function and vice versa, but we can do that only once FLIP-51 is implemented
 
-   if (definition.getKind() == FunctionKind.SCALAR &&
-   (category == SqlFunctionCategory.USER_DEFINED_FUNCTION || category == SqlFunctionCategory.SYSTEM)) {
+   if (definition.getKind() == FunctionKind.SCALAR) {
+   if (category != null && category.isTableFunction()) {
+   throw new ValidationException(
+   String.format(
+   "Function '%s' cannot be used as a table function.",
+   identifier.asSummaryString()
+   )
+   );
+   }
    return true;
-   } else if (definition.getKind() == FunctionKind.TABLE &&
-   (category == SqlFunctionCategory.USER_DEFINED_TABLE_FUNCTION || category == SqlFunctionCategory.SYSTEM)) {
+   } else if (definition.getKind() == FunctionKind.TABLE) {

Review comment:
   How about we also check that `category` is a table function in this 
branch?  
   I found that if we use a table function like "SELECT my_udtf(col) FROM T", 
the exception is a little weird.
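
   For concreteness, one possible shape of the suggested symmetric check (a 
sketch against the diff above, not the final patch):
   ```java
   } else if (definition.getKind() == FunctionKind.TABLE) {
       // Mirror the SCALAR branch: reject a scalar-function call site for a
       // table function instead of failing later with an unrelated error.
       if (category != null && !category.isTableFunction()) {
           throw new ValidationException(
               String.format(
                   "Function '%s' cannot be used as a scalar function.",
                   identifier.asSummaryString()));
       }
       return true;
   }
   ```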





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

2020-07-08 Thread GitBox


flinkbot edited a comment on pull request #12420:
URL: https://github.com/apache/flink/pull/12420#issuecomment-636504607


   
   ## CI report:
   
   * d0f0b15cc5289803cdbde65b26bc66f0542da5f1 UNKNOWN
   * bc4b8b49834d751271c7f0976f62f91923217420 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4349)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18499) Update Flink Exercises to 1.11

2020-07-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-18499:
---
Labels: pull-request-available  (was: )

> Update Flink Exercises to 1.11
> --
>
> Key: FLINK-18499
> URL: https://issues.apache.org/jira/browse/FLINK-18499
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation / Training / Exercises
>Affects Versions: 1.11.0
>Reporter: David Anderson
>Assignee: David Anderson
>Priority: Major
>  Labels: pull-request-available
>
> The training exercises need to be updated for Flink 1.11. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink-training] alpinegizmo opened a new pull request #12: [FLINK-18499] Update for Flink 1.11: don’t use deprecated forms of keyBy

2020-07-08 Thread GitBox


alpinegizmo opened a new pull request #12:
URL: https://github.com/apache/flink-training/pull/12


   Clean up the keyBys for Flink 1.11.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-playgrounds] sjwiesman closed pull request #14: [hotfix][walkthrough] Update Table API Walkthrough

2020-07-08 Thread GitBox


sjwiesman closed pull request #14:
URL: https://github.com/apache/flink-playgrounds/pull/14


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-playgrounds] sjwiesman commented on pull request #14: [hotfix][walkthrough] Update Table API Walkthrough

2020-07-08 Thread GitBox


sjwiesman commented on pull request #14:
URL: https://github.com/apache/flink-playgrounds/pull/14#issuecomment-655590061


   Thank you 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12856: [FLINK-18529][hive] Query Hive table and filter by timestamp partitio…

2020-07-08 Thread GitBox


flinkbot edited a comment on pull request #12856:
URL: https://github.com/apache/flink/pull/12856#issuecomment-655543024


   
   ## CI report:
   
   * fc30e2ca58b435e43e6b569f0c21347667c28c8f Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4347)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12857: [FLINK-18520][table] Fix unresolvable catalog table functions

2020-07-08 Thread GitBox


flinkbot edited a comment on pull request #12857:
URL: https://github.com/apache/flink/pull/12857#issuecomment-655543169


   
   ## CI report:
   
   * d52c63e8e43456629c23a571eefcb60bd26a11be Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4348)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

2020-07-08 Thread GitBox


flinkbot edited a comment on pull request #12420:
URL: https://github.com/apache/flink/pull/12420#issuecomment-636504607


   
   ## CI report:
   
   * d0f0b15cc5289803cdbde65b26bc66f0542da5f1 UNKNOWN
   * 4dbc7b88a9fdf589c0c339378576cdda755fd77c Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3986)
 
   * bc4b8b49834d751271c7f0976f62f91923217420 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zentol closed pull request #12844: [FLINK-18519][REST] Propagate exception to client when application fails to execute.

2020-07-08 Thread GitBox


zentol closed pull request #12844:
URL: https://github.com/apache/flink/pull/12844


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

2020-07-08 Thread GitBox


authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r451581064



##
File path: docs/dev/table/streaming/joins.zh.md
##
@@ -22,37 +22,38 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Joins are a common and well-understood operation in batch data processing to 
connect the rows of two relations. However, the semantics of joins on [dynamic 
tables](dynamic_tables.html) are much less obvious or even confusing.
+Join 在批数据处理中是比较常见且广为人知的运算,一般用于连接两张关系表。然而在[动态表]({%link 
dev/table/streaming/dynamic_tables.zh.md %})中 Join 的语义会难以理解甚至让人困惑。
 
-Because of that, there are a couple of ways to actually perform a join using 
either Table API or SQL.
+因而,Flink 提供了几种基于 Table API 和 SQL 的 Join 方法。
 
-For more information regarding the syntax, please check the join sections in 
[Table API](../tableApi.html#joins) and [SQL]({{ site.baseurl 
}}/dev/table/sql/queries.html#joins).
+欲获取更多关于 Join 语法的细节,请参考 [Table API]({%link dev/table/sql/tableApi.zh.md 
%}#joins) 和 [SQL]({%link dev/table/sql/queries.zh.md %}#joins) 中的 Join 章节。

Review comment:
   Done, thanks for reviewing.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12853: [FLINK-18524][table-common] Fix type inference for Scala varargs

2020-07-08 Thread GitBox


flinkbot edited a comment on pull request #12853:
URL: https://github.com/apache/flink/pull/12853#issuecomment-655469013


   
   ## CI report:
   
   * dac055b0d95b136897bd699014290322c4dc22ce Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4339)
 
   * c7ca84b229988ddc46b9b7c110519b74785e10a4 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4346)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12857: [FLINK-18520][table] Fix unresolvable catalog table functions

2020-07-08 Thread GitBox


flinkbot commented on pull request #12857:
URL: https://github.com/apache/flink/pull/12857#issuecomment-655543169


   
   ## CI report:
   
   * d52c63e8e43456629c23a571eefcb60bd26a11be UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12856: [FLINK-18529][hive] Query Hive table and filter by timestamp partitio…

2020-07-08 Thread GitBox


flinkbot commented on pull request #12856:
URL: https://github.com/apache/flink/pull/12856#issuecomment-655543024


   
   ## CI report:
   
   * fc30e2ca58b435e43e6b569f0c21347667c28c8f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12853: [FLINK-18524][table-common] Fix type inference for Scala varargs

2020-07-08 Thread GitBox


flinkbot edited a comment on pull request #12853:
URL: https://github.com/apache/flink/pull/12853#issuecomment-655469013


   
   ## CI report:
   
   * dac055b0d95b136897bd699014290322c4dc22ce Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4339)
 
   * c7ca84b229988ddc46b9b7c110519b74785e10a4 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18520) New Table Function type inference fails

2020-07-08 Thread Timo Walther (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17153617#comment-17153617
 ] 

Timo Walther commented on FLINK-18520:
--

I opened a PR for this problem. [~libenchao] could you review this?
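For context, a query of the shape that triggers the problem might look like this (the table `T` and column `a` are assumed for illustration; they are not from the report):

{code:java}
// Assumed illustration: register the UDTF from the issue and use it in SQL.
tableEnv.createFunction("my_split", Split.class);
Table result = tableEnv.sqlQuery(
	"SELECT a, s FROM T, LATERAL TABLE(my_split(a, ',')) AS tf(s)");
{code}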

> New Table Function type inference fails
> ---
>
> Key: FLINK-18520
> URL: https://issues.apache.org/jira/browse/FLINK-18520
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0
>Reporter: Benchao Li
>Assignee: Timo Walther
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0, 1.11.1
>
>
> For a simple UDTF like 
> {code:java}
> public class Split extends TableFunction<String> {
>   public Split(){}
>   public void eval(String str, String ch) {
>   if (str == null || str.isEmpty()) {
>   return;
>   } else {
>   String[] ss = str.split(ch);
>   for (String s : ss) {
>   collect(s);
>   }
>   }
>   }
>   }
> {code}
> register it using new function type inference 
> {{tableEnv.createFunction("my_split", Split.class);}} and using it in a 
> simple query will fail with the following exception:
> {code:java}
> Exception in thread "main" org.apache.flink.table.api.ValidationException: 
> SQL validation failed. From line 1, column 93 to line 1, column 115: No match 
> found for function signature my_split(, )
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:146)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:108)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:187)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlInsert(SqlToOperationConverter.java:527)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:204)
>   at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:78)
>   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlUpdate(TableEnvironmentImpl.java:716)
>   at com.bytedance.demo.SqlTest.main(SqlTest.java:64)
> Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
> column 93 to line 1, column 115: No match found for function signature 
> my_split(, )
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:457)
>   at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:839)
>   at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:824)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.newValidationError(SqlValidatorImpl.java:5089)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.handleUnresolvedFunction(SqlValidatorImpl.java:1882)
>   at org.apache.calcite.sql.SqlFunction.deriveType(SqlFunction.java:305)
>   at org.apache.calcite.sql.SqlFunction.deriveType(SqlFunction.java:218)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5858)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5845)
>   at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:139)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.deriveTypeImpl(SqlValidatorImpl.java:1800)
>   at 
> org.apache.calcite.sql.validate.ProcedureNamespace.validateImpl(ProcedureNamespace.java:57)
>   at 
> org.apache.calcite.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:84)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:1110)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:1084)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateFrom(SqlValidatorImpl.java:3256)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateFrom(SqlValidatorImpl.java:3238)
>   at 
> 

[GitHub] [flink] flinkbot commented on pull request #12857: [FLINK-18520][table] Fix unresolvable catalog table functions

2020-07-08 Thread GitBox


flinkbot commented on pull request #12857:
URL: https://github.com/apache/flink/pull/12857#issuecomment-655528313


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit d52c63e8e43456629c23a571eefcb60bd26a11be (Wed Jul 08 
13:42:29 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12856: [FLINK-18529][hive] Query Hive table and filter by timestamp partitio…

2020-07-08 Thread GitBox


flinkbot commented on pull request #12856:
URL: https://github.com/apache/flink/pull/12856#issuecomment-655527283


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit fc30e2ca58b435e43e6b569f0c21347667c28c8f (Wed Jul 08 
13:40:48 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-18529).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18520) New Table Function type inference fails

2020-07-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-18520:
---
Labels: pull-request-available  (was: )

> New Table Function type inference fails
> ---
>
> Key: FLINK-18520
> URL: https://issues.apache.org/jira/browse/FLINK-18520
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0
>Reporter: Benchao Li
>Assignee: Timo Walther
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0, 1.11.1
>
>
> For a simple UDTF like 
> {code:java}
> public class Split extends TableFunction<String> {
>   public Split(){}
>   public void eval(String str, String ch) {
>   if (str == null || str.isEmpty()) {
>   return;
>   } else {
>   String[] ss = str.split(ch);
>   for (String s : ss) {
>   collect(s);
>   }
>   }
>   }
>   }
> {code}
> register it using new function type inference 
> {{tableEnv.createFunction("my_split", Split.class);}} and using it in a 
> simple query will fail with the following exception:
> {code:java}
> Exception in thread "main" org.apache.flink.table.api.ValidationException: 
> SQL validation failed. From line 1, column 93 to line 1, column 115: No match 
> found for function signature my_split(, )
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:146)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:108)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:187)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlInsert(SqlToOperationConverter.java:527)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:204)
>   at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:78)
>   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlUpdate(TableEnvironmentImpl.java:716)
>   at com.bytedance.demo.SqlTest.main(SqlTest.java:64)
> Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
> column 93 to line 1, column 115: No match found for function signature 
> my_split(, )
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:457)
>   at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:839)
>   at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:824)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.newValidationError(SqlValidatorImpl.java:5089)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.handleUnresolvedFunction(SqlValidatorImpl.java:1882)
>   at org.apache.calcite.sql.SqlFunction.deriveType(SqlFunction.java:305)
>   at org.apache.calcite.sql.SqlFunction.deriveType(SqlFunction.java:218)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5858)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:5845)
>   at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:139)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.deriveTypeImpl(SqlValidatorImpl.java:1800)
>   at 
> org.apache.calcite.sql.validate.ProcedureNamespace.validateImpl(ProcedureNamespace.java:57)
>   at 
> org.apache.calcite.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:84)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:1110)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:1084)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateFrom(SqlValidatorImpl.java:3256)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateFrom(SqlValidatorImpl.java:3238)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateJoin(SqlValidatorImpl.java:3303)
>   at 
> 

[jira] [Updated] (FLINK-18529) Query Hive table and filter by timestamp partition can fail

2020-07-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-18529:
---
Labels: pull-request-available  (was: )

> Query Hive table and filter by timestamp partition can fail
> ---
>
> Key: FLINK-18529
> URL: https://issues.apache.org/jira/browse/FLINK-18529
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: Rui Li
>Priority: Major
>  Labels: pull-request-available
>
> The following example
> {code}
> create table foo (x int) partitioned by (ts timestamp);
> select x from foo where timestamp '2020-07-08 13:08:14' = ts;
> {code}
> fails with
> {noformat}
> CatalogException: HiveCatalog currently only supports timestamp of precision 9
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] twalthr opened a new pull request #12857: [FLINK-18520][table] Fix unresolvable catalog table functions

2020-07-08 Thread GitBox


twalthr opened a new pull request #12857:
URL: https://github.com/apache/flink/pull/12857


   ## What is the purpose of the change
   
   Improves the validation logic for scalar/table function usage. The previous check made catalog functions unresolvable.
   
   ## Brief change log
   
   Remove checking for SQL function category.
   
   ## Verifying this change
   
   This change is already covered by existing tests, such as `FunctionITCase`.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] lirui-apache opened a new pull request #12856: [FLINK-18529][hive] Query Hive table and filter by timestamp partitio…

2020-07-08 Thread GitBox


lirui-apache opened a new pull request #12856:
URL: https://github.com/apache/flink/pull/12856


   …n can fail
   
   
   
   ## What is the purpose of the change
   
   Fix the issue that querying a Hive table and filter by timestamp partition 
column can fail.
   
   
   ## Brief change log
   
 - Don't try to generate date/timestamp literals in `ExpressionExtractor`, because such filters cannot be pushed down anyway (a rough sketch of the idea follows below).
 - Add more test cases
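   
   For illustration only, a hedged sketch of the approach; the type names below are simplified stand-ins, not the actual `ExpressionExtractor` code:
   ```java
   import java.util.Optional;
   
   final class PartitionFilterSketch {
   	// Simplified stand-in for Flink's logical type roots (assumption).
   	enum TypeRoot { DATE, TIMESTAMP, INT, STRING }
   
   	// Render one literal of a pushed-down partition filter, or give up.
   	static Optional<String> renderLiteral(TypeRoot type, String text) {
   		if (type == TypeRoot.DATE || type == TypeRoot.TIMESTAMP) {
   			// Date/timestamp predicates cannot be pushed to the Hive
   			// metastore anyway, so skip them and let Flink evaluate
   			// the filter itself.
   			return Optional.empty();
   		}
   		return Optional.of(text);
   	}
   }
   ```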
   
   
   ## Verifying this change
   
   Existing and added tests
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? NA
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-18502) Add the page 'legacySourceSinks.zh.md' into the directory 'docs/dev/table'

2020-07-08 Thread Roc Marshal (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17153572#comment-17153572
 ] 

Roc Marshal edited comment on FLINK-18502 at 7/8/20, 1:27 PM:
--

Hi [~jark],

I tried to fix both problems in this JIRA (PR [12854|https://github.com/apache/flink/pull/12854]), but it seems that the [code contribution guide|https://flink.apache.org/contributing/contribute-code.html] doesn't allow that.

If it is necessary to submit the tasks separately, could you assign another Jira (FLINK-18502) to me?

Thank you.


was (Author: rocmarshal):
Hi,[~jark], 

I tried to fix both problems in the this JIRA(PR 
[12854|https://github.com/apache/flink/pull/12854]), but it seems that the 
[code contribution 
guide|https://flink.apache.org/contributing/contribute-code.html] doesn't allow 
that.

If it is necessary to submit tasks separately, could you assignee another 
Jira[FLINK-18502]  to me?



Thank you.

> Add the page 'legacySourceSinks.zh.md'  into the directory 'docs/dev/table' 
> 
>
> Key: FLINK-18502
> URL: https://issues.apache.org/jira/browse/FLINK-18502
> Project: Flink
>  Issue Type: Task
>  Components: Documentation
>Affects Versions: 1.10.1, 1.11.0
>Reporter: Roc Marshal
>Assignee: Roc Marshal
>Priority: Major
>  Labels: documentation, pull-request-available
>
> The directory '*flink/docs/dev/table*'  is missing  the page 
> '*legacySourceSinks.zh.md*'.
> We need to create the  page '*legacySourceSinks.zh.md*'  according to 
> '*flink/docs/dev/table/legacySourceSinks.md*'  and  add the page  into the 
> directory '*flink/docs/dev/table*' .



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #12855: [FLINK-18526][python][docs] Add configuration of Python UDF to use Ma…

2020-07-08 Thread GitBox


flinkbot edited a comment on pull request #12855:
URL: https://github.com/apache/flink/pull/12855#issuecomment-655500427


   
   ## CI report:
   
   * cbd3259908d23c419e97595704e59af2ce970cab Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4345)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12854: [FLINK-18502][docs] Add the page 'legacySourceSinks.zh.md' into the directory 'docs/dev/table', which contains [FLINK-18505][docs] Co

2020-07-08 Thread GitBox


flinkbot edited a comment on pull request #12854:
URL: https://github.com/apache/flink/pull/12854#issuecomment-655500315


   
   ## CI report:
   
   * a343769ca9bec4a73587c55daa0e265f8a3a6ce6 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4344)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12798: [FLINK-16087][docs-zh] Translate "Detecting Patterns" page of "Streaming Concepts" into Chinese

2020-07-08 Thread GitBox


flinkbot edited a comment on pull request #12798:
URL: https://github.com/apache/flink/pull/12798#issuecomment-652303680


   
   ## CI report:
   
   * ad5d5dcac846080e2e40255505dd35896a6c7a94 UNKNOWN
   * f1fb24d628f66f3f2929fe4e737a835e8391c0a8 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4343)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18502) Add the page 'legacySourceSinks.zh.md' into the directory 'docs/dev/table'

2020-07-08 Thread Roc Marshal (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17153605#comment-17153605
 ] 

Roc Marshal commented on FLINK-18502:
-

[~jark] 
OK. That couldn't be better!

> Add the page 'legacySourceSinks.zh.md'  into the directory 'docs/dev/table' 
> 
>
> Key: FLINK-18502
> URL: https://issues.apache.org/jira/browse/FLINK-18502
> Project: Flink
>  Issue Type: Task
>  Components: Documentation
>Affects Versions: 1.10.1, 1.11.0
>Reporter: Roc Marshal
>Assignee: Roc Marshal
>Priority: Major
>  Labels: documentation, pull-request-available
>
> The directory '*flink/docs/dev/table*'  is missing  the page 
> '*legacySourceSinks.zh.md*'.
> We need to create the  page '*legacySourceSinks.zh.md*'  according to 
> '*flink/docs/dev/table/legacySourceSinks.md*'  and  add the page  into the 
> directory '*flink/docs/dev/table*' .



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] twalthr commented on pull request #12853: [FLINK-18524][table-common] Fix type inference for Scala varargs

2020-07-08 Thread GitBox


twalthr commented on pull request #12853:
URL: https://github.com/apache/flink/pull/12853#issuecomment-655515610


   Thanks @aljoscha. I will merge this once the build is green.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18530) ParquetAvroWriters can not write data to hdfs

2020-07-08 Thread humengyu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

humengyu updated FLINK-18530:
-
Description: 
I read data from Kafka and write it to HDFS with StreamingFileSink:
 # in version 1.11.0, ParquetAvroWriters does not work, but it works well in version 1.10.1;
 # AvroWriters works well in 1.11.0.

{code:java}
public class TestParquetAvroSink {

  @Test
  public void testParquet() throws Exception {
EnvironmentSettings settings = 
EnvironmentSettings.newInstance().useBlinkPlanner()
.inStreamingMode().build();
StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);
env.enableCheckpointing(2L);

TableSchema tableSchema = TableSchema.builder().fields(
new String[]{"id", "name", "sex"},
new DataType[]{DataTypes.STRING(), DataTypes.STRING(), 
DataTypes.STRING()})
.build();

// build a kafka source
DataStream<Row> rowDataStream = ;

Schema schema = SchemaBuilder
.record("xxx")
.namespace("")
.fields()
.optionalString("id")
.optionalString("name")
.optionalString("sex")
.endRecord();

OutputFileConfig config = OutputFileConfig
.builder()
.withPartPrefix("prefix")
.withPartSuffix(".ext")
.build();

StreamingFileSink<GenericRecord> sink = StreamingFileSink
.forBulkFormat(
new Path("hdfs://host:port/xxx/xxx/xxx"),
ParquetAvroWriters.forGenericRecord(schema))
.withOutputFileConfig(config)
.withBucketAssigner(new DateTimeBucketAssigner<>("'pdate='yyyy-MM-dd"))
.build();

SingleOutputStreamOperator<GenericRecord> recordDateStream = rowDataStream
.map(new RecordMapFunction());

recordDateStream.print();
recordDateStream.addSink(sink);

env.execute("test");

  }


  @Test
  public void testAvro() throws Exception {
EnvironmentSettings settings = 
EnvironmentSettings.newInstance().useBlinkPlanner()
.inStreamingMode().build();
StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);
env.enableCheckpointing(2L);

TableSchema tableSchema = TableSchema.builder().fields(
new String[]{"id", "name", "sex"},
new DataType[]{DataTypes.STRING(), DataTypes.STRING(), 
DataTypes.STRING()})
.build();

// build a kafka source
DataStream<Row> rowDataStream = ;

Schema schema = SchemaBuilder
.record("xxx")
.namespace("")
.fields()
.optionalString("id")
.optionalString("name")
.optionalString("sex")
.endRecord();

OutputFileConfig config = OutputFileConfig
.builder()
.withPartPrefix("prefix")
.withPartSuffix(".ext")
.build();

StreamingFileSink<GenericRecord> sink = StreamingFileSink
.forBulkFormat(
new Path("hdfs://host:port/xxx/xxx/xxx"),
AvroWriters.forGenericRecord(schema))
.withOutputFileConfig(config)
.withBucketAssigner(new DateTimeBucketAssigner<>("'pdate='yyyy-MM-dd"))
.build();

SingleOutputStreamOperator<GenericRecord> recordDateStream = rowDataStream
.map(new RecordMapFunction());

recordDateStream.print();
recordDateStream.addSink(sink);

env.execute("test");

  }

  public static class RecordMapFunction implements MapFunction<Row, GenericRecord> {

private transient Schema schema;

@Override
public GenericRecord map(Row row) throws Exception {
  if (schema == null) {
schema = SchemaBuilder
.record("xxx")
.namespace("xxx")
.fields()
.optionalString("id")
.optionalString("name")
.optionalString("sex")
.endRecord();
  }
  Record record = new Record(schema);
  record.put("id", row.getField(0));
  record.put("name", row.getField(1));
  record.put("sex", row.getField(2));
  return record;
}
  }
} 
{code}
 

  was:
I read data from kafka and write to hdfs by StreamingFileSink:
 # in version 1.11.0, ParquetAvroWriters does not work, but it works well in 
version 1.10.1;
 #  AvroWriters works well in 1.11.0.

{code:java}
 {code}
 


> ParquetAvroWriters can not write data to hdfs
> -
>
> Key: FLINK-18530
> URL: https://issues.apache.org/jira/browse/FLINK-18530
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.11.0
>Reporter: humengyu
>Priority: Major
>
> I read data from kafka and write to hdfs by StreamingFileSink:
>  # in version 1.11.0, ParquetAvroWriters does not work, but it works well in 
> version 

[jira] [Updated] (FLINK-18530) ParquetAvroWriters can not write data to hdfs

2020-07-08 Thread humengyu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

humengyu updated FLINK-18530:
-
Description: 
I read data from Kafka and write it to HDFS with StreamingFileSink:
 # in version 1.11.0, ParquetAvroWriters does not work, but it works well in version 1.10.1;
 # AvroWriters works well in 1.11.0.

{code:java}
 {code}
 

  was:
I read data from kafka and write to hdfs by StreamingFileSink:
 # in version 1.11.0, ParquetAvroWriters does not work, but it works well in 
version 1.10.1;
 #  AvroWriters works well in 1.11.0.

{code:java}
public class TestParquetAvroSink {

  @Test
  public void testParquet() throws Exception {
    EnvironmentSettings settings = EnvironmentSettings.newInstance().useBlinkPlanner()
        .inStreamingMode().build();
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);
    env.enableCheckpointing(2L);

    TableSchema tableSchema = TableSchema.builder().fields(
        new String[]{"id", "name", "sex"},
        new DataType[]{DataTypes.STRING(), DataTypes.STRING(), DataTypes.STRING()})
        .build();

    // build a kafka source
    DataStream<Row> rowDataStream = ;

    Schema schema = SchemaBuilder
        .record("xxx")
        .namespace("")
        .fields()
        .optionalString("id")
        .optionalString("name")
        .optionalString("sex")
        .endRecord();

    OutputFileConfig config = OutputFileConfig
        .builder()
        .withPartPrefix("prefix")
        .withPartSuffix(".ext")
        .build();

    StreamingFileSink<GenericRecord> sink = StreamingFileSink
        .forBulkFormat(
            new Path("hdfs://host:port/xxx/xxx/xxx"),
            ParquetAvroWriters.forGenericRecord(schema))
        .withOutputFileConfig(config)
        .withBucketAssigner(new DateTimeBucketAssigner<>("'pdate='yyyy-MM-dd"))
        .build();

    SingleOutputStreamOperator<GenericRecord> recordDateStream = rowDataStream
        .map(new RecordMapFunction());

    recordDateStream.print();
    recordDateStream.addSink(sink);

    env.execute("test");
  }

  @Test
  public void testAvro() throws Exception {
    EnvironmentSettings settings = EnvironmentSettings.newInstance().useBlinkPlanner()
        .inStreamingMode().build();
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);
    env.enableCheckpointing(2L);

    TableSchema tableSchema = TableSchema.builder().fields(
        new String[]{"id", "name", "sex"},
        new DataType[]{DataTypes.STRING(), DataTypes.STRING(), DataTypes.STRING()})
        .build();

    // build a kafka source
    DataStream<Row> rowDataStream = ;

    Schema schema = SchemaBuilder
        .record("xxx")
        .namespace("")
        .fields()
        .optionalString("id")
        .optionalString("name")
        .optionalString("sex")
        .endRecord();

    OutputFileConfig config = OutputFileConfig
        .builder()
        .withPartPrefix("prefix")
        .withPartSuffix(".ext")
        .build();

    StreamingFileSink<GenericRecord> sink = StreamingFileSink
        .forBulkFormat(
            new Path("hdfs://host:port/xxx/xxx/xxx"),
            AvroWriters.forGenericRecord(schema))
        .withOutputFileConfig(config)
        .withBucketAssigner(new DateTimeBucketAssigner<>("'pdate='yyyy-MM-dd"))
        .build();

    SingleOutputStreamOperator<GenericRecord> recordDateStream = rowDataStream
        .map(new RecordMapFunction());

    recordDateStream.print();
    recordDateStream.addSink(sink);

    env.execute("test");
  }

  public static class RecordMapFunction implements MapFunction<Row, GenericRecord> {

    private transient Schema schema;

    @Override
    public GenericRecord map(Row row) throws Exception {
      if (schema == null) {
        schema = SchemaBuilder
            .record("xxx")
            .namespace("xxx")
            .fields()
            .optionalString("id")
            .optionalString("name")
            .optionalString("sex")
            .endRecord();
      }
      Record record = new Record(schema);
      record.put("id", row.getField(0));
      record.put("name", row.getField(1));
      record.put("sex", row.getField(2));
      return record;
    }
  }
}
{code}


> ParquetAvroWriters can not write data to hdfs
> -
>
> Key: FLINK-18530
> URL: https://issues.apache.org/jira/browse/FLINK-18530
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.11.0
>Reporter: humengyu
>Priority: Major
>
> I read data from kafka and write to hdfs by StreamingFileSink:
>  # in version 1.11.0, ParquetAvroWriters does not work, but it works well in 
> version 1.10.1;
>  #  AvroWriters works well in 

[jira] [Updated] (FLINK-18529) Query Hive table and filter by timestamp partition can fail

2020-07-08 Thread Rui Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated FLINK-18529:
---
Description: 
The following example
{code}
create table foo (x int) partitioned by (ts timestamp);
select x from foo where timestamp '2020-07-08 13:08:14' = ts;
{code}
fails with
{noformat}
CatalogException: HiveCatalog currently only supports timestamp of precision 9
{noformat}

> Query Hive table and filter by timestamp partition can fail
> ---
>
> Key: FLINK-18529
> URL: https://issues.apache.org/jira/browse/FLINK-18529
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: Rui Li
>Priority: Major
>
> The following example
> {code}
> create table foo (x int) partitioned by (ts timestamp);
> select x from foo where timestamp '2020-07-08 13:08:14' = ts;
> {code}
> fails with
> {noformat}
> CatalogException: HiveCatalog currently only supports timestamp of precision 9
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18530) ParquetAvroWriters can not write data to hdfs

2020-07-08 Thread humengyu (Jira)
humengyu created FLINK-18530:


 Summary: ParquetAvroWriters can not write data to hdfs
 Key: FLINK-18530
 URL: https://issues.apache.org/jira/browse/FLINK-18530
 Project: Flink
  Issue Type: Bug
  Components: Connectors / FileSystem
Affects Versions: 1.11.0
Reporter: humengyu


I read data from Kafka and write it to HDFS with StreamingFileSink:
 # in version 1.11.0, ParquetAvroWriters does not work, but it works well in version 1.10.1;
 # AvroWriters works well in 1.11.0.

{code:java}
public class TestParquetAvroSink {

  @Test
  public void testParquet() throws Exception {
    EnvironmentSettings settings = EnvironmentSettings.newInstance().useBlinkPlanner()
        .inStreamingMode().build();
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);
    env.enableCheckpointing(2L);

    TableSchema tableSchema = TableSchema.builder().fields(
        new String[]{"id", "name", "sex"},
        new DataType[]{DataTypes.STRING(), DataTypes.STRING(), DataTypes.STRING()})
        .build();

    // build a kafka source
    DataStream<Row> rowDataStream = ;

    Schema schema = SchemaBuilder
        .record("xxx")
        .namespace("")
        .fields()
        .optionalString("id")
        .optionalString("name")
        .optionalString("sex")
        .endRecord();

    OutputFileConfig config = OutputFileConfig
        .builder()
        .withPartPrefix("prefix")
        .withPartSuffix(".ext")
        .build();

    StreamingFileSink<GenericRecord> sink = StreamingFileSink
        .forBulkFormat(
            new Path("hdfs://host:port/xxx/xxx/xxx"),
            ParquetAvroWriters.forGenericRecord(schema))
        .withOutputFileConfig(config)
        .withBucketAssigner(new DateTimeBucketAssigner<>("'pdate='yyyy-MM-dd"))
        .build();

    SingleOutputStreamOperator<GenericRecord> recordDateStream = rowDataStream
        .map(new RecordMapFunction());

    recordDateStream.print();
    recordDateStream.addSink(sink);

    env.execute("test");
  }

  @Test
  public void testAvro() throws Exception {
    EnvironmentSettings settings = EnvironmentSettings.newInstance().useBlinkPlanner()
        .inStreamingMode().build();
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);
    env.enableCheckpointing(2L);

    TableSchema tableSchema = TableSchema.builder().fields(
        new String[]{"id", "name", "sex"},
        new DataType[]{DataTypes.STRING(), DataTypes.STRING(), DataTypes.STRING()})
        .build();

    // build a kafka source
    DataStream<Row> rowDataStream = ;

    Schema schema = SchemaBuilder
        .record("xxx")
        .namespace("")
        .fields()
        .optionalString("id")
        .optionalString("name")
        .optionalString("sex")
        .endRecord();

    OutputFileConfig config = OutputFileConfig
        .builder()
        .withPartPrefix("prefix")
        .withPartSuffix(".ext")
        .build();

    StreamingFileSink<GenericRecord> sink = StreamingFileSink
        .forBulkFormat(
            new Path("hdfs://host:port/xxx/xxx/xxx"),
            AvroWriters.forGenericRecord(schema))
        .withOutputFileConfig(config)
        .withBucketAssigner(new DateTimeBucketAssigner<>("'pdate='yyyy-MM-dd"))
        .build();

    SingleOutputStreamOperator<GenericRecord> recordDateStream = rowDataStream
        .map(new RecordMapFunction());

    recordDateStream.print();
    recordDateStream.addSink(sink);

    env.execute("test");
  }

  public static class RecordMapFunction implements MapFunction<Row, GenericRecord> {

    private transient Schema schema;

    @Override
    public GenericRecord map(Row row) throws Exception {
      if (schema == null) {
        schema = SchemaBuilder
            .record("xxx")
            .namespace("xxx")
            .fields()
            .optionalString("id")
            .optionalString("name")
            .optionalString("sex")
            .endRecord();
      }
      Record record = new Record(schema);
      record.put("id", row.getField(0));
      record.put("name", row.getField(1));
      record.put("sex", row.getField(2));
      return record;
    }
  }
}
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-18502) Add the page 'legacySourceSinks.zh.md' into the directory 'docs/dev/table'

2020-07-08 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17153589#comment-17153589
 ] 

Jark Wu edited comment on FLINK-18502 at 7/8/20, 1:11 PM:
--

It's fine to fix them in one PR. 


was (Author: jark):
It's fine to fix them in the single PR. 

> Add the page 'legacySourceSinks.zh.md'  into the directory 'docs/dev/table' 
> 
>
> Key: FLINK-18502
> URL: https://issues.apache.org/jira/browse/FLINK-18502
> Project: Flink
>  Issue Type: Task
>  Components: Documentation
>Affects Versions: 1.10.1, 1.11.0
>Reporter: Roc Marshal
>Assignee: Roc Marshal
>Priority: Major
>  Labels: documentation, pull-request-available
>
> The directory '*flink/docs/dev/table*'  is missing  the page 
> '*legacySourceSinks.zh.md*'.
> We need to create the  page '*legacySourceSinks.zh.md*'  according to 
> '*flink/docs/dev/table/legacySourceSinks.md*'  and  add the page  into the 
> directory '*flink/docs/dev/table*' .



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18502) Add the page 'legacySourceSinks.zh.md' into the directory 'docs/dev/table'

2020-07-08 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17153589#comment-17153589
 ] 

Jark Wu commented on FLINK-18502:
-

It's fine to fix them in the single PR. 

> Add the page 'legacySourceSinks.zh.md'  into the directory 'docs/dev/table' 
> 
>
> Key: FLINK-18502
> URL: https://issues.apache.org/jira/browse/FLINK-18502
> Project: Flink
>  Issue Type: Task
>  Components: Documentation
>Affects Versions: 1.10.1, 1.11.0
>Reporter: Roc Marshal
>Assignee: Roc Marshal
>Priority: Major
>  Labels: documentation, pull-request-available
>
> The directory '*flink/docs/dev/table*'  is missing  the page 
> '*legacySourceSinks.zh.md*'.
> We need to create the  page '*legacySourceSinks.zh.md*'  according to 
> '*flink/docs/dev/table/legacySourceSinks.md*'  and  add the page  into the 
> directory '*flink/docs/dev/table*' .



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] RocMarshal commented on pull request #12798: [FLINK-16087][docs-zh] Translate "Detecting Patterns" page of "Streaming Concepts" into Chinese

2020-07-08 Thread GitBox


RocMarshal commented on pull request #12798:
URL: https://github.com/apache/flink/pull/12798#issuecomment-655508679


   @klion26 
   
   Thank you so much for your efforts in reviewing this PR; the review workload was heavy!
   Your feedback is very significant for the translation of this page, and I made some changes to the page based on your suggestions.
   
   Thank you again for your help!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18529) Query Hive table and filter by timestamp partition can fail

2020-07-08 Thread Rui Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated FLINK-18529:
---
Summary: Query Hive table and filter by timestamp partition can fail  (was: 
Query Hive table and filter by timestamp partition doesn't work)

> Query Hive table and filter by timestamp partition can fail
> ---
>
> Key: FLINK-18529
> URL: https://issues.apache.org/jira/browse/FLINK-18529
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: Rui Li
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink-playgrounds] alpinegizmo commented on pull request #14: [hotfix][walkthrough] Update Table API Walkthrough

2020-07-08 Thread GitBox


alpinegizmo commented on pull request #14:
URL: https://github.com/apache/flink-playgrounds/pull/14#issuecomment-655501681


   @sjwiesman I didn't notice this PR (I searched for relevant JIRA tickets, but didn't think to look here before doing the update). This PR wouldn't work anyway; things changed in 1.11.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



