[GitHub] [flink] leozhangsr commented on a diff in pull request #20234: [FLINK-28475] [Connector/kafka] Stopping offset can be 0

2022-07-15 Thread GitBox


leozhangsr commented on code in PR #20234:
URL: https://github.com/apache/flink/pull/20234#discussion_r922636336


##
flink-connectors/flink-connector-kafka/src/test/java/org/apache/flink/connector/kafka/source/split/KafkaPartitionSplitSerializerTest.java:
##
@@ -0,0 +1,54 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.kafka.source.split;
+
+import org.apache.kafka.common.TopicPartition;
+import org.assertj.core.util.Lists;
+import org.junit.jupiter.api.Test;
+
+import java.io.IOException;
+import java.util.List;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+/** Tests for {@link KafkaPartitionSplitSerializer}. */
+public class KafkaPartitionSplitSerializerTest {

Review Comment:
   Yes, that's what we want: if the stopping offset of a split is set to 0, no message from that split will be consumed.
   I checked this code again, and here is an explanation of my change and test case.
   As we know, a split is defined by the driver, then serialized and sent to the task manager, and then handled by KafkaPartitionSplitReader. The split reader consumes messages and stops at the stopping offset if one is set.
   
   To achieve this, the following key steps have to be validated:
   1. The split is correctly serialized and sent to the split reader.
   2. The split reader parses the split correctly.
   3. The split reader consumes messages and stops at the stopping offset.
   
   Step 1 is validated by the test case I added.
   Steps 2 and 3 can be validated by KafkaPartitionSplitReaderTest.testHandleSplitChangesAndFetch / assignSplitsAndFetchUntilFinish, which makes sure the split reader stops at the stopping offset. That test sets the stopping offset to 10 (NUM_RECORDS_PER_PARTITION). Although the stopping offset there is not 0, the reader still works correctly as long as the split is parsed correctly: KafkaPartitionSplitReader.parseStoppingOffsets requires the stopping offset to be >= 0, LATEST_OFFSET, or COMMITTED_OFFSET, and KafkaPartitionSplit.getStoppingOffset accepts the same conditions after my modification.
   Steps 2 and 3 are also validated by KafkaPartitionSplitReaderTest.testAssignEmptySplit for the empty-split situation. Generally, when the stopping offset is 0 the starting offset is likely to be 0 too, which means it is an empty split that should consume nothing and stop. In that test case, the empty split's starting offset and stopping offset are both LATEST_OFFSET.
   
   I only added a test case for step 1, since it was never tested before. I think the existing tests already cover the situation (if the stopping offset of a split is set to 0, no message from that split will be consumed). Do you agree? Should I add a test case just for stopping offset = 0, or would that be a little redundant?
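   For illustration only, here is a minimal sketch of such a serializer round-trip check with a stopping offset of 0. It is not the exact test added in this PR; the KafkaPartitionSplit constructor and the serializer's serialize/deserialize/getVersion methods are assumed from the connector's split serializer interface.

       package org.apache.flink.connector.kafka.source.split;

       import org.apache.kafka.common.TopicPartition;
       import org.junit.jupiter.api.Test;

       import static org.assertj.core.api.Assertions.assertThat;

       /** Sketch: a stopping offset of 0 must survive the serializer round trip. */
       public class KafkaPartitionSplitZeroStoppingOffsetSketch {

           @Test
           public void testStoppingOffsetZeroSurvivesSerialization() throws Exception {
               KafkaPartitionSplitSerializer serializer = new KafkaPartitionSplitSerializer();
               // starting offset 0, stopping offset 0 -> an "empty" split that must consume nothing
               KafkaPartitionSplit split =
                       new KafkaPartitionSplit(new TopicPartition("topic", 0), 0L, 0L);

               KafkaPartitionSplit restored =
                       serializer.deserialize(serializer.getVersion(), serializer.serialize(split));

               // the stopping offset of 0 must be preserved instead of being dropped
               assertThat(restored.getStoppingOffset()).hasValue(0L);
           }
       }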



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] godfreyhe commented on a diff in pull request #20243: [FLINK-28491][table-planner] Introduce APPROX_COUNT_DISTINCT aggregate function for batch sql

2022-07-15 Thread GitBox


godfreyhe commented on code in PR #20243:
URL: https://github.com/apache/flink/pull/20243#discussion_r922623564


##
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/aggfunctions/BatchApproxCountDistinctAggFunctions.java:
##
@@ -0,0 +1,334 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.functions.aggfunctions;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.table.api.DataTypes;
+import org.apache.flink.table.data.DecimalData;
+import org.apache.flink.table.data.TimestampData;
+import org.apache.flink.table.data.binary.BinaryStringData;
+import 
org.apache.flink.table.runtime.functions.aggregate.BuiltInAggregateFunction;
+import 
org.apache.flink.table.runtime.functions.aggregate.hyperloglog.HllBuffer;
+import 
org.apache.flink.table.runtime.functions.aggregate.hyperloglog.HyperLogLogPlusPlus;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.logical.BigIntType;
+import org.apache.flink.table.types.logical.DateType;
+import org.apache.flink.table.types.logical.DecimalType;
+import org.apache.flink.table.types.logical.DoubleType;
+import org.apache.flink.table.types.logical.FloatType;
+import org.apache.flink.table.types.logical.IntType;
+import org.apache.flink.table.types.logical.LocalZonedTimestampType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.SmallIntType;
+import org.apache.flink.table.types.logical.TimeType;
+import org.apache.flink.table.types.logical.TimestampType;
+import org.apache.flink.table.types.logical.TinyIntType;
+import org.apache.flink.table.types.logical.VarCharType;
+
+import java.util.Collections;
+import java.util.List;
+
+import static 
org.apache.flink.table.runtime.functions.aggregate.hyperloglog.XxHash64Function.DEFAULT_SEED;
+import static 
org.apache.flink.table.runtime.functions.aggregate.hyperloglog.XxHash64Function.hashInt;
+import static 
org.apache.flink.table.runtime.functions.aggregate.hyperloglog.XxHash64Function.hashLong;
+import static 
org.apache.flink.table.runtime.functions.aggregate.hyperloglog.XxHash64Function.hashUnsafeBytes;
+import static 
org.apache.flink.table.types.utils.DataTypeUtils.toInternalDataType;
+
+/** Built-in APPROX_COUNT_DISTINCT aggregate function for Batch sql. */
+public class BatchApproxCountDistinctAggFunctions {
+
+    /** Base function for APPROX_COUNT_DISTINCT aggregate. */
+    public abstract static class ApproxCountDistinctAggFunction<T>
+            extends BuiltInAggregateFunction<Long, HllBuffer> {
+
+        private static final Double RELATIVE_SD = 0.01;
+        private transient HyperLogLogPlusPlus hll;
+
+        private final transient DataType valueDataType;
+
+        public ApproxCountDistinctAggFunction(LogicalType valueType) {
+            this.valueDataType = toInternalDataType(valueType);
+        }
+
+        @Override
+        public HllBuffer createAccumulator() {
+            hll = new HyperLogLogPlusPlus(RELATIVE_SD);
+            HllBuffer buffer = new HllBuffer();
+            buffer.array = new long[hll.getNumWords()];
+            resetAccumulator(buffer);
+            return buffer;
+        }
+
+        public void accumulate(HllBuffer buffer, Object input) throws Exception {
+            if (input != null) {
+                hll.updateByHashcode(buffer, getHashcode((T) input));
+            }
+        }
+
+        abstract long getHashcode(T t);

Review Comment:
   For performance: this approach avoids going through a large switch/case on the value type for each record.
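   For illustration, a simplified sketch of how a concrete subclass avoids that per-record type switch. The exact subclasses in this PR may differ; hashLong and DEFAULT_SEED are the XxHash64Function helpers imported above, and their use here is an assumption.

       /** Sketch: the BIGINT variant binds its hash function once at construction time. */
       public static class LongApproxCountDistinctAggFunction
               extends ApproxCountDistinctAggFunction<Long> {

           public LongApproxCountDistinctAggFunction() {
               super(new BigIntType());
           }

           @Override
           long getHashcode(Long value) {
               // The Long-specific hash is chosen when the planner instantiates this
               // function for a BIGINT column, so accumulate() never switches on the type.
               return hashLong(value, DEFAULT_SEED);
           }
       }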



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] godfreyhe commented on a diff in pull request #20243: [FLINK-28491][table-planner] Introduce APPROX_COUNT_DISTINCT aggregate function for batch sql

2022-07-15 Thread GitBox


godfreyhe commented on code in PR #20243:
URL: https://github.com/apache/flink/pull/20243#discussion_r922623503


##
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/aggfunctions/BatchApproxCountDistinctAggFunctions.java:
##
@@ -0,0 +1,334 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.functions.aggfunctions;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.table.api.DataTypes;
+import org.apache.flink.table.data.DecimalData;
+import org.apache.flink.table.data.TimestampData;
+import org.apache.flink.table.data.binary.BinaryStringData;
+import 
org.apache.flink.table.runtime.functions.aggregate.BuiltInAggregateFunction;
+import 
org.apache.flink.table.runtime.functions.aggregate.hyperloglog.HllBuffer;
+import 
org.apache.flink.table.runtime.functions.aggregate.hyperloglog.HyperLogLogPlusPlus;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.logical.BigIntType;
+import org.apache.flink.table.types.logical.DateType;
+import org.apache.flink.table.types.logical.DecimalType;
+import org.apache.flink.table.types.logical.DoubleType;
+import org.apache.flink.table.types.logical.FloatType;
+import org.apache.flink.table.types.logical.IntType;
+import org.apache.flink.table.types.logical.LocalZonedTimestampType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.SmallIntType;
+import org.apache.flink.table.types.logical.TimeType;
+import org.apache.flink.table.types.logical.TimestampType;
+import org.apache.flink.table.types.logical.TinyIntType;
+import org.apache.flink.table.types.logical.VarCharType;
+
+import java.util.Collections;
+import java.util.List;
+
+import static 
org.apache.flink.table.runtime.functions.aggregate.hyperloglog.XxHash64Function.DEFAULT_SEED;
+import static 
org.apache.flink.table.runtime.functions.aggregate.hyperloglog.XxHash64Function.hashInt;
+import static 
org.apache.flink.table.runtime.functions.aggregate.hyperloglog.XxHash64Function.hashLong;
+import static 
org.apache.flink.table.runtime.functions.aggregate.hyperloglog.XxHash64Function.hashUnsafeBytes;
+import static 
org.apache.flink.table.types.utils.DataTypeUtils.toInternalDataType;
+
+/** Built-in APPROX_COUNT_DISTINCT aggregate function for Batch sql. */
+public class BatchApproxCountDistinctAggFunctions {

Review Comment:
   I have added some test cases in SortAggITCase.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] godfreyhe commented on a diff in pull request #20243: [FLINK-28491][table-planner] Introduce APPROX_COUNT_DISTINCT aggregate function for batch sql

2022-07-15 Thread GitBox


godfreyhe commented on code in PR #20243:
URL: https://github.com/apache/flink/pull/20243#discussion_r922623326


##
flink-table/flink-table-runtime/src/test/java/org/apache/flink/table/runtime/functions/aggregate/hyperloglog/HyperLogLogPlusPlusTest.java:
##
@@ -0,0 +1,171 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.functions.aggregate.hyperloglog;
+
+import org.junit.Test;
+
+import java.util.HashSet;
+import java.util.Random;
+import java.util.Set;
+import java.util.function.Function;
+
+import static org.apache.flink.core.testutils.FlinkAssertions.anyCauseMatches;
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Assertions.assertThatThrownBy;
+
+/** The test of HyperLogLogPlusPlus is inspired from Apache Spark. */
+public class HyperLogLogPlusPlusTest {
+
+@Test
+public void testInvalidRelativeSD() {
+assertThatThrownBy(() -> new HyperLogLogPlusPlus(0.4))

Review Comment:
   Both methods are widely used in Flink.
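   For context, the two methods presumably being compared are assertThatThrownBy and FlinkAssertions.anyCauseMatches, both imported in this test file; a hedged sketch of the two styles (the expected exception type here is an assumption):

       @Test
       public void testInvalidRelativeSdSketch() {
           // Plain AssertJ: assert on the thrown exception type directly.
           assertThatThrownBy(() -> new HyperLogLogPlusPlus(0.4))
                   .isInstanceOf(IllegalArgumentException.class);

           // Flink test utility: match any exception in the cause chain.
           assertThatThrownBy(() -> new HyperLogLogPlusPlus(0.4))
                   .satisfies(anyCauseMatches(IllegalArgumentException.class));
       }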



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] godfreyhe commented on a diff in pull request #20243: [FLINK-28491][table-planner] Introduce APPROX_COUNT_DISTINCT aggregate function for batch sql

2022-07-15 Thread GitBox


godfreyhe commented on code in PR #20243:
URL: https://github.com/apache/flink/pull/20243#discussion_r922623138


##
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/aggfunctions/BatchApproxCountDistinctAggFunctions.java:
##
@@ -0,0 +1,334 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.functions.aggfunctions;
+
+import org.apache.flink.core.memory.MemorySegment;
+import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.table.api.DataTypes;
+import org.apache.flink.table.data.DecimalData;
+import org.apache.flink.table.data.TimestampData;
+import org.apache.flink.table.data.binary.BinaryStringData;
+import 
org.apache.flink.table.runtime.functions.aggregate.BuiltInAggregateFunction;
+import 
org.apache.flink.table.runtime.functions.aggregate.hyperloglog.HllBuffer;
+import 
org.apache.flink.table.runtime.functions.aggregate.hyperloglog.HyperLogLogPlusPlus;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.logical.BigIntType;
+import org.apache.flink.table.types.logical.DateType;
+import org.apache.flink.table.types.logical.DecimalType;
+import org.apache.flink.table.types.logical.DoubleType;
+import org.apache.flink.table.types.logical.FloatType;
+import org.apache.flink.table.types.logical.IntType;
+import org.apache.flink.table.types.logical.LocalZonedTimestampType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.SmallIntType;
+import org.apache.flink.table.types.logical.TimeType;
+import org.apache.flink.table.types.logical.TimestampType;
+import org.apache.flink.table.types.logical.TinyIntType;
+import org.apache.flink.table.types.logical.VarCharType;
+
+import java.util.Collections;
+import java.util.List;
+
+import static 
org.apache.flink.table.runtime.functions.aggregate.hyperloglog.XxHash64Function.DEFAULT_SEED;
+import static 
org.apache.flink.table.runtime.functions.aggregate.hyperloglog.XxHash64Function.hashInt;
+import static 
org.apache.flink.table.runtime.functions.aggregate.hyperloglog.XxHash64Function.hashLong;
+import static 
org.apache.flink.table.runtime.functions.aggregate.hyperloglog.XxHash64Function.hashUnsafeBytes;
+import static 
org.apache.flink.table.types.utils.DataTypeUtils.toInternalDataType;
+
+/** Built-in APPROX_COUNT_DISTINCT aggregate function for Batch sql. */
+public class BatchApproxCountDistinctAggFunctions {

Review Comment:
   good catch



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] maosuhan commented on a diff in pull request #14376: [FLINK-18202][PB format] New Format of protobuf

2022-07-15 Thread GitBox


maosuhan commented on code in PR #14376:
URL: https://github.com/apache/flink/pull/14376#discussion_r922617929


##
flink-formats/flink-sql-protobuf/pom.xml:
##
@@ -0,0 +1,86 @@
+
+
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+        xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
+
+    <modelVersion>4.0.0</modelVersion>
+
+    <parent>
+        <groupId>org.apache.flink</groupId>
+        <artifactId>flink-formats</artifactId>
+        <version>1.16-SNAPSHOT</version>
+    </parent>
+
+    <artifactId>flink-sql-protobuf</artifactId>
+    <name>Flink : Formats : SQL Protobuf</name>
+
+    <packaging>jar</packaging>
+
+    <dependencies>
+        <dependency>
+            <groupId>org.apache.flink</groupId>
+            <artifactId>flink-parquet</artifactId>

Review Comment:
   I have tested the bundled jar in a standalone project; the code can be checked out from https://github.com/maosuhan/flink-sql-protobuf-test/tree/main. I successfully tested it on Flink 1.14 but not on 1.15, because I cannot run a simple demo on 1.15 no matter whether I use protobuf or json. Do you think it is a problem?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-27878) [FLIP-232] Add Retry Support For Async I/O In DataStream API

2022-07-15 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao closed FLINK-27878.
---
Resolution: Fixed

> [FLIP-232] Add Retry Support For Async I/O In DataStream API
> 
>
> Key: FLINK-27878
> URL: https://issues.apache.org/jira/browse/FLINK-27878
> Project: Flink
>  Issue Type: New Feature
>  Components: API / DataStream
>Reporter: lincoln lee
>Assignee: lincoln lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> FLIP-232: Add Retry Support For Async I/O In DataStream API
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=211883963



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-27878) [FLIP-232] Add Retry Support For Async I/O In DataStream API

2022-07-15 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17567427#comment-17567427
 ] 

Yun Gao commented on FLINK-27878:
-

Merged on master via  e85cf8c4cdf417b47f8d53bf3bb202f79e92b205.

> [FLIP-232] Add Retry Support For Async I/O In DataStream API
> 
>
> Key: FLINK-27878
> URL: https://issues.apache.org/jira/browse/FLINK-27878
> Project: Flink
>  Issue Type: New Feature
>  Components: API / DataStream
>Reporter: lincoln lee
>Assignee: lincoln lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.16.0
>
>
> FLIP-232: Add Retry Support For Async I/O In DataStream API
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=211883963



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] gaoyunhaii closed pull request #19983: [FLINK-27878][datastream] Add Retry Support For Async I/O In DataStream API

2022-07-15 Thread GitBox


gaoyunhaii closed pull request #19983: [FLINK-27878][datastream] Add Retry 
Support For Async I/O In DataStream API
URL: https://github.com/apache/flink/pull/19983


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] sunshineJK commented on pull request #20127: [FLINK-26270] Flink SQL write data to kafka by CSV format , whether d…

2022-07-15 Thread GitBox


sunshineJK commented on PR #20127:
URL: https://github.com/apache/flink/pull/20127#issuecomment-1186037818

   Thank you, I'm going to modify it a little bit.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-27285) CassandraConnectorITCase failed on azure due to NoHostAvailableException

2022-07-15 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-27285:
---
Labels: stale-major test-stability  (was: test-stability)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 60 days. I have gone ahead and added a "stale-major" label to the issue. If this 
ticket is a Major, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> CassandraConnectorITCase failed on azure due to NoHostAvailableException
> 
>
> Key: FLINK-27285
> URL: https://issues.apache.org/jira/browse/FLINK-27285
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Cassandra
>Affects Versions: 1.16.0
>Reporter: Yun Gao
>Priority: Major
>  Labels: stale-major, test-stability
>
> {code:java}
> 2022-04-17T06:24:40.1216092Z Apr 17 06:24:40 [ERROR] 
> org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase  
> Time elapsed: 29.81 s  <<< ERROR!
> 2022-04-17T06:24:40.1218517Z Apr 17 06:24:40 
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: /172.17.0.1:53053 
> (com.datastax.driver.core.exceptions.OperationTimedOutException: 
> [/172.17.0.1] Timed out waiting for server response))
> 2022-04-17T06:24:40.1220821Z Apr 17 06:24:40  at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
> 2022-04-17T06:24:40.1222816Z Apr 17 06:24:40  at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:37)
> 2022-04-17T06:24:40.1224696Z Apr 17 06:24:40  at 
> com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
> 2022-04-17T06:24:40.1226624Z Apr 17 06:24:40  at 
> com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
> 2022-04-17T06:24:40.1228346Z Apr 17 06:24:40  at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:63)
> 2022-04-17T06:24:40.1229839Z Apr 17 06:24:40  at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:39)
> 2022-04-17T06:24:40.1231736Z Apr 17 06:24:40  at 
> org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase.startAndInitializeCassandra(CassandraConnectorITCase.java:385)
> 2022-04-17T06:24:40.1233614Z Apr 17 06:24:40  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-04-17T06:24:40.1234992Z Apr 17 06:24:40  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-04-17T06:24:40.1236194Z Apr 17 06:24:40  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-04-17T06:24:40.1237598Z Apr 17 06:24:40  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-04-17T06:24:40.1238768Z Apr 17 06:24:40  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> 2022-04-17T06:24:40.1240056Z Apr 17 06:24:40  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2022-04-17T06:24:40.1242109Z Apr 17 06:24:40  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> 2022-04-17T06:24:40.1243493Z Apr 17 06:24:40  at 
> org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
> 2022-04-17T06:24:40.1244903Z Apr 17 06:24:40  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
> 2022-04-17T06:24:40.1246352Z Apr 17 06:24:40  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2022-04-17T06:24:40.1247809Z Apr 17 06:24:40  at 
> org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:30)
> 2022-04-17T06:24:40.1249193Z Apr 17 06:24:40  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> 2022-04-17T06:24:40.1250395Z Apr 17 06:24:40  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2022-04-17T06:24:40.1251468Z Apr 17 06:24:40  at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> 2022-04-17T06:24:40.1252601Z Apr 17 06:24:40  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> 2022-04-17T06:24:40.1253640Z Apr 17 06:24:40  at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> 2022-04-17T06:24:40.1254768Z Apr 17 06:24:40  at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:115)
> 2022-04-17T06:24:40.1256077Z Apr 17 06:24:40  at 
> 

[jira] [Updated] (FLINK-14527) Add integration tests for PostgreSQL and MySQL dialects in Flink JDBC module

2022-07-15 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-14527:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor  (was: 
auto-deprioritized-major stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Add integration tests for PostgreSQL and MySQL dialects in Flink JDBC module
> 
>
> Key: FLINK-14527
> URL: https://issues.apache.org/jira/browse/FLINK-14527
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Reporter: Jark Wu
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> Currently, we already support the PostgreSQL, MySQL, and Derby dialects in 
> flink-jdbc as sink and source. However, we only have integration tests for 
> Derby. 
> We should add integration tests for the PostgreSQL and MySQL dialects too. Maybe 
> we can use the JUnit {{Parameterized}} feature to avoid duplicated testing code.
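For illustration only, a minimal sketch of the {{Parameterized}} idea described above (the class name, dialect list, and test body are hypothetical, not taken from the ticket):

    import java.util.Arrays;
    import java.util.Collection;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Parameterized;

    /** Hypothetical skeleton: run the same JDBC sink/source test body once per dialect. */
    @RunWith(Parameterized.class)
    public class JdbcDialectITCaseSketch {

        @Parameterized.Parameters(name = "dialect = {0}")
        public static Collection<String> dialects() {
            return Arrays.asList("derby", "mysql", "postgresql");
        }

        @Parameterized.Parameter public String dialect;

        @Test
        public void testReadAndWrite() {
            // shared test body, parameterized by the dialect under test
        }
    }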



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-27814) Add an abstraction layer for connectors to read and write row data instead of key-values

2022-07-15 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-27814:
---
Labels: pull-request-available stale-assigned  (was: pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 30 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it.


> Add an abstraction layer for connectors to read and write row data instead of 
> key-values
> 
>
> Key: FLINK-27814
> URL: https://issues.apache.org/jira/browse/FLINK-27814
> Project: Flink
>  Issue Type: Improvement
>  Components: Table Store
>Affects Versions: table-store-0.2.0
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available, stale-assigned
>
> Currently {{FileStore}} exposes an interface for reading and writing 
> {{KeyValue}}. However, connectors may have different ways to convert a 
> {{RowData}} into a {{KeyValue}} under different {{WriteMode}}s. This results in 
> lots of {{if...else...}} branches and duplicated code.
> We need to add an abstraction layer for connectors to read and write row data 
> instead of key-values.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-13503) Add contract in `LookupableTableSource` to specify the behavior when lookupKeys contains null

2022-07-15 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13503:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
pull-request-available  (was: auto-deprioritized-major auto-unassigned 
pull-request-available stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Add contract in `LookupableTableSource` to specify the behavior when 
> lookupKeys contains null
> -
>
> Key: FLINK-13503
> URL: https://issues.apache.org/jira/browse/FLINK-13503
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / JDBC, Table SQL / API
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Jing Zhang
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I think we should add a contract in `LookupableTableSource` to specify the 
> expected behavior when the lookup keys contain a null value.
> For example, suppose one input record of the eval method is (null, 1), which 
> means to look up data whose (a, b) columns satisfy that key. There are at 
> least three possibilities here:
>   * ignore the null value, that is, in the above example, only look up `b = 1`
>   * look up with `is null`, that is, in the above example, look up `a is 
> null and b = 1`
>   * return empty records, that is, in the above example, look up `a = 
> null and b = 1`
> In fact, the current code shows different behaviors.
> For example, in the JDBC connector, the query template in `JdbcLookUpFunction` 
> looks like:
> SELECT c, d, e, f from T where a = ? and b = ?
> If (null, 1) is passed to the `eval` method, it will generate the following query:
> SELECT c, d, e, f from T where a = null and b = 1
> which always outputs empty records.
> BTW, is this behavior reasonable?
> The `InMemoryLookupableTableSource` behaves like point 2 in the above list, and 
> some private connectors in Blink behave like point 1.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-13218) '*.count not supported in TableApi query

2022-07-15 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13218:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
 (was: auto-deprioritized-major auto-unassigned stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> '*.count not supported in TableApi query
> 
>
> Key: FLINK-13218
> URL: https://issues.apache.org/jira/browse/FLINK-13218
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Planner
>Reporter: Jing Zhang
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned
>
> The following query is not supported yet:
> {code:java}
> val t = StreamTestData.get3TupleDataStream(env).toTable(tEnv, 'a, 'b, 'c)
>   .groupBy('b)
>   .select('b, 'a.sum, '*.count)
> {code}
> The following exception will be thrown.
> {code:java}
> org.apache.flink.table.api.ValidationException: Cannot resolve field [*], 
> input field list:[a, b, c].
>   at 
> org.apache.flink.table.expressions.resolver.rules.ReferenceResolverRule$ExpressionResolverVisitor.failForField(ReferenceResolverRule.java:80)
>   at 
> org.apache.flink.table.expressions.resolver.rules.ReferenceResolverRule$ExpressionResolverVisitor.lambda$null$4(ReferenceResolverRule.java:75)
>   at java.util.Optional.orElseThrow(Optional.java:290)
>   at 
> org.apache.flink.table.expressions.resolver.rules.ReferenceResolverRule$ExpressionResolverVisitor.lambda$null$5(ReferenceResolverRule.java:75)
>   at java.util.Optional.orElseGet(Optional.java:267)
>   at 
> org.apache.flink.table.expressions.resolver.rules.ReferenceResolverRule$ExpressionResolverVisitor.lambda$visit$6(ReferenceResolverRule.java:74)
>   at java.util.Optional.orElseGet(Optional.java:267)
>   at 
> org.apache.flink.table.expressions.resolver.rules.ReferenceResolverRule$ExpressionResolverVisitor.visit(ReferenceResolverRule.java:71)
>   at 
> org.apache.flink.table.expressions.resolver.rules.ReferenceResolverRule$ExpressionResolverVisitor.visit(ReferenceResolverRule.java:51)
>   at 
> ...
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-22411) Checkpoint failed caused by Mkdirs failed to create file, the path for Flink state.checkpoints.dir in docker-compose can not work from Flink Operations Playground

2022-07-15 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22411:
---
Labels: auto-deprioritized-major pull-request-available stale-minor  (was: 
auto-deprioritized-major pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Checkpoint failed caused by Mkdirs failed to create file, the path for Flink 
> state.checkpoints.dir in docker-compose can not work from Flink Operations 
> Playground
> --
>
> Key: FLINK-22411
> URL: https://issues.apache.org/jira/browse/FLINK-22411
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.12.2
>Reporter: Serge
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available, 
> stale-minor
> Attachments: screenshot-1.png
>
>
> docker-compose starts correctly, but after several minutes of work, when 
> Apache Flink has to create checkpoints, there is a problem with access to the 
> file system, so the next step in [Observing Failure & 
> Recovery|https://ci.apache.org/projects/flink/flink-docs-release-1.12/try-flink/flink-operations-playground.html#observing-failure–recovery]
>  cannot be carried out.
> Exception:
> {code:java}
> org.apache.flink.runtime.checkpoint.CheckpointException: Could not finalize 
> the pending checkpoint 104. Failure reason: Failure to finalize checkpoint.
> at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.completePendingCheckpoint(CheckpointCoordinator.java:1216)
>  ~[flink-dist_2.11-1.12.1.jar:1.12.1]
> …..
> Caused by: org.apache.flink.util.SerializedThrowable: Mkdirs failed to create 
> file:/tmp/flink-checkpoints-directory/d73c2f87b0d7ea6748a1913ee4b50afe/chk-104
> at 
> org.apache.flink.core.fs.local.LocalFileSystem.create(LocalFileSystem.java:262)
>  ~[flink-dist_2.11-1.12.1.jar:1.12.1]
> {code}
> To make it work, add a step:
> Create the checkpoint and savepoint directories on the Docker host machine 
> (these volumes are mounted by the jobmanager and taskmanager, as specified in 
> docker-compose.yaml):
> {code:bash}
> mkdir -p /tmp/flink-checkpoints-directory
> mkdir -p /tmp/flink-savepoints-directory
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-13247) Implement external shuffle service for YARN

2022-07-15 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13247:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor  (was: 
auto-deprioritized-major stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Implement external shuffle service for YARN
> ---
>
> Key: FLINK-13247
> URL: https://issues.apache.org/jira/browse/FLINK-13247
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Network
>Reporter: MalcolmSanders
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> Flink batch job users could achieve better cluster utilization and job 
> throughput through an external shuffle service, because the producers of 
> intermediate result partitions can be released once those partitions have 
> been persisted on disk. In 
> [FLINK-10653|https://issues.apache.org/jira/browse/FLINK-10653], [~zjwang] 
> introduced a pluggable shuffle manager architecture which abstracts the 
> process of data transfer between stages out of the Flink runtime as a shuffle 
> service. I propose a YARN implementation of the Flink external shuffle service, 
> since YARN is widely used in various companies.
> The basic idea is as follows:
> (1) Producers write intermediate result partitions to local disks assigned by 
> the NodeManager;
> (2) YARN shuffle servers, deployed on each NodeManager as an auxiliary 
> service, are informed of the intermediate result partition descriptions by the 
> producers;
> (3) Consumers fetch intermediate result partitions from the YARN shuffle servers.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-27299) flink parsing parameter bug

2022-07-15 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-27299:
---
Labels: easyfix pull-request-available stale-assigned  (was: easyfix 
pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 30 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it.


> flink parsing parameter bug
> ---
>
> Key: FLINK-27299
> URL: https://issues.apache.org/jira/browse/FLINK-27299
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Configuration
>Affects Versions: 1.11.6, 1.13.5, 1.14.2, 1.13.6, 1.14.3, 1.14.4
>Reporter: Huajie Wang
>Assignee: Huajie Wang
>Priority: Minor
>  Labels: easyfix, pull-request-available, stale-assigned
> Fix For: 1.16.0
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> When I run a Flink job and specify a parameter that contains a "#" sign, 
> parsing fails.
> e.g.: flink run com.myJob --sink.password db@123#123 
> Only the content in front of "#" is parsed. After reading the source code, I 
> found that the parameters are truncated at "#" in the loadYAMLResource method 
> of GlobalConfiguration. This part needs to be improved.
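For illustration only, a hypothetical sketch of the failure mode being described (this is not the actual GlobalConfiguration code): naive '#' comment stripping also truncates values that legitimately contain '#'.

    public class HashCommentTruncationSketch {
        public static void main(String[] args) {
            // Hypothetical sketch: stripping a trailing "#..." comment from a config line.
            String line = "sink.password: db@123#123";
            int commentStart = line.indexOf('#');
            if (commentStart >= 0) {
                // Keeps only "sink.password: db@123" -- the rest of the password is silently lost.
                line = line.substring(0, commentStart).trim();
            }
            System.out.println(line);
        }
    }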



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-12130) Apply command line options to configuration before installing security modules

2022-07-15 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-12130:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
pull-request-available  (was: auto-deprioritized-major auto-unassigned 
pull-request-available stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Apply command line options to configuration before installing security modules
> --
>
> Key: FLINK-12130
> URL: https://issues.apache.org/jira/browse/FLINK-12130
> Project: Flink
>  Issue Type: Improvement
>  Components: Command Line Client
>Reporter: jiasheng55
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned, pull-request-available
>
> Currently if the user configures Kerberos credentials through command line, 
> it won't work.
> {code:java}
> // flink run -m yarn-cluster -yD 
> security.kerberos.login.keytab=/path/to/keytab -yD 
> security.kerberos.login.principal=xxx /path/to/test.jar
> {code}
> Above command would cause security failure if you do not have a ticket cache 
> w/ kinit.
> Maybe we could call 
> _org.apache.flink.client.cli.AbstractCustomCommandLine#applyCommandLineOptionsToConfiguration_
>   before _SecurityUtils.install(new 
> SecurityConfiguration(cli.configuration));_
> Here is a demo patch: 
> [https://github.com/jiasheng55/flink/commit/ef6880dba8a1f36849f5d1bb308405c421b29986]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-15832) Test jars are installed twice

2022-07-15 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-15832:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
build  (was: auto-deprioritized-major auto-unassigned build stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Test jars are installed twice
> -
>
> Key: FLINK-15832
> URL: https://issues.apache.org/jira/browse/FLINK-15832
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.9.1
>Reporter: static-max
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned, build
>
> I built Flink 1.9.1 myself and merged the changes from 
> [https://github.com/apache/flink/pull/10936].
> When I uploaded the artifacts to our repository (using {{mvn deploy }}
> {{-DaltDeploymentRepository}}) the build fails as 
> {{flink-metrics-core-tests}} will be uploaded twice and we have redeployments 
> disabled.
>  
> I'm not sure if other artifacts are affected as well, as I enabled 
> redeployment as a quick workaround.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-13213) MinIdleStateRetentionTime/MaxIdleStateRetentionTime in TableConfig will be removed after call toAppendStream/toRetractStream without QueryConfig parameters

2022-07-15 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13213:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
pull-request-available  (was: auto-deprioritized-major auto-unassigned 
pull-request-available stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> MinIdleStateRetentionTime/MaxIdleStateRetentionTime in TableConfig will be 
> removed after call toAppendStream/toRetractStream without QueryConfig 
> parameters
> ---
>
> Key: FLINK-13213
> URL: https://issues.apache.org/jira/browse/FLINK-13213
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Jing Zhang
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There are two `toAppendStream` methods in `StreamTableEnvironment`:
> 1. def toAppendStream[T: TypeInformation](table: Table): DataStream[T]
> 2. def toAppendStream[T: TypeInformation](table: Table, queryConfig: 
> StreamQueryConfig): DataStream[T]
> After converting a `Table` to a `DataStream` by calling the first method (or 
> toRetractStream without a QueryConfig parameter), the 
> MinIdleStateRetentionTime/MaxIdleStateRetentionTime in TableConfig will be 
> removed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-13301) Some PlannerExpression resultType is not consistent with Calcite Type inference

2022-07-15 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13301:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
 (was: auto-deprioritized-major auto-unassigned stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Some PlannerExpression resultType is not consistent with Calcite Type 
> inference
> ---
>
> Key: FLINK-13301
> URL: https://issues.apache.org/jira/browse/FLINK-13301
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Jing Zhang
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned
>
> Some PlannerExpression result types are not consistent with Calcite's type 
> inference. The problem can happen when running the following example:
> {code:java}
> // prepare source Data
> val testData = new mutable.MutableList[(Int)]
> testData.+=((3))
> val t = env.fromCollection(testData).toTable(tEnv).as('a)
> // register a TableSink
> val fieldNames = Array("f0")
> val fieldTypes: Array[TypeInformation[_]] = Array(Types.INT())
> //val fieldTypes: Array[TypeInformation[_]] = Array(Types.LONG())
> val sink = new MemoryTableSourceSinkUtil.UnsafeMemoryAppendTableSink
> tEnv.registerTableSink("targetTable", sink.configure(fieldNames, 
> fieldTypes))
> 
> t.select('a.floor()).insertInto("targetTable")
> env.execute()
> {code}
> The cause is that the result type of `floor` is LONG_TYPE_INFO, while Calcite's 
> `SqlFloorFunction` infers the return type to be the type of the first argument 
> (INT in the above case).
> If I set `fieldTypes` to Array(Types.INT()), the following error will be 
> thrown in the compile phase.
> {code:java}
> org.apache.flink.table.api.ValidationException: Field types of query result 
> and registered TableSink [targetTable] do not match.
> Query result schema: [_c0: Long]
> TableSink schema:[f0: Integer]
>   at 
> org.apache.flink.table.sinks.TableSinkUtils$.validateSink(TableSinkUtils.scala:59)
>   at 
> org.apache.flink.table.planner.StreamPlanner$$anonfun$2.apply(StreamPlanner.scala:158)
>   at 
> org.apache.flink.table.planner.StreamPlanner$$anonfun$2.apply(StreamPlanner.scala:157)
>   at scala.Option.map(Option.scala:146)
>   at 
> org.apache.flink.table.planner.StreamPlanner.org$apache$flink$table$planner$StreamPlanner$$translate(StreamPlanner.scala:157)
>   at 
> org.apache.flink.table.planner.StreamPlanner$$anonfun$translate$1.apply(StreamPlanner.scala:129)
>   at 
> org.apache.flink.table.planner.StreamPlanner$$anonfun$translate$1.apply(StreamPlanner.scala:129)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> {code}
> And if I change `fieldTypes` to Array(Types.LONG()), another error will be 
> thrown at runtime.
> {code:java}
> org.apache.flink.table.api.TableException: Result field does not match 
> requested type. Requested: Long; Actual: Integer
>   at 
> org.apache.flink.table.planner.Conversions$$anonfun$generateRowConverterFunction$2.apply(Conversions.scala:103)
>   at 
> org.apache.flink.table.planner.Conversions$$anonfun$generateRowConverterFunction$2.apply(Conversions.scala:98)
>   at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>   at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
>   at 
> org.apache.flink.table.planner.Conversions$.generateRowConverterFunction(Conversions.scala:98)
>   at 
> org.apache.flink.table.planner.DataStreamConversions$.getConversionMapper(DataStreamConversions.scala:135)
>   at 
> org.apache.flink.table.planner.DataStreamConversions$.convert(DataStreamConversions.scala:91)
> {code}
> {color:red}The same inconsistency also exists in `Floor`, `Ceil`, `Mod`, and 
> so on.{color}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-14875) Blink planner should pass the right ExecutionConfig to the creation of serializer

2022-07-15 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-14875:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor  (was: 
auto-deprioritized-major stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Blink planner should pass the right ExecutionConfig to the creation of 
> serializer
> -
>
> Key: FLINK-14875
> URL: https://issues.apache.org/jira/browse/FLINK-14875
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Reporter: Jing Zhang
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
> Attachments: ImmutableCollectionKryoDeserializerITCase.java
>
>
> If the source contains data which has an immutable collection, the following 
> exception will be thrown:
> {code:java}
> Caused by: com.esotericsoftware.kryo.KryoException: 
> java.lang.UnsupportedOperationException
> Serialization trace:
> logTags_ (com.aliyun.openservices.log.common.Logs$LogGroup)
> mLogGroup (com.aliyun.openservices.log.common.LogGroupData)
>   at 
> com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125)
>   at 
> com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:528)
>   at com.esotericsoftware.kryo.Kryo.readObjectOrNull(Kryo.java:730)
>   at 
> com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:113)
>   at 
> com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:528)
>   at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:761)
>   at 
> org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.deserialize(KryoSerializer.java:315)
>   at 
> org.apache.flink.api.common.typeutils.base.ListSerializer.deserialize(ListSerializer.java:138)
>   at 
> org.apache.flink.api.common.typeutils.base.ListSerializer.deserialize(ListSerializer.java:47)
>   at 
> org.apache.flink.util.InstantiationUtil.deserializeFromByteArray(InstantiationUtil.java:463)
>   at 
> org.apache.flink.table.dataformat.BinaryRow.getGeneric(BinaryRow.java:440)
>   at BaseRowSerializerProjection$52.apply(Unknown Source)
>   at BaseRowSerializerProjection$52.apply(Unknown Source)
>   at 
> org.apache.flink.table.typeutils.BaseRowSerializer.baseRowToBinary(BaseRowSerializer.java:250)
>   at 
> org.apache.flink.table.typeutils.BaseRowSerializer.serializeToPages(BaseRowSerializer.java:285)
>   at 
> org.apache.flink.table.typeutils.BaseRowSerializer.serializeToPages(BaseRowSerializer.java:55)
>   at 
> org.apache.flink.table.runtime.sort.BinaryInMemorySortBuffer.write(BinaryInMemorySortBuffer.java:190)
>   at 
> org.apache.flink.table.runtime.sort.BinaryExternalSorter.write(BinaryExternalSorter.java:540)
>   ... 10 more
> Caused by: java.lang.UnsupportedOperationException
>   at 
> java.util.Collections$UnmodifiableCollection.add(Collections.java:1055)
>   at 
> com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:109)
>   at 
> com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:22)
>   at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:679)
>   at 
> com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
>   ... 27 more
> {code}
> The exception also appears in a simple ITCase in the attachments.
> I found a similar problem in [How to set Unmodifiable collection serializer of 
> Kryo in Spark 
> code|https://stackoverflow.com/questions/46818293/how-to-set-unmodifiable-collection-serializer-of-kryo-in-spark-code].
> Is there any way to set an unmodifiable collection serializer for Kryo in Flink at 
> present? 
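
A minimal sketch of how such a serializer would normally be registered on the ExecutionConfig, 
assuming the de.javakaffee kryo-serializers library and its UnmodifiableCollectionsSerializer 
are on the classpath (neither is referenced in the ticket); since the ticket is about the Blink 
planner not passing the configured ExecutionConfig through to serializer creation, the 
registration below is illustrative rather than a confirmed workaround:

{code:java}
import java.util.ArrayList;
import java.util.Collections;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

import de.javakaffee.kryoserializers.UnmodifiableCollectionsSerializer;

public class UnmodifiableCollectionRegistration {

    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Kryo's default CollectionSerializer calls add() on the target collection during
        // deserialization, which throws UnsupportedOperationException for
        // java.util.Collections$Unmodifiable* types (as in the stack trace above).
        // Registering a dedicated serializer for the concrete runtime class avoids that path,
        // provided the planner actually honors this ExecutionConfig when creating serializers.
        Class<?> unmodifiableListClass =
                Collections.unmodifiableList(new ArrayList<>()).getClass();
        env.getConfig()
                .addDefaultKryoSerializer(unmodifiableListClass, UnmodifiableCollectionsSerializer.class);
    }
}
{code}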



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-15077) Support Semi/Anti LookupJoin in Blink planner

2022-07-15 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-15077:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
 (was: auto-deprioritized-major auto-unassigned stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Support Semi/Anti LookupJoin in Blink planner
> -
>
> Key: FLINK-15077
> URL: https://issues.apache.org/jira/browse/FLINK-15077
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Planner
>Reporter: Jing Zhang
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned
>
> Support the following SQL in the Blink planner:
> {code:sql}
> SELECT T.id, T.len, T.content FROM T WHERE T.id IN (
>   SELECT id FROM csvDim FOR SYSTEM_TIME AS OF PROCTIME() AS D)
> {code}
> {code:sql}
> SELECT T.id, T.len, T.content FROM T WHERE EXISTS (
>   SELECT * FROM csvDim FOR SYSTEM_TIME AS OF PROCTIME() AS D WHERE T.id = 
> D.id)
> {code}
> {code:sql}
> SELECT T.id, T.len, T.content FROM T WHERE NOT EXISTS (
>   SELECT * FROM csvDim FOR SYSTEM_TIME AS OF PROCTIME() AS D WHERE T.id = 
> D.id)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-statefun-playground] dependabot[bot] closed pull request #31: Bump undertow-core from 1.4.18.Final to 2.2.11.Final in /java/showcase

2022-07-15 Thread GitBox


dependabot[bot] closed pull request #31: Bump undertow-core from 1.4.18.Final 
to 2.2.11.Final in /java/showcase
URL: https://github.com/apache/flink-statefun-playground/pull/31


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-statefun-playground] dependabot[bot] commented on pull request #31: Bump undertow-core from 1.4.18.Final to 2.2.11.Final in /java/showcase

2022-07-15 Thread GitBox


dependabot[bot] commented on PR #31:
URL: 
https://github.com/apache/flink-statefun-playground/pull/31#issuecomment-1185962838

   Superseded by #35.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-statefun-playground] dependabot[bot] closed pull request #29: Bump undertow-core from 1.4.18.Final to 2.2.11.Final in /java/connected-components

2022-07-15 Thread GitBox


dependabot[bot] closed pull request #29: Bump undertow-core from 1.4.18.Final 
to 2.2.11.Final in /java/connected-components
URL: https://github.com/apache/flink-statefun-playground/pull/29


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-statefun-playground] dependabot[bot] commented on pull request #32: Bump undertow-core from 1.4.18.Final to 2.2.11.Final in /java/greeter

2022-07-15 Thread GitBox


dependabot[bot] commented on PR #32:
URL: 
https://github.com/apache/flink-statefun-playground/pull/32#issuecomment-1185962699

   Superseded by #34.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-statefun-playground] dependabot[bot] commented on pull request #30: Bump undertow-core from 1.4.18.Final to 2.2.11.Final in /java/shopping-cart

2022-07-15 Thread GitBox


dependabot[bot] commented on PR #30:
URL: 
https://github.com/apache/flink-statefun-playground/pull/30#issuecomment-1185962674

   Superseded by #33.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-statefun-playground] dependabot[bot] commented on pull request #29: Bump undertow-core from 1.4.18.Final to 2.2.11.Final in /java/connected-components

2022-07-15 Thread GitBox


dependabot[bot] commented on PR #29:
URL: 
https://github.com/apache/flink-statefun-playground/pull/29#issuecomment-1185963023

   Superseded by #36.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-statefun-playground] dependabot[bot] opened a new pull request, #36: Bump undertow-core from 1.4.18.Final to 2.2.15.Final in /java/connected-components

2022-07-15 Thread GitBox


dependabot[bot] opened a new pull request, #36:
URL: https://github.com/apache/flink-statefun-playground/pull/36

   Bumps [undertow-core](https://github.com/undertow-io/undertow) from 
1.4.18.Final to 2.2.15.Final.
   
   Commits
   - c0b0d33 Prepare 2.2.15.Final
   - a3aebce Merge pull request #1290 from carterkozak/UNDERTOW-2019
   - 8531ff7 Merge pull request #1277 from gaol/test_undertow-1981
   - cda3aae Merge pull request #1289 from carterkozak/UNDERTOW-2018
   - 19a17fe Merge pull request #1288 from carterkozak/UNDERTOW-2017
   - c5298bd Merge pull request #1283 from baranowb/UNDERTOW-2012
   - ad3c5db Merge pull request #1281 from Ortus-Solutions/dev/undertow-2011
   - afb3e12 Merge pull request #1280 from aogburn/03102603
   - cd40388 [UNDERTOW-2025][UNDERTOW-1981] Test that client cannot access files inside of...
   - 7b5681b Merge pull request #1274 from baranowb/UNDERTOW-1994
   - Additional commits viewable in the compare view: https://github.com/undertow-io/undertow/compare/1.4.18.Final...2.2.15.Final
   
   
   
   
   
   [![Dependabot compatibility 
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=io.undertow:undertow-core=maven=1.4.18.Final=2.2.15.Final)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   
   Dependabot commands and options
   
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot merge` will merge this PR after your CI passes on it
   - `@dependabot squash and merge` will squash and merge this PR after your CI 
passes on it
   - `@dependabot cancel merge` will cancel a previously requested merge and 
block automerging
   - `@dependabot reopen` will reopen this PR if it is closed
   - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually
   - `@dependabot ignore this major version` will close this PR and stop 
Dependabot creating any more for this major version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this minor version` will close this PR and stop 
Dependabot creating any more for this minor version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this dependency` will close this PR and stop 
Dependabot creating any more for this dependency (unless you reopen the PR or 
upgrade to it yourself)
   - `@dependabot use these labels` will set the current labels as the default 
for future PRs for this repo and language
   - `@dependabot use these reviewers` will set the current reviewers as the 
default for future PRs for this repo and language
   - `@dependabot use these assignees` will set the current assignees as the 
default for future PRs for this repo and language
   - `@dependabot use this milestone` will set the current milestone as the 
default for future PRs for this repo and language
   
   You can disable automated security fix PRs for this repo from the [Security 
Alerts 

[GitHub] [flink-statefun-playground] dependabot[bot] opened a new pull request, #34: Bump undertow-core from 1.4.18.Final to 2.2.15.Final in /java/greeter

2022-07-15 Thread GitBox


dependabot[bot] opened a new pull request, #34:
URL: https://github.com/apache/flink-statefun-playground/pull/34

   Bumps [undertow-core](https://github.com/undertow-io/undertow) from 
1.4.18.Final to 2.2.15.Final.
   
   Commits
   - c0b0d33 Prepare 2.2.15.Final
   - a3aebce Merge pull request #1290 from carterkozak/UNDERTOW-2019
   - 8531ff7 Merge pull request #1277 from gaol/test_undertow-1981
   - cda3aae Merge pull request #1289 from carterkozak/UNDERTOW-2018
   - 19a17fe Merge pull request #1288 from carterkozak/UNDERTOW-2017
   - c5298bd Merge pull request #1283 from baranowb/UNDERTOW-2012
   - ad3c5db Merge pull request #1281 from Ortus-Solutions/dev/undertow-2011
   - afb3e12 Merge pull request #1280 from aogburn/03102603
   - cd40388 [UNDERTOW-2025][UNDERTOW-1981] Test that client cannot access files inside of...
   - 7b5681b Merge pull request #1274 from baranowb/UNDERTOW-1994
   - Additional commits viewable in the compare view: https://github.com/undertow-io/undertow/compare/1.4.18.Final...2.2.15.Final
   
   
   
   
   
   [![Dependabot compatibility 
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=io.undertow:undertow-core=maven=1.4.18.Final=2.2.15.Final)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   
   Dependabot commands and options
   
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot merge` will merge this PR after your CI passes on it
   - `@dependabot squash and merge` will squash and merge this PR after your CI 
passes on it
   - `@dependabot cancel merge` will cancel a previously requested merge and 
block automerging
   - `@dependabot reopen` will reopen this PR if it is closed
   - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually
   - `@dependabot ignore this major version` will close this PR and stop 
Dependabot creating any more for this major version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this minor version` will close this PR and stop 
Dependabot creating any more for this minor version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this dependency` will close this PR and stop 
Dependabot creating any more for this dependency (unless you reopen the PR or 
upgrade to it yourself)
   - `@dependabot use these labels` will set the current labels as the default 
for future PRs for this repo and language
   - `@dependabot use these reviewers` will set the current reviewers as the 
default for future PRs for this repo and language
   - `@dependabot use these assignees` will set the current assignees as the 
default for future PRs for this repo and language
   - `@dependabot use this milestone` will set the current milestone as the 
default for future PRs for this repo and language
   
   You can disable automated security fix PRs for this repo from the [Security 
Alerts 

[GitHub] [flink-statefun-playground] dependabot[bot] opened a new pull request, #33: Bump undertow-core from 1.4.18.Final to 2.2.15.Final in /java/shopping-cart

2022-07-15 Thread GitBox


dependabot[bot] opened a new pull request, #33:
URL: https://github.com/apache/flink-statefun-playground/pull/33

   Bumps [undertow-core](https://github.com/undertow-io/undertow) from 
1.4.18.Final to 2.2.15.Final.
   
   Commits
   - c0b0d33 Prepare 2.2.15.Final
   - a3aebce Merge pull request #1290 from carterkozak/UNDERTOW-2019
   - 8531ff7 Merge pull request #1277 from gaol/test_undertow-1981
   - cda3aae Merge pull request #1289 from carterkozak/UNDERTOW-2018
   - 19a17fe Merge pull request #1288 from carterkozak/UNDERTOW-2017
   - c5298bd Merge pull request #1283 from baranowb/UNDERTOW-2012
   - ad3c5db Merge pull request #1281 from Ortus-Solutions/dev/undertow-2011
   - afb3e12 Merge pull request #1280 from aogburn/03102603
   - cd40388 [UNDERTOW-2025][UNDERTOW-1981] Test that client cannot access files inside of...
   - 7b5681b Merge pull request #1274 from baranowb/UNDERTOW-1994
   - Additional commits viewable in the compare view: https://github.com/undertow-io/undertow/compare/1.4.18.Final...2.2.15.Final
   
   
   
   
   
   [![Dependabot compatibility 
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=io.undertow:undertow-core=maven=1.4.18.Final=2.2.15.Final)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   
   Dependabot commands and options
   
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot merge` will merge this PR after your CI passes on it
   - `@dependabot squash and merge` will squash and merge this PR after your CI 
passes on it
   - `@dependabot cancel merge` will cancel a previously requested merge and 
block automerging
   - `@dependabot reopen` will reopen this PR if it is closed
   - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually
   - `@dependabot ignore this major version` will close this PR and stop 
Dependabot creating any more for this major version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this minor version` will close this PR and stop 
Dependabot creating any more for this minor version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this dependency` will close this PR and stop 
Dependabot creating any more for this dependency (unless you reopen the PR or 
upgrade to it yourself)
   - `@dependabot use these labels` will set the current labels as the default 
for future PRs for this repo and language
   - `@dependabot use these reviewers` will set the current reviewers as the 
default for future PRs for this repo and language
   - `@dependabot use these assignees` will set the current assignees as the 
default for future PRs for this repo and language
   - `@dependabot use this milestone` will set the current milestone as the 
default for future PRs for this repo and language
   
   You can disable automated security fix PRs for this repo from the [Security 
Alerts 

[GitHub] [flink-statefun-playground] dependabot[bot] opened a new pull request, #35: Bump undertow-core from 1.4.18.Final to 2.2.15.Final in /java/showcase

2022-07-15 Thread GitBox


dependabot[bot] opened a new pull request, #35:
URL: https://github.com/apache/flink-statefun-playground/pull/35

   Bumps [undertow-core](https://github.com/undertow-io/undertow) from 
1.4.18.Final to 2.2.15.Final.
   
   Commits
   - c0b0d33 Prepare 2.2.15.Final
   - a3aebce Merge pull request #1290 from carterkozak/UNDERTOW-2019
   - 8531ff7 Merge pull request #1277 from gaol/test_undertow-1981
   - cda3aae Merge pull request #1289 from carterkozak/UNDERTOW-2018
   - 19a17fe Merge pull request #1288 from carterkozak/UNDERTOW-2017
   - c5298bd Merge pull request #1283 from baranowb/UNDERTOW-2012
   - ad3c5db Merge pull request #1281 from Ortus-Solutions/dev/undertow-2011
   - afb3e12 Merge pull request #1280 from aogburn/03102603
   - cd40388 [UNDERTOW-2025][UNDERTOW-1981] Test that client cannot access files inside of...
   - 7b5681b Merge pull request #1274 from baranowb/UNDERTOW-1994
   - Additional commits viewable in the compare view: https://github.com/undertow-io/undertow/compare/1.4.18.Final...2.2.15.Final
   
   
   
   
   
   [![Dependabot compatibility 
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=io.undertow:undertow-core=maven=1.4.18.Final=2.2.15.Final)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   
   Dependabot commands and options
   
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot merge` will merge this PR after your CI passes on it
   - `@dependabot squash and merge` will squash and merge this PR after your CI 
passes on it
   - `@dependabot cancel merge` will cancel a previously requested merge and 
block automerging
   - `@dependabot reopen` will reopen this PR if it is closed
   - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually
   - `@dependabot ignore this major version` will close this PR and stop 
Dependabot creating any more for this major version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this minor version` will close this PR and stop 
Dependabot creating any more for this minor version (unless you reopen the PR 
or upgrade to it yourself)
   - `@dependabot ignore this dependency` will close this PR and stop 
Dependabot creating any more for this dependency (unless you reopen the PR or 
upgrade to it yourself)
   - `@dependabot use these labels` will set the current labels as the default 
for future PRs for this repo and language
   - `@dependabot use these reviewers` will set the current reviewers as the 
default for future PRs for this repo and language
   - `@dependabot use these assignees` will set the current assignees as the 
default for future PRs for this repo and language
   - `@dependabot use this milestone` will set the current milestone as the 
default for future PRs for this repo and language
   
   You can disable automated security fix PRs for this repo from the [Security 
Alerts 

[GitHub] [flink-statefun-playground] dependabot[bot] closed pull request #32: Bump undertow-core from 1.4.18.Final to 2.2.11.Final in /java/greeter

2022-07-15 Thread GitBox


dependabot[bot] closed pull request #32: Bump undertow-core from 1.4.18.Final 
to 2.2.11.Final in /java/greeter
URL: https://github.com/apache/flink-statefun-playground/pull/32


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-statefun-playground] dependabot[bot] closed pull request #30: Bump undertow-core from 1.4.18.Final to 2.2.11.Final in /java/shopping-cart

2022-07-15 Thread GitBox


dependabot[bot] closed pull request #30: Bump undertow-core from 1.4.18.Final 
to 2.2.11.Final in /java/shopping-cart
URL: https://github.com/apache/flink-statefun-playground/pull/30


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] flinkbot commented on pull request #20287: Bump aws-java-sdk-s3 from 1.12.7 to 1.12.261 in /flink-connectors/flink-connector-kinesis

2022-07-15 Thread GitBox


flinkbot commented on PR #20287:
URL: https://github.com/apache/flink/pull/20287#issuecomment-1185953861

   
   ## CI report:
   
   * eda1b8daf6f54fdca6bd276624b564116030665e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] flinkbot commented on pull request #20286: Bump aws-java-sdk-s3 from 1.11.951 to 1.12.261 in /flink-filesystems/flink-s3-fs-base

2022-07-15 Thread GitBox


flinkbot commented on PR #20286:
URL: https://github.com/apache/flink/pull/20286#issuecomment-1185950414

   
   ## CI report:
   
   * 801141d5cd7a833e1fcdc1409f41cbbc5419ab12 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] flinkbot commented on pull request #20285: Bump aws-java-sdk-s3 from 1.11.171 to 1.12.261 in /flink-yarn

2022-07-15 Thread GitBox


flinkbot commented on PR #20285:
URL: https://github.com/apache/flink/pull/20285#issuecomment-1185950009

   
   ## CI report:
   
   * 2edaa8a696933cc2a1ed21c75dd1ebf59e920e3d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] dependabot[bot] opened a new pull request, #20287: Bump aws-java-sdk-s3 from 1.12.7 to 1.12.261 in /flink-connectors/flink-connector-kinesis

2022-07-15 Thread GitBox


dependabot[bot] opened a new pull request, #20287:
URL: https://github.com/apache/flink/pull/20287

   Bumps [aws-java-sdk-s3](https://github.com/aws/aws-sdk-java) from 1.12.7 to 
1.12.261.
   
   Changelog
   Sourced from aws-java-sdk-s3's changelog: https://github.com/aws/aws-sdk-java/blob/master/CHANGELOG.md
   
   1.12.261 2022-07-14
   AWS Config
   
   
   Features
   
   Update ResourceType enum with values for Route53Resolver, Batch, DMS, 
Workspaces, Stepfunctions, SageMaker, ElasticLoadBalancingV2, MSK types
   
   
   
   AWS Glue
   
   
   Features
   
   This release adds an additional worker type for Glue Streaming jobs.
   
   
   
   AWS Outposts
   
   
   Features
   
   This release adds the ShipmentInformation and AssetInformationList 
fields to the GetOrder API response.
   
   
   
   AWSKendraFrontendService
   
   
   Features
   
   This release adds AccessControlConfigurations which allow you to 
redefine your document level access control without the need for content 
re-indexing.
   
   
   
   Amazon Athena
   
   
   Features
   
   This release updates data types that contain either QueryExecutionId, 
NamedQueryId or ExpectedBucketOwner. Ids must be between 1 and 128 characters 
and contain only non-whitespace characters. ExpectedBucketOwner must be 
12-digit string.
   
   
   
   Amazon Elastic Compute Cloud
   
   
   Features
   
   This release adds flow logs for Transit Gateway to  allow customers to 
gain deeper visibility and insights into network traffic through their Transit 
Gateways.
   
   
   
   Amazon S3
   
   
   Bugfixes
   
   Fixed possible issue in TransferManager's downloadDirectory operation 
where files could be downloaded to some sibling directories of the destination 
directory if the key contained specially-crafted relative paths.
   
   
   
   Amazon SageMaker Service
   
   
   Features
   
   This release adds support for G5, P4d, and C6i instance types in Amazon 
SageMaker Inference and increases the number of hyperparameters that can be 
searched from 20 to 30 in Amazon SageMaker Automatic Model Tuning
   
   
   
   AmazonNimbleStudio
   
   
   Features
   
   Amazon Nimble Studio adds support for IAM-based access to AWS resources 
for Nimble Studio components and custom studio components. Studio Component 
scripts use these roles on Nimble Studio workstation to mount filesystems, 
access S3 buckets, or other configured resources in the Studio's AWS 
account
   
   
   
   CodeArtifact
   
   
   Features
   
   This release introduces Package Origin Controls, a mechanism used to 
counteract Dependency Confusion attacks. Adds two new APIs, 
PutPackageOriginConfiguration and DescribePackage, and updates the ListPackage, 
DescribePackageVersion and ListPackageVersion APIs in support of the 
feature.
   
   
   
   Firewall Management Service
   
   
   Features
   
   Adds support for strict ordering in stateful rule groups in Network 
Firewall policies.
   
   
   
   Inspector2
   
   
   Features
   
   This release adds support for Inspector V2 scan configurations through 
the get and update configuration APIs. Currently this allows configuring ECR 
automated re-scan duration to lifetime or 180 days or 30 days.
   
   
   
   1.12.260 2022-07-13
   
   
   ... (truncated)
   
   
   Commits
   - cb66c50 AWS SDK for Java 1.12.261
   - 685134e Update GitHub version number to 1.12.261-SNAPSHOT
   - d84 AWS SDK for Java 1.12.260
   - ae88c8a Update GitHub version number to 1.12.260-SNAPSHOT
   - 93a0a7f AWS SDK for Java 1.12.259
   - 5ec7cb7 Update GitHub version number to 1.12.259-SNAPSHOT
   - 75fe4e1 AWS SDK for Java 1.12.258
   - 8b6bdb0 Update GitHub version number to 1.12.258-SNAPSHOT
   - eba6423 AWS SDK for Java 1.12.257
   - d2f0b05 Update GitHub version number to 1.12.257-SNAPSHOT
   - Additional commits viewable in the compare view: https://github.com/aws/aws-sdk-java/compare/1.12.7...1.12.261
   
   
   
   
   
   [![Dependabot compatibility 

[GitHub] [flink] dependabot[bot] opened a new pull request, #20286: Bump aws-java-sdk-s3 from 1.11.951 to 1.12.261 in /flink-filesystems/flink-s3-fs-base

2022-07-15 Thread GitBox


dependabot[bot] opened a new pull request, #20286:
URL: https://github.com/apache/flink/pull/20286

   Bumps [aws-java-sdk-s3](https://github.com/aws/aws-sdk-java) from 1.11.951 
to 1.12.261.
   
   Changelog
   Sourced from aws-java-sdk-s3's changelog: https://github.com/aws/aws-sdk-java/blob/master/CHANGELOG.md
   
   1.12.261 2022-07-14
   AWS Config
   
   
   Features
   
   Update ResourceType enum with values for Route53Resolver, Batch, DMS, 
Workspaces, Stepfunctions, SageMaker, ElasticLoadBalancingV2, MSK types
   
   
   
   AWS Glue
   
   
   Features
   
   This release adds an additional worker type for Glue Streaming jobs.
   
   
   
   AWS Outposts
   
   
   Features
   
   This release adds the ShipmentInformation and AssetInformationList 
fields to the GetOrder API response.
   
   
   
   AWSKendraFrontendService
   
   
   Features
   
   This release adds AccessControlConfigurations which allow you to 
redefine your document level access control without the need for content 
re-indexing.
   
   
   
   Amazon Athena
   
   
   Features
   
   This release updates data types that contain either QueryExecutionId, 
NamedQueryId or ExpectedBucketOwner. Ids must be between 1 and 128 characters 
and contain only non-whitespace characters. ExpectedBucketOwner must be 
12-digit string.
   
   
   
   Amazon Elastic Compute Cloud
   
   
   Features
   
   This release adds flow logs for Transit Gateway to  allow customers to 
gain deeper visibility and insights into network traffic through their Transit 
Gateways.
   
   
   
   Amazon S3
   
   
   Bugfixes
   
   Fixed possible issue in TransferManager's downloadDirectory operation 
where files could be downloaded to some sibling directories of the destination 
directory if the key contained specially-crafted relative paths.
   
   
   
   Amazon SageMaker Service
   
   
   Features
   
   This release adds support for G5, P4d, and C6i instance types in Amazon 
SageMaker Inference and increases the number of hyperparameters that can be 
searched from 20 to 30 in Amazon SageMaker Automatic Model Tuning
   
   
   
   AmazonNimbleStudio
   
   
   Features
   
   Amazon Nimble Studio adds support for IAM-based access to AWS resources 
for Nimble Studio components and custom studio components. Studio Component 
scripts use these roles on Nimble Studio workstation to mount filesystems, 
access S3 buckets, or other configured resources in the Studio's AWS 
account
   
   
   
   CodeArtifact
   
   
   Features
   
   This release introduces Package Origin Controls, a mechanism used to 
counteract Dependency Confusion attacks. Adds two new APIs, 
PutPackageOriginConfiguration and DescribePackage, and updates the ListPackage, 
DescribePackageVersion and ListPackageVersion APIs in support of the 
feature.
   
   
   
   Firewall Management Service
   
   
   Features
   
   Adds support for strict ordering in stateful rule groups in Network 
Firewall policies.
   
   
   
   Inspector2
   
   
   Features
   
   This release adds support for Inspector V2 scan configurations through 
the get and update configuration APIs. Currently this allows configuring ECR 
automated re-scan duration to lifetime or 180 days or 30 days.
   
   
   
   1.12.260 2022-07-13
   
   
   ... (truncated)
   
   
   Commits
   - cb66c50 AWS SDK for Java 1.12.261
   - 685134e Update GitHub version number to 1.12.261-SNAPSHOT
   - d84 AWS SDK for Java 1.12.260
   - ae88c8a Update GitHub version number to 1.12.260-SNAPSHOT
   - 93a0a7f AWS SDK for Java 1.12.259
   - 5ec7cb7 Update GitHub version number to 1.12.259-SNAPSHOT
   - 75fe4e1 AWS SDK for Java 1.12.258
   - 8b6bdb0 Update GitHub version number to 1.12.258-SNAPSHOT
   - eba6423 AWS SDK for Java 1.12.257
   - d2f0b05 Update GitHub version number to 1.12.257-SNAPSHOT
   - Additional commits viewable in the compare view: https://github.com/aws/aws-sdk-java/compare/1.11.951...1.12.261
   
   
   
   
   
   [![Dependabot compatibility 

[GitHub] [flink] dependabot[bot] opened a new pull request, #20285: Bump aws-java-sdk-s3 from 1.11.171 to 1.12.261 in /flink-yarn

2022-07-15 Thread GitBox


dependabot[bot] opened a new pull request, #20285:
URL: https://github.com/apache/flink/pull/20285

   Bumps [aws-java-sdk-s3](https://github.com/aws/aws-sdk-java) from 1.11.171 
to 1.12.261.
   
   Changelog
   Sourced from aws-java-sdk-s3's changelog: https://github.com/aws/aws-sdk-java/blob/master/CHANGELOG.md
   
   1.12.261 2022-07-14
   AWS Config
   
   
   Features
   
   Update ResourceType enum with values for Route53Resolver, Batch, DMS, 
Workspaces, Stepfunctions, SageMaker, ElasticLoadBalancingV2, MSK types
   
   
   
   AWS Glue
   
   
   Features
   
   This release adds an additional worker type for Glue Streaming jobs.
   
   
   
   AWS Outposts
   
   
   Features
   
   This release adds the ShipmentInformation and AssetInformationList 
fields to the GetOrder API response.
   
   
   
   AWSKendraFrontendService
   
   
   Features
   
   This release adds AccessControlConfigurations which allow you to 
redefine your document level access control without the need for content 
re-indexing.
   
   
   
   Amazon Athena
   
   
   Features
   
   This release updates data types that contain either QueryExecutionId, 
NamedQueryId or ExpectedBucketOwner. Ids must be between 1 and 128 characters 
and contain only non-whitespace characters. ExpectedBucketOwner must be 
12-digit string.
   
   
   
   Amazon Elastic Compute Cloud
   
   
   Features
   
   This release adds flow logs for Transit Gateway to  allow customers to 
gain deeper visibility and insights into network traffic through their Transit 
Gateways.
   
   
   
   Amazon S3
   
   
   Bugfixes
   
   Fixed possible issue in TransferManager's downloadDirectory operation 
where files could be downloaded to some sibling directories of the destination 
directory if the key contained specially-crafted relative paths.
   
   
   
   Amazon SageMaker Service
   
   
   Features
   
   This release adds support for G5, P4d, and C6i instance types in Amazon 
SageMaker Inference and increases the number of hyperparameters that can be 
searched from 20 to 30 in Amazon SageMaker Automatic Model Tuning
   
   
   
   AmazonNimbleStudio
   
   
   Features
   
   Amazon Nimble Studio adds support for IAM-based access to AWS resources 
for Nimble Studio components and custom studio components. Studio Component 
scripts use these roles on Nimble Studio workstation to mount filesystems, 
access S3 buckets, or other configured resources in the Studio's AWS 
account
   
   
   
   CodeArtifact
   
   
   Features
   
   This release introduces Package Origin Controls, a mechanism used to 
counteract Dependency Confusion attacks. Adds two new APIs, 
PutPackageOriginConfiguration and DescribePackage, and updates the ListPackage, 
DescribePackageVersion and ListPackageVersion APIs in support of the 
feature.
   
   
   
   Firewall Management Service
   
   
   Features
   
   Adds support for strict ordering in stateful rule groups in Network 
Firewall policies.
   
   
   
   Inspector2
   
   
   Features
   
   This release adds support for Inspector V2 scan configurations through 
the get and update configuration APIs. Currently this allows configuring ECR 
automated re-scan duration to lifetime or 180 days or 30 days.
   
   
   
   1.12.260 2022-07-13
   
   
   ... (truncated)
   
   
   Commits
   - cb66c50 AWS SDK for Java 1.12.261
   - 685134e Update GitHub version number to 1.12.261-SNAPSHOT
   - d84 AWS SDK for Java 1.12.260
   - ae88c8a Update GitHub version number to 1.12.260-SNAPSHOT
   - 93a0a7f AWS SDK for Java 1.12.259
   - 5ec7cb7 Update GitHub version number to 1.12.259-SNAPSHOT
   - 75fe4e1 AWS SDK for Java 1.12.258
   - 8b6bdb0 Update GitHub version number to 1.12.258-SNAPSHOT
   - eba6423 AWS SDK for Java 1.12.257
   - d2f0b05 Update GitHub version number to 1.12.257-SNAPSHOT
   - Additional commits viewable in the compare view: https://github.com/aws/aws-sdk-java/compare/1.11.171...1.12.261
   
   
   
   
   
   [![Dependabot compatibility 

[GitHub] [flink-kubernetes-operator] gyfora commented on a diff in pull request #278: [WIP] Add standalone mode support

2022-07-15 Thread GitBox


gyfora commented on code in PR #278:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/278#discussion_r922510295


##
flink-kubernetes-mock-shaded/pom.xml:
##
@@ -0,0 +1,154 @@
+

Review Comment:
   I completely removed the whole mock shaded module 
(https://github.com/gyfora/flink-kubernetes-operator/commit/db0fbcc3e97ce6c6c7eb9a4575ab1305fd26c34c)
 and everything seems to still work. All tests still pass.
   
   Do we still need this? What am I missing?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] reswqa commented on a diff in pull request #20271: [FLINK-28547][runtime] Add IT cases for SpeculativeScheduler.

2022-07-15 Thread GitBox


reswqa commented on code in PR #20271:
URL: https://github.com/apache/flink/pull/20271#discussion_r922441015


##
flink-tests/src/test/java/org/apache/flink/test/scheduling/SpeculativeSchedulerITCase.java:
##
@@ -0,0 +1,245 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.test.scheduling;
+
+import org.apache.flink.api.common.RuntimeExecutionMode;
+import org.apache.flink.api.common.functions.RichMapFunction;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.configuration.JobManagerOptions;
+import org.apache.flink.configuration.MemorySize;
+import org.apache.flink.configuration.RestOptions;
+import org.apache.flink.configuration.RestartStrategyOptions;
+import org.apache.flink.configuration.SlowTaskDetectorOptions;
+import org.apache.flink.configuration.TaskManagerOptions;
+import org.apache.flink.core.execution.JobClient;
+import org.apache.flink.runtime.scheduler.adaptivebatch.SpeculativeScheduler;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
+
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.io.TempDir;
+
+import java.nio.file.Path;
+import java.time.Duration;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.TimeoutException;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+import java.util.stream.LongStream;
+
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Assertions.assertThatThrownBy;
+
+/** IT cases for {@link SpeculativeScheduler}. */
+class SpeculativeSchedulerITCase {
+
+@TempDir private Path temporaryFolder;
+private static final int MAX_PARALLELISM = 4;
+private static final int NUMBERS_TO_PRODUCE = 1;
+private static final int FAILURE_COUNT = 20;
+
+private int parallelism;
+
+// the key is the subtask index so that different attempts will not add 
duplicated results
+private static ConcurrentMap<Integer, Map<Long, Long>> numberCountResults;
+
+private Map<Long, Long> expectedResult;
+
+@BeforeEach
+void setUp() {
+parallelism = 4;
+
+expectedResult =
+LongStream.range(0, NUMBERS_TO_PRODUCE)
+.boxed()
+.collect(Collectors.toMap(Function.identity(), i -> 
1L));
+
+NumberCounterMap.toFailCounter.set(0);
+
+numberCountResults = new ConcurrentHashMap<>();
+}
+
+@Test
+void testSpeculativeExecution() throws Exception {
+executeJob();
+waitUntilJobArchived();
+checkResults();
+}
+
+@Test
+void testSpeculativeExecutionWithFailover() throws Exception {
+NumberCounterMap.toFailCounter.set(FAILURE_COUNT);
+executeJob();
+waitUntilJobArchived();
+checkResults();
+}
+
+@Test
+void testSpeculativeExecutionWithAdaptiveParallelism() throws Exception {
+parallelism = -1;
+executeJob();
+waitUntilJobArchived();
+checkResults();
+}
+
+@Test
+void testBlockSlowNodeInSpeculativeExecution() throws Exception {
+final Configuration configuration = new Configuration();
+configuration.set(JobManagerOptions.BLOCK_SLOW_NODE_DURATION, 
Duration.ofMinutes(1));
+JobClient client = executeJobAsync(configuration);
+
+assertThatThrownBy(
+() -> client.getJobExecutionResult().get(10, 
TimeUnit.SECONDS),
+"The local node is expected to be blocked but it is 
not.")
+.isInstanceOf(TimeoutException.class);
+}
+
+private void checkResults() {
+final Map<Long, Long> numberCountResultMap =
+numberCountResults.values().stream()
+.flatMap(map -> map.entrySet().stream())
+.collect(
+   

[GitHub] [flink-kubernetes-operator] gyfora commented on pull request #278: [WIP] Add standalone mode support

2022-07-15 Thread GitBox


gyfora commented on PR #278:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/278#issuecomment-1185818168

   Also I cannot seem to run some tests from IntelliJ:
   
   `Fabric8FlinkStandaloneKubeClientTest`
   ```
   Test ignored.
   
   Test ignored.
   
   java.lang.NoSuchMethodError: 'com.fasterxml.jackson.databind.ObjectMapper 
io.fabric8.kubernetes.client.utils.Serialization.jsonMapper()'
   ```
   Do you have any idea how to fix this?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-kubernetes-operator] gyfora commented on a diff in pull request #278: [WIP] Add standalone mode support

2022-07-15 Thread GitBox


gyfora commented on code in PR #278:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/278#discussion_r922428482


##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/service/StandaloneFlinkService.java:
##
@@ -0,0 +1,168 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.kubernetes.operator.service;
+
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.client.deployment.ClusterDeploymentException;
+import org.apache.flink.client.deployment.ClusterSpecification;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.kubernetes.KubernetesClusterClientFactory;
+import org.apache.flink.kubernetes.configuration.KubernetesConfigOptions;
+import org.apache.flink.kubernetes.operator.config.FlinkConfigManager;
+import org.apache.flink.kubernetes.operator.crd.FlinkDeployment;
+import org.apache.flink.kubernetes.operator.crd.spec.JobSpec;
+import org.apache.flink.kubernetes.operator.crd.spec.UpgradeMode;
+import org.apache.flink.kubernetes.operator.crd.status.FlinkDeploymentStatus;
+import 
org.apache.flink.kubernetes.operator.kubeclient.Fabric8FlinkStandaloneKubeClient;
+import 
org.apache.flink.kubernetes.operator.kubeclient.FlinkStandaloneKubeClient;
+import 
org.apache.flink.kubernetes.operator.standalone.KubernetesStandaloneClusterDescriptor;
+import org.apache.flink.kubernetes.operator.utils.StandaloneKubernetesUtils;
+import org.apache.flink.kubernetes.utils.KubernetesUtils;
+import org.apache.flink.util.concurrent.ExecutorThreadFactory;
+
+import io.fabric8.kubernetes.api.model.ObjectMeta;
+import io.fabric8.kubernetes.api.model.PodList;
+import io.fabric8.kubernetes.client.KubernetesClient;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+
+import static 
org.apache.flink.kubernetes.utils.Constants.LABEL_CONFIGMAP_TYPE_HIGH_AVAILABILITY;
+
+/**
+ * Implementation of {@link FlinkService} submitting and interacting with 
Standalone Kubernetes
+ * Flink clusters and jobs.
+ */
+public class StandaloneFlinkService extends AbstractFlinkService {
+
+private static final Logger LOG = 
LoggerFactory.getLogger(StandaloneFlinkService.class);
+
+public StandaloneFlinkService(
+KubernetesClient kubernetesClient, FlinkConfigManager 
configManager) {
+super(kubernetesClient, configManager);
+}
+
+@Override
+protected void deployApplicationCluster(JobSpec jobSpec, Configuration 
conf) throws Exception {
+LOG.info("Deploying application cluster");
+submitClusterInternal(conf);
+LOG.info("Application cluster successfully deployed");
+}
+
+@Override
+public void submitSessionCluster(Configuration conf) throws Exception {
+LOG.info("Deploying session cluster");
+submitClusterInternal(conf);
+LOG.info("Session cluster successfully deployed");
+}
+
+@Override
+public void cancelJob(FlinkDeployment deployment, UpgradeMode upgradeMode, 
Configuration conf)
+throws Exception {
+cancelJob(deployment, upgradeMode, conf, true);
+}
+
+@Override
+public void deleteClusterDeployment(
+ObjectMeta meta, FlinkDeploymentStatus status, boolean 
deleteHaData) {
+deleteClusterInternal(meta, deleteHaData);
+}
+
+@Override
+protected PodList getJmPodList(String namespace, String clusterId) {
+return kubernetesClient
+.pods()
+.inNamespace(namespace)
+
.withLabels(StandaloneKubernetesUtils.getJobManagerSelectors(clusterId))
+.list();
+}
+
+@VisibleForTesting
+protected FlinkStandaloneKubeClient createNamespacedKubeClient(
+Configuration configuration, String namespace) {
+final int poolSize =
+
configuration.get(KubernetesConfigOptions.KUBERNETES_CLIENT_IO_EXECUTOR_POOL_SIZE);
+
+ExecutorService executorService =
+Executors.newFixedThreadPool(
+poolSize,
+new 

[GitHub] [flink-kubernetes-operator] gyfora commented on pull request #278: [WIP] Add standalone mode support

2022-07-15 Thread GitBox


gyfora commented on PR #278:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/278#issuecomment-1185789710

   I hit a few issues during my initial local testing:
   
   1. Had a maven test failure:
   `StandaloneKubernetesJobManagerFactoryTest.testFlinkConfConfigMap:269 
expected: <1> but was: <2>`
   
   2. Noticed that after running the tests some resources were deployed to my 
local minikube and kind of messed up the operator state there: 
`flink-operator-test/test-session-cluster`
   
   3. The docker build seems to fail in a fresh minikube env:
   
   ```
   > [build 4/5] RUN --mount=type=cache,target=/root/.m2 mvn -ntp clean install 
-pl !flink-kubernetes-docs -DskipTests=true:
   #14 0.960 [INFO] Scanning for projects...
   #14 76.22 [ERROR] [ERROR] Some problems were encountered while processing 
the POMs:
   #14 76.22 [FATAL] Non-resolvable parent POM for 
org.apache.flink:flink-kubernetes-operator-parent:1.1-SNAPSHOT: Could not 
transfer artifact org.apache:apache:pom:23 from/to central 
(https://repo.maven.apache.org/maven2): transfer failed for 
https://repo.maven.apache.org/maven2/org/apache/apache/23/apache-23.pom and 
'parent.relativePath' points at wrong local POM @ line 22, column 13
   #14 76.22  @
   #14 76.22 [ERROR] The build could not read 1 project -> [Help 1]
   #14 76.22 [ERROR]
   #14 76.22 [ERROR]   The project 
org.apache.flink:flink-kubernetes-operator-parent:1.1-SNAPSHOT (/app/pom.xml) 
has 1 error
   #14 76.22 [ERROR] Non-resolvable parent POM for 
org.apache.flink:flink-kubernetes-operator-parent:1.1-SNAPSHOT: Could not 
transfer artifact org.apache:apache:pom:23 from/to central 
(https://repo.maven.apache.org/maven2): transfer failed for 
https://repo.maven.apache.org/maven2/org/apache/apache/23/apache-23.pom and 
'parent.relativePath' points at wrong local POM @ line 22, column 13: Connect 
to repo.maven.apache.org:443 [repo.maven.apache.org/199.232.80.215] failed: 
Connection refused (Connection refused) -> [Help 2]
   #14 76.22 [ERROR]
   #14 76.22 [ERROR] To see the full stack trace of the errors, re-run Maven 
with the -e switch.
   #14 76.23 [ERROR] Re-run Maven using the -X switch to enable full debug 
logging.
   #14 76.23 [ERROR]
   #14 76.23 [ERROR] For more information about the errors and possible 
solutions, please read the following articles:
   #14 76.23 [ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
   #14 76.23 [ERROR] [Help 2] 
http://cwiki.apache.org/confluence/display/MAVEN/UnresolvableModelException
   --
   executor failed running [/bin/sh -c mvn -ntp clean install -pl 
!flink-kubernetes-docs -DskipTests=$SKIP_TESTS]: exit code: 1
   ```
   
   I think you missed copying the new module dirs in the Dockerfile for the 
build.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-24229) [FLIP-171] DynamoDB implementation of Async Sink

2022-07-15 Thread Danny Cranmer (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17567340#comment-17567340
 ] 

Danny Cranmer commented on FLINK-24229:
---

I have published the FLIP [1], I look forward to your feedback.

[1] 
https://cwiki.apache.org/confluence/display/FLINK/FLIP-252%3A+Amazon+DynamoDB+Sink+Connector

> [FLIP-171] DynamoDB implementation of Async Sink
> 
>
> Key: FLINK-24229
> URL: https://issues.apache.org/jira/browse/FLINK-24229
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common
>Reporter: Zichen Liu
>Assignee: Yuri Gusev
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.16.0
>
>
> h2. Motivation
> *User stories:*
>  As a Flink user, I’d like to use DynamoDB as sink for my data pipeline.
> *Scope:*
>  * Implement an asynchronous sink for DynamoDB by inheriting the 
> AsyncSinkBase class. The implementation can for now reside in its own module 
> in flink-connectors.
>  * Implement an asynchronous sink writer for DynamoDB by extending the 
> AsyncSinkWriter. The implementation must deal with failed requests and retry 
> them using the {{requeueFailedRequestEntry}} method. If possible, the 
> implementation should batch multiple requests (PutRecordsRequestEntry 
> objects) to Firehose for increased throughput. The implemented Sink Writer 
> will be used by the Sink class that will be created as part of this story.
>  * Java / code-level docs.
>  * End to end testing: add tests that hit a real AWS instance. (How to best 
> donate resources to the Flink project to allow this to happen?)
> h2. References
> More details to be found 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-171%3A+Async+Sink]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] trushev commented on pull request #20281: [FLINK-28567][table-planner] Introduce predicate inference from one side of join to the other

2022-07-15 Thread GitBox


trushev commented on PR #20281:
URL: https://github.com/apache/flink/pull/20281#issuecomment-1185760075

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] qingwei91 commented on pull request #20235: [Flink 14101][Connectors][Jdbc] SQL Server dialect

2022-07-15 Thread GitBox


qingwei91 commented on PR #20235:
URL: https://github.com/apache/flink/pull/20235#issuecomment-1185740486

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] qingwei91 commented on a diff in pull request #20140: [Flink 16024][Connector][JDBC] Support FilterPushdown

2022-07-15 Thread GitBox


qingwei91 commented on code in PR #20140:
URL: https://github.com/apache/flink/pull/20140#discussion_r922363732


##
flink-connectors/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/table/JdbcFilterPushdownVisitor.java:
##
@@ -53,7 +52,6 @@
  * Visitor to walk a Expression AST. Produces a String that can be used to 
pushdown the filter,
  * return Optional.empty() if we cannot pushdown the filter.
  */
-@Public

Review Comment:
   Hi @JingGe, thanks for the comment.
   
   This is a newly added file, and in my latest commit I marked it as 
PublicEvolving: 
https://github.com/apache/flink/pull/20140/commits/6c97da0eb0d27774ed9954fbabed7b514ac8ce02
   
   Do you mean we cannot expose a new API? 
   For instance, I added this here: 
https://github.com/apache/flink/pull/20140/files#diff-ae60653ffe2ac890a3c1b01da41405bcc4e6913949176c36edc009df5090c38fR157,
 which adds a new method and a new return type to `JdbcDialect` (which is 
PublicEvolving), because filter pushdown might differ across JDBC dialects. Is 
this approach a problem?
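   
   To illustrate the kind of addition I mean, here is a rough sketch of a 
dialect-level pushdown hook. The interface name, method name and signature below 
are made up for illustration only; they are not the exact ones added in this PR:
   
   ```java
   // Hypothetical sketch only: a dialect hook that renders a filter as a SQL
   // predicate string, returning Optional.empty() when pushdown is not possible.
   import java.io.Serializable;
   import java.util.Optional;
   
   public interface HypotheticalDialectFilterPushdown extends Serializable {
   
       /**
        * Tries to render a filter expression as a dialect-specific predicate.
        * Returns Optional.empty() if this dialect cannot push the filter down.
        */
       Optional<String> tryConvertFilterToSql(String expressionSummary);
   }
   ```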



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-table-store] LadyForest closed pull request #219: [FLINK-28573] Nested type will lose nullability when converting from TableSchema

2022-07-15 Thread GitBox


LadyForest closed pull request #219: [FLINK-28573] Nested type will lose 
nullability when converting from TableSchema
URL: https://github.com/apache/flink-table-store/pull/219


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-28573) Nested type will lose nullability when converting from TableSchema

2022-07-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-28573:
---
Labels: pull-request-available  (was: )

> Nested type will lose nullability when converting from TableSchema
> --
>
> Key: FLINK-28573
> URL: https://issues.apache.org/jira/browse/FLINK-28573
> Project: Flink
>  Issue Type: Bug
>  Components: Table Store
>Affects Versions: table-store-0.2.0
>Reporter: Jane Chan
>Priority: Major
>  Labels: pull-request-available
> Fix For: table-store-0.2.0
>
>
> E.g. ArrayDataType, MultisetDataType etc



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-table-store] LadyForest opened a new pull request, #219: [FLINK-28573] Nested type will lose nullability when converting from TableSchema

2022-07-15 Thread GitBox


LadyForest opened a new pull request, #219:
URL: https://github.com/apache/flink-table-store/pull/219

   Fix the nullability loss for `ArrayDataType`, `MultisetDataType`, 
`MapDataType` and `RowDataType`


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-28571) Add Chi-squared test as Transformer to ml.feature

2022-07-15 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-28571:
---
Labels: pull-request-available  (was: )

> Add Chi-squared test as Transformer to ml.feature
> -
>
> Key: FLINK-28571
> URL: https://issues.apache.org/jira/browse/FLINK-28571
> Project: Flink
>  Issue Type: New Feature
>  Components: Library / Machine Learning
>Reporter: Simon Tao
>Priority: Major
>  Labels: pull-request-available
>
> Pearson's chi-squared 
> test:https://en.wikipedia.org/wiki/Pearson%27s_chi-squared_test
> For more information on 
> chi-squared:http://en.wikipedia.org/wiki/Chi-squared_test



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] afedulov commented on pull request #20127: [FLINK-26270] Flink SQL write data to kafka by CSV format , whether d…

2022-07-15 Thread GitBox


afedulov commented on PR #20127:
URL: https://github.com/apache/flink/pull/20127#issuecomment-1185660987

   Hi @sunshineJK, almost there. The comment on the option still needs to be 
fixed. The grammar is off (**e**nable**D** -> **E**nables), "of data of type 
Bigdecimal" -> "of BigDecimal data type", the data type name needs fixing 
(Bigdecimal -> Big**D**ecimal), "as scientific notation" -> "**in** scientific 
notation", and the comment is in general a bit too verbose. Did you consider the 
variant that I proposed in the last review?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-ml] taosiyuan163 opened a new pull request, #132: [FLINK-28571]Add Chi-squared test as Transformer to ml.feature

2022-07-15 Thread GitBox


taosiyuan163 opened a new pull request, #132:
URL: https://github.com/apache/flink-ml/pull/132

   ### What is the purpose of the change
   
   Add Chi-squared test as Transformer to ml.feature.
   
   The chi-square statistic is a useful tool for understanding the relationship 
between two categorical variables: it gives us a way to quantify and assess the 
strength of association for a given pair of categorical variables.
   
   Pearson's chi-squared 
test:https://en.wikipedia.org/wiki/Pearson%27s_chi-squared_test
   For more information on 
chi-squared:http://en.wikipedia.org/wiki/Chi-squared_test
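   
   To make the statistic concrete, here is a minimal standalone sketch of 
Pearson's chi-squared computation over an observed contingency table. This is 
only an illustration of the formula, not the Transformer implementation in this 
PR, and the class name is made up:
   
   ```java
   // Minimal sketch: Pearson's chi-squared statistic for a contingency table.
   // Assumes every row and column total is positive.
   public class ChiSquaredSketch {
   
       public static double chiSquared(long[][] observed) {
           int rows = observed.length;
           int cols = observed[0].length;
           double total = 0;
           double[] rowSums = new double[rows];
           double[] colSums = new double[cols];
           for (int i = 0; i < rows; i++) {
               for (int j = 0; j < cols; j++) {
                   rowSums[i] += observed[i][j];
                   colSums[j] += observed[i][j];
                   total += observed[i][j];
               }
           }
           double statistic = 0;
           for (int i = 0; i < rows; i++) {
               for (int j = 0; j < cols; j++) {
                   // Expected count under the independence hypothesis.
                   double expected = rowSums[i] * colSums[j] / total;
                   double diff = observed[i][j] - expected;
                   statistic += diff * diff / expected;
               }
           }
           return statistic;
       }
   
       public static void main(String[] args) {
           long[][] observed = {{20, 30}, {30, 20}};
           System.out.println(chiSquared(observed)); // prints 4.0 for this table
       }
   }
   ```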
   
   ### Brief change log
   Add Chi-squared test as Transformer to ml.feature.
   
   ### Verifying this change
   The changes are tested by unit tests in ChiSqTestTransformerTest.
   
   ### Does this pull request potentially affect one of the following parts:
   Dependencies (does it add or upgrade a dependency): (no)
   The public API, i.e., is any changed class annotated with @public(Evolving): 
(no)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-28573) Nested type will lose nullability when converting from TableSchema

2022-07-15 Thread Jane Chan (Jira)
Jane Chan created FLINK-28573:
-

 Summary: Nested type will lose nullability when converting from 
TableSchema
 Key: FLINK-28573
 URL: https://issues.apache.org/jira/browse/FLINK-28573
 Project: Flink
  Issue Type: Bug
  Components: Table Store
Affects Versions: table-store-0.2.0
Reporter: Jane Chan
 Fix For: table-store-0.2.0


E.g. ArrayDataType, MultisetDataType etc



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] maosuhan commented on a diff in pull request #14376: [FLINK-18202][PB format] New Format of protobuf

2022-07-15 Thread GitBox


maosuhan commented on code in PR #14376:
URL: https://github.com/apache/flink/pull/14376#discussion_r922247078


##
flink-formats/flink-protobuf/pom.xml:
##
@@ -0,0 +1,123 @@
+
+
+http://maven.apache.org/POM/4.0.0;
+xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance;
+xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/maven-v4_0_0.xsd;>
+
+   4.0.0
+
+   
+   org.apache.flink
+   flink-formats
+   1.16-SNAPSHOT
+   
+
+   flink-protobuf
+   Flink : Formats : Protobuf
+
+   jar
+
+   
+   
+   3.0.11
+   
+
+   
+   
+
+   
+   org.apache.flink
+   flink-core
+   ${project.version}
+   provided
+   
+
+   
+   org.apache.flink
+   flink-table-common
+   ${project.version}
+   provided
+   
+
+   
+   com.google.protobuf
+   protobuf-java
+   ${protoc.version}
+   
+
+   
+   org.codehaus.janino
+   janino
+   
+   ${janino.version}
+   provided
+   
+
+   
+   
+   
+   org.apache.flink
+   
flink-table-planner_${scala.binary.version}
+   ${project.version}
+   test
+   
+
+   
+   
+   org.apache.flink
+   
flink-table-planner_${scala.binary.version}
+   ${project.version}
+   test
+   test-jar
+   
+
+   
+   
+   org.apache.flink
+   flink-test-utils
+   ${project.version}
+   test
+   
+   
+
+   
+   
+   
+   com.github.os72
+   protoc-jar-maven-plugin
+   3.11.4
+   
+   
+   generate-sources
+   
+   run
+   
+   
+   
${protoc.version}

Review Comment:
   I have moved the output directory to target/test-proto-sources



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] maosuhan commented on a diff in pull request #14376: [FLINK-18202][PB format] New Format of protobuf

2022-07-15 Thread GitBox


maosuhan commented on code in PR #14376:
URL: https://github.com/apache/flink/pull/14376#discussion_r922244413


##
flink-formats/flink-protobuf/src/main/java/org/apache/flink/formats/protobuf/util/PbFormatUtils.java:
##
@@ -0,0 +1,135 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.formats.protobuf.util;
+
+import org.apache.flink.formats.protobuf.PbConstant;
+import org.apache.flink.table.types.logical.ArrayType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.MapType;
+
+import com.google.protobuf.Descriptors;
+
+/** Protobuf function util. */
+public class PbFormatUtils {
+
+/**
+ * protobuf code has a bug that, f_abc_7d will be converted to fAbc7d in 
{@link
+ * Descriptors.FieldDescriptor#getJsonName()}, but actually we need fAbc7D.
+ */
+public static String fieldNameToJsonName(String name) {

Review Comment:
   protobuf-java does contain such a util function, and I have added 
ProtobufInternalUtils#underScoreToCamelCase to access it. I think this solves 
the issue perfectly.
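   
   For context, the naming rule under discussion (f_abc_7d -> fAbc7D, i.e. the 
character following a digit is also upper-cased) can be sketched in isolation as 
follows. This is only an illustration of the rule, not the protobuf-java helper 
itself:
   
   ```java
   // Illustrative sketch of underscore-to-camel-case conversion where the
   // character following a digit is also upper-cased (f_abc_7d -> fAbc7D).
   public class UnderscoreToCamelCaseSketch {
   
       public static String toCamelCase(String name) {
           StringBuilder sb = new StringBuilder();
           boolean upperNext = false;
           for (char c : name.toCharArray()) {
               if (c == '_') {
                   upperNext = true; // the next character starts a new "word"
               } else if (upperNext) {
                   sb.append(Character.toUpperCase(c));
                   upperNext = Character.isDigit(c);
               } else {
                   sb.append(c);
                   upperNext = Character.isDigit(c);
               }
           }
           return sb.toString();
       }
   
       public static void main(String[] args) {
           System.out.println(toCamelCase("f_abc_7d")); // prints fAbc7D
       }
   }
   ```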



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] maosuhan commented on a diff in pull request #14376: [FLINK-18202][PB format] New Format of protobuf

2022-07-15 Thread GitBox


maosuhan commented on code in PR #14376:
URL: https://github.com/apache/flink/pull/14376#discussion_r922241944


##
flink-formats/flink-protobuf/pom.xml:
##
@@ -0,0 +1,123 @@
+
+
+http://maven.apache.org/POM/4.0.0;
+xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance;
+xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/maven-v4_0_0.xsd;>
+
+   4.0.0
+
+   
+   org.apache.flink
+   flink-formats
+   1.16-SNAPSHOT
+   
+
+   flink-protobuf
+   Flink : Formats : Protobuf
+
+   jar
+
+   
+   
+   3.0.11
+   
+
+   
+   
+
+   
+   org.apache.flink
+   flink-core
+   ${project.version}
+   provided
+   
+
+   
+   org.apache.flink
+   flink-table-common
+   ${project.version}
+   provided
+   
+
+   
+   com.google.protobuf
+   protobuf-java
+   ${protoc.version}
+   
+
+   
+   org.codehaus.janino
+   janino
+   
+   ${janino.version}
+   provided
+   
+
+   
+   
+   
+   org.apache.flink
+   
flink-table-planner_${scala.binary.version}
+   ${project.version}
+   test
+   
+
+   
+   
+   org.apache.flink
+   
flink-table-planner_${scala.binary.version}

Review Comment:
   One is the regular jar and the other is the test jar; there will be an exception 
if I remove either of them.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] maosuhan commented on a diff in pull request #14376: [FLINK-18202][PB format] New Format of protobuf

2022-07-15 Thread GitBox


maosuhan commented on code in PR #14376:
URL: https://github.com/apache/flink/pull/14376#discussion_r922239131


##
flink-formats/flink-sql-protobuf/pom.xml:
##
@@ -0,0 +1,86 @@
+
+
+http://maven.apache.org/POM/4.0.0; 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance;
+   xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/maven-v4_0_0.xsd;>
+   
+   4.0.0
+
+   
+   org.apache.flink
+   flink-formats
+   1.16-SNAPSHOT
+   
+
+   flink-sql-protobuf
+   Flink : Formats : SQL Protobuf
+
+   jar
+
+   
+   
+   org.apache.flink
+   flink-parquet
+   ${project.version}
+   
+   
+
+   
+   
+   
+   org.apache.maven.plugins
+   maven-shade-plugin
+   
+   
+   shade-flink
+   package
+   
+   shade
+   
+   
+   
+   
+   
org.apache.flink:flink-protobuf
+   
com.google.protobuf:protobuf-java
+   
+   
+   
+   
+   
+   
+
+   

Review Comment:
   Removed



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] maosuhan commented on a diff in pull request #14376: [FLINK-18202][PB format] New Format of protobuf

2022-07-15 Thread GitBox


maosuhan commented on code in PR #14376:
URL: https://github.com/apache/flink/pull/14376#discussion_r922238897


##
flink-formats/flink-protobuf/src/main/java/org/apache/flink/formats/protobuf/util/PbCodegenUtils.java:
##
@@ -0,0 +1,270 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.formats.protobuf.util;
+
+import org.apache.flink.api.common.InvalidProgramException;
+import org.apache.flink.formats.protobuf.PbCodegenException;
+import org.apache.flink.formats.protobuf.PbConstant;
+import org.apache.flink.formats.protobuf.PbFormatContext;
+import org.apache.flink.formats.protobuf.serialize.PbCodegenSerializeFactory;
+import org.apache.flink.formats.protobuf.serialize.PbCodegenSerializer;
+import org.apache.flink.table.types.logical.LogicalType;
+
+import com.google.protobuf.Descriptors.FieldDescriptor;
+import org.codehaus.janino.SimpleCompiler;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/** Codegen utils only used in protobuf format. */
+public class PbCodegenUtils {
+private static final Logger LOG = 
LoggerFactory.getLogger(PbCodegenUtils.class);
+
+/**
+ * @param flinkContainerCode code phrase which represent flink container 
type like row/array in
+ * codegen sections
+ * @param index the index number in flink container type
+ * @param eleType the element type
+ */
+public static String flinkContainerElementCode(
+String flinkContainerCode, String index, LogicalType eleType) {
+switch (eleType.getTypeRoot()) {
+case INTEGER:
+return flinkContainerCode + ".getInt(" + index + ")";
+case BIGINT:
+return flinkContainerCode + ".getLong(" + index + ")";
+case FLOAT:
+return flinkContainerCode + ".getFloat(" + index + ")";
+case DOUBLE:
+return flinkContainerCode + ".getDouble(" + index + ")";
+case BOOLEAN:
+return flinkContainerCode + ".getBoolean(" + index + ")";
+case VARCHAR:
+case CHAR:
+return flinkContainerCode + ".getString(" + index + ")";
+case VARBINARY:
+case BINARY:
+return flinkContainerCode + ".getBinary(" + index + ")";
+case ROW:
+int size = eleType.getChildren().size();
+return flinkContainerCode + ".getRow(" + index + ", " + size + 
")";
+case MAP:
+return flinkContainerCode + ".getMap(" + index + ")";
+case ARRAY:
+return flinkContainerCode + ".getArray(" + index + ")";
+default:
+throw new IllegalArgumentException("Unsupported data type in 
schema: " + eleType);
+}
+}
+
+/**
+ * Get java type str from {@link FieldDescriptor} which directly fetched 
from protobuf object.
+ *
+ * @return The returned code phrase will be used as java type str in 
codegen sections.
+ * @throws PbCodegenException
+ */
+public static String getTypeStrFromProto(FieldDescriptor fd, boolean 
isList, String outerPrefix)
+throws PbCodegenException {
+String typeStr;
+switch (fd.getJavaType()) {
+case MESSAGE:
+if (fd.isMapField()) {
+// map
+FieldDescriptor keyFd =
+
fd.getMessageType().findFieldByName(PbConstant.PB_MAP_KEY_NAME);
+FieldDescriptor valueFd =
+
fd.getMessageType().findFieldByName(PbConstant.PB_MAP_VALUE_NAME);
+// key and value cannot be repeated
+String keyTypeStr = getTypeStrFromProto(keyFd, false, 
outerPrefix);
+String valueTypeStr = getTypeStrFromProto(valueFd, false, 
outerPrefix);
+typeStr = "Map<" + keyTypeStr + "," + valueTypeStr + ">";
+} else {
+// simple message
+typeStr = 
PbFormatUtils.getFullJavaName(fd.getMessageType(), outerPrefix);
+}
+break;
+case INT:
+

[GitHub] [flink] maosuhan commented on a diff in pull request #14376: [FLINK-18202][PB format] New Format of protobuf

2022-07-15 Thread GitBox


maosuhan commented on code in PR #14376:
URL: https://github.com/apache/flink/pull/14376#discussion_r922237664


##
flink-formats/flink-protobuf/src/main/java/org/apache/flink/formats/protobuf/deserialize/ProtoToRowConverter.java:
##
@@ -0,0 +1,137 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.formats.protobuf.deserialize;
+
+import org.apache.flink.formats.protobuf.PbCodegenException;
+import org.apache.flink.formats.protobuf.PbConstant;
+import org.apache.flink.formats.protobuf.PbFormatConfig;
+import org.apache.flink.formats.protobuf.PbFormatContext;
+import org.apache.flink.formats.protobuf.util.PbCodegenAppender;
+import org.apache.flink.formats.protobuf.util.PbCodegenUtils;
+import org.apache.flink.formats.protobuf.util.PbFormatUtils;
+import org.apache.flink.table.data.ArrayData;
+import org.apache.flink.table.data.GenericArrayData;
+import org.apache.flink.table.data.GenericMapData;
+import org.apache.flink.table.data.GenericRowData;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.data.binary.BinaryStringData;
+import org.apache.flink.table.types.logical.RowType;
+
+import com.google.protobuf.ByteString;
+import com.google.protobuf.Descriptors;
+import com.google.protobuf.Descriptors.FileDescriptor.Syntax;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.lang.reflect.Method;
+import java.nio.file.Files;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.UUID;
+
+/**
+ * {@link ProtoToRowConverter} can convert binary protobuf message data to 
flink row data by codegen
+ * process.
+ */
+public class ProtoToRowConverter {
+private static final Logger LOG = 
LoggerFactory.getLogger(ProtoToRowConverter.class);
+private final Method parseFromMethod;
+private final Method decodeMethod;
+
+public ProtoToRowConverter(RowType rowType, PbFormatConfig formatConfig)
+throws PbCodegenException {
+try {
+String outerPrefix =
+
PbFormatUtils.getOuterProtoPrefix(formatConfig.getMessageClassName());
+Descriptors.Descriptor descriptor =
+
PbFormatUtils.getDescriptor(formatConfig.getMessageClassName());
+Class messageClass =
+Class.forName(
+formatConfig.getMessageClassName(),
+true,
+Thread.currentThread().getContextClassLoader());
+String fullMessageClassName = 
PbFormatUtils.getFullJavaName(descriptor, outerPrefix);
+if (descriptor.getFile().getSyntax() == Syntax.PROTO3) {
+// pb3 always read default values
+formatConfig =
+new PbFormatConfig(
+formatConfig.getMessageClassName(),
+formatConfig.isIgnoreParseErrors(),
+true,
+formatConfig.getWriteNullStringLiterals());
+}
+PbCodegenAppender codegenAppender = new PbCodegenAppender();
+PbFormatContext pbFormatContext = new PbFormatContext(outerPrefix, 
formatConfig);
+String uuid = UUID.randomUUID().toString().replaceAll("\\-", "");
+String generatedClassName = "GeneratedProtoToRow_" + uuid;
+String generatedPackageName = 
ProtoToRowConverter.class.getPackage().getName();
+codegenAppender.appendLine("package " + generatedPackageName);
+codegenAppender.appendLine("import " + RowData.class.getName());
+codegenAppender.appendLine("import " + ArrayData.class.getName());
+codegenAppender.appendLine("import " + 
BinaryStringData.class.getName());
+codegenAppender.appendLine("import " + 
GenericRowData.class.getName());
+codegenAppender.appendLine("import " + 
GenericMapData.class.getName());
+codegenAppender.appendLine("import " + 
GenericArrayData.class.getName());
+codegenAppender.appendLine("import " + ArrayList.class.getName());
+   

[GitHub] [flink] maosuhan commented on a diff in pull request #14376: [FLINK-18202][PB format] New Format of protobuf

2022-07-15 Thread GitBox


maosuhan commented on code in PR #14376:
URL: https://github.com/apache/flink/pull/14376#discussion_r922237403


##
flink-formats/flink-protobuf/src/main/java/org/apache/flink/formats/protobuf/PbFormatConfig.java:
##
@@ -0,0 +1,118 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.formats.protobuf;
+
+import java.io.Serializable;
+import java.util.Objects;
+
+import static 
org.apache.flink.formats.protobuf.PbFormatOptions.IGNORE_PARSE_ERRORS;
+import static 
org.apache.flink.formats.protobuf.PbFormatOptions.READ_DEFAULT_VALUES;
+import static 
org.apache.flink.formats.protobuf.PbFormatOptions.WRITE_NULL_STRING_LITERAL;
+
+/** Config of protobuf configs. */
+public class PbFormatConfig implements Serializable {

Review Comment:
   The number of configs in the current options is small, and I have added a prefix 
to indicate whether each option is for read or write, and also separated them into 
different sections.
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] maosuhan commented on a diff in pull request #14376: [FLINK-18202][PB format] New Format of protobuf

2022-07-15 Thread GitBox


maosuhan commented on code in PR #14376:
URL: https://github.com/apache/flink/pull/14376#discussion_r922235724


##
flink-formats/flink-protobuf/src/main/java/org/apache/flink/formats/protobuf/PbFormatFactory.java:
##
@@ -0,0 +1,93 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.formats.protobuf;
+
+import org.apache.flink.api.common.serialization.DeserializationSchema;
+import org.apache.flink.api.common.serialization.SerializationSchema;
+import org.apache.flink.configuration.ConfigOption;
+import org.apache.flink.configuration.ReadableConfig;
+import org.apache.flink.table.connector.format.DecodingFormat;
+import org.apache.flink.table.connector.format.EncodingFormat;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.factories.DeserializationFormatFactory;
+import org.apache.flink.table.factories.DynamicTableFactory;
+import org.apache.flink.table.factories.FactoryUtil;
+import org.apache.flink.table.factories.SerializationFormatFactory;
+
+import java.util.HashSet;
+import java.util.Set;
+
+/**
+ * Table format factory for providing configured instances of Protobuf to 
RowData {@link
+ * SerializationSchema} and {@link DeserializationSchema}.
+ */
+public class PbFormatFactory implements DeserializationFormatFactory, 
SerializationFormatFactory {
+
+public static final String IDENTIFIER = "protobuf";
+
+@Override
+public DecodingFormat> createDecodingFormat(
+DynamicTableFactory.Context context, ReadableConfig formatOptions) 
{
+FactoryUtil.validateFactoryOptions(this, formatOptions);
+return new PbDecodingFormat(buildConfig(formatOptions));
+}
+
+@Override
+public EncodingFormat> createEncodingFormat(
+DynamicTableFactory.Context context, ReadableConfig formatOptions) 
{
+FactoryUtil.validateFactoryOptions(this, formatOptions);
+return new PbEncodingFormat(buildConfig(formatOptions));
+}
+
+private static PbFormatConfig buildConfig(ReadableConfig formatOptions) {
+PbFormatConfig.PbFormatConfigBuilder configBuilder =
+new PbFormatConfig.PbFormatConfigBuilder();
+
configBuilder.messageClassName(formatOptions.get(PbFormatOptions.MESSAGE_CLASS_NAME));
+formatOptions
+.getOptional(PbFormatOptions.IGNORE_PARSE_ERRORS)
+.ifPresent(configBuilder::ignoreParseErrors);
+formatOptions
+.getOptional(PbFormatOptions.READ_DEFAULT_VALUES)
+.ifPresent(configBuilder::readDefaultValues);
+formatOptions
+.getOptional(PbFormatOptions.WRITE_NULL_STRING_LITERAL)
+.ifPresent(configBuilder::writeNullStringLiterals);
+return configBuilder.build();
+}
+
+@Override
+public String factoryIdentifier() {
+return IDENTIFIER;
+}
+
+@Override
+public Set> requiredOptions() {
+Set> result = new HashSet<>();
+result.add(PbFormatOptions.MESSAGE_CLASS_NAME);
+return result;
+}
+
+@Override
+public Set> optionalOptions() {

Review Comment:
   Fixed, and the test code is in ProtobufSQLITCaseTest#testSinkWithNullLiteral.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] XComp commented on a diff in pull request #4910: [FLINK-7784] [kafka-producer] Don't fail TwoPhaseCommitSinkFunction when failing to commit during job recovery

2022-07-15 Thread GitBox


XComp commented on code in PR #4910:
URL: https://github.com/apache/flink/pull/4910#discussion_r97177


##
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/TwoPhaseCommitSinkFunction.java:
##
@@ -312,20 +347,104 @@ public void 
initializeState(FunctionInitializationContext context) throws Except
}
this.pendingCommitTransactions.clear();
 
-   currentTransaction = beginTransaction();
-   LOG.debug("{} - started new transaction '{}'", name(), 
currentTransaction);
+   currentTransactionHolder = beginTransactionInternal();
+   LOG.debug("{} - started new transaction '{}'", name(), 
currentTransactionHolder);
+   }
+
+   /**
+* This method must be the only place to call {@link 
#beginTransaction()} to ensure that the
+* {@link TransactionHolder} is created at the same time.
+*/
+   private TransactionHolder beginTransactionInternal() throws 
Exception {
+   return new TransactionHolder<>(beginTransaction(), 
clock.millis());
+   }
+
+   /**
+* This method must be the only place to call {@link 
#recoverAndCommit(Object)} to ensure that
+* the configuration parameters {@link #transactionTimeout} and
+* {@link #ignoreFailuresAfterTransactionTimeout} are respected.
+*/
+   private void recoverAndCommitInternal(TransactionHolder 
transactionHolder) {
+   try {
+   logWarningIfTimeoutAlmostReached(transactionHolder);
+   recoverAndCommit(transactionHolder.handle);
+   } catch (final Exception e) {

Review Comment:
   Cool, thanks for the clarification. That makes sense. And thanks for the pointer 
to the sneaky throw example.
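   
   (For other readers: below is a minimal sketch of the "sneaky throw" idiom 
referenced above, i.e. rethrowing a checked exception without declaring it. This 
is only the general Java trick, not code from this PR.)
   
   ```java
   // Minimal sketch of the "sneaky throw" idiom: a checked exception is rethrown
   // without being declared, because the generic cast is erased at runtime.
   public class SneakyThrowSketch {
   
       @SuppressWarnings("unchecked")
       private static <E extends Throwable> void sneakyThrow(Throwable t) throws E {
           throw (E) t; // unchecked cast; erasure means no runtime check happens
       }
   
       // Declares no checked exception, yet rethrows the original IOException.
       public static void rethrowUnchecked(Throwable t) {
           SneakyThrowSketch.<RuntimeException>sneakyThrow(t);
       }
   
       public static void main(String[] args) {
           try {
               rethrowUnchecked(new java.io.IOException("boom"));
           } catch (Exception e) {
               System.out.println(e); // prints java.io.IOException: boom
           }
       }
   }
   ```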



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-24229) [FLIP-171] DynamoDB implementation of Async Sink

2022-07-15 Thread Danny Cranmer (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17567247#comment-17567247
 ] 

Danny Cranmer commented on FLINK-24229:
---

[~prasaanth07] unfortunately I cannot commit to anything yet, the connector 
still needs to be approved by the Flink community. 

> [FLIP-171] DynamoDB implementation of Async Sink
> 
>
> Key: FLINK-24229
> URL: https://issues.apache.org/jira/browse/FLINK-24229
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common
>Reporter: Zichen Liu
>Assignee: Yuri Gusev
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.16.0
>
>
> h2. Motivation
> *User stories:*
>  As a Flink user, I’d like to use DynamoDB as sink for my data pipeline.
> *Scope:*
>  * Implement an asynchronous sink for DynamoDB by inheriting the 
> AsyncSinkBase class. The implementation can for now reside in its own module 
> in flink-connectors.
>  * Implement an asynchronous sink writer for DynamoDB by extending the 
> AsyncSinkWriter. The implementation must deal with failed requests and retry 
> them using the {{requeueFailedRequestEntry}} method. If possible, the 
> implementation should batch multiple requests (PutRecordsRequestEntry 
> objects) to Firehose for increased throughput. The implemented Sink Writer 
> will be used by the Sink class that will be created as part of this story.
>  * Java / code-level docs.
>  * End to end testing: add tests that hit a real AWS instance. (How to best 
> donate resources to the Flink project to allow this to happen?)
> h2. References
> More details to be found 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-171%3A+Async+Sink]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] RyanSkraba commented on pull request #20267: [FLINK-28542][tests][JUnit5 Migration] FileSystemBehaviorTestSuite

2022-07-15 Thread GitBox


RyanSkraba commented on PR #20267:
URL: https://github.com/apache/flink/pull/20267#issuecomment-1185538176

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] lsyldliu commented on a diff in pull request #20001: [FLINK-27659][table] Planner support to use jar which is registered by 'CREATE FUNTION USING JAR' syntax

2022-07-15 Thread GitBox


lsyldliu commented on code in PR #20001:
URL: https://github.com/apache/flink/pull/20001#discussion_r922154703


##
flink-core/src/main/java/org/apache/flink/util/MutableURLClassLoader.java:
##
@@ -31,10 +35,38 @@ public abstract class MutableURLClassLoader extends 
URLClassLoader {
 ClassLoader.registerAsParallelCapable();
 }
 
+/**
+ * Creates a new instance of MutableURLClassLoader subclass for the 
specified URLs, parent class
+ * loader and configuration.
+ */
+public static MutableURLClassLoader newInstance(
+final URL[] urls, final ClassLoader parent, final Configuration 
configuration) {
+final String[] alwaysParentFirstLoaderPatterns =
+CoreOptions.getParentFirstLoaderPatterns(configuration);
+final String classLoaderResolveOrder =
+configuration.getString(CoreOptions.CLASSLOADER_RESOLVE_ORDER);
+final FlinkUserCodeClassLoaders.ResolveOrder resolveOrder =
+
FlinkUserCodeClassLoaders.ResolveOrder.fromString(classLoaderResolveOrder);
+final boolean checkClassloaderLeak =
+configuration.getBoolean(CoreOptions.CHECK_LEAKED_CLASSLOADER);
+return FlinkUserCodeClassLoaders.create(
+resolveOrder,
+urls,
+parent,
+alwaysParentFirstLoaderPatterns,
+NOOP_EXCEPTION_HANDLER,
+checkClassloaderLeak);
+}
+
 public MutableURLClassLoader(URL[] urls, ClassLoader parent) {
 super(urls, parent);
 }
 
+@Override
+protected Class loadClass(String name, boolean resolve) throws 
ClassNotFoundException {
+return super.loadClass(name, resolve);
+}
+

Review Comment:
   But here it is not the child class calling the parent class's method; it calls 
the method of its member classloader.
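   
   To make the distinction concrete, here is a rough sketch of what I mean by 
delegating to a member classloader (composition) instead of calling super. The 
class names are made up for illustration; this is not the actual Flink code:
   
   ```java
   // Illustrative sketch: class loading is forwarded to a member class loader,
   // not to the superclass via super.loadClass.
   import java.net.URL;
   import java.net.URLClassLoader;
   
   public class DelegatingClassLoaderSketch extends ClassLoader {
   
       // The wrapped loader is a member, not the superclass.
       private final URLClassLoader inner;
   
       public DelegatingClassLoaderSketch(URL[] urls, ClassLoader parent) {
           super(parent);
           this.inner = new URLClassLoader(urls, parent);
       }
   
       @Override
       protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
           // Forward to the member class loader instead of calling super.loadClass.
           return inner.loadClass(name);
       }
   }
   ```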



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] lsyldliu commented on a diff in pull request #20001: [FLINK-27659][table] Planner support to use jar which is registered by 'CREATE FUNTION USING JAR' syntax

2022-07-15 Thread GitBox


lsyldliu commented on code in PR #20001:
URL: https://github.com/apache/flink/pull/20001#discussion_r922152023


##
flink-table/flink-table-api-java/src/test/java/org/apache/flink/table/resource/ResourceManagerTest.java:
##
@@ -83,9 +82,49 @@ public void testRegisterResource() throws Exception {
 ClassNotFoundException.class,
 () -> Class.forName(LOWER_UDF_CLASS, false, userClassLoader));
 
+ResourceUri resourceUri = new ResourceUri(ResourceType.JAR, 
udfJar.getPath());
 // register the same jar repeatedly
-resourceManager.registerResource(new ResourceUri(ResourceType.JAR, 
udfJar.getPath()));
-resourceManager.registerResource(new ResourceUri(ResourceType.JAR, 
udfJar.getPath()));
+resourceManager.registerJarResource(Arrays.asList(resourceUri, 
resourceUri));
+
+// assert resource infos
+Map expected =
+Collections.singletonMap(
+resourceUri, resourceManager.getURLFromPath(new 
Path(udfJar.getPath(;
+
+assertEquals(expected, resourceManager.getResources());
+
+// test load class
+final Class clazz1 = Class.forName(LOWER_UDF_CLASS, false, 
userClassLoader);
+final Class clazz2 = Class.forName(LOWER_UDF_CLASS, false, 
userClassLoader);

Review Comment:
   This follows the test in `FlinkUserCodeClassLoadersTest`.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] lsyldliu commented on a diff in pull request #20001: [FLINK-27659][table] Planner support to use jar which is registered by 'CREATE FUNTION USING JAR' syntax

2022-07-15 Thread GitBox


lsyldliu commented on code in PR #20001:
URL: https://github.com/apache/flink/pull/20001#discussion_r922151040


##
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/resource/ClientResourceManager.java:
##
@@ -0,0 +1,49 @@
+/*
+ *  Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.flink.table.client.resource;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.table.resource.ResourceManager;
+import org.apache.flink.table.resource.ResourceType;
+import org.apache.flink.table.resource.ResourceUri;
+import org.apache.flink.util.MutableURLClassLoader;
+
+import java.net.URL;
+
+/**
+ * This is only used by SqlClient, which expose {@code removeURL} method to 
support {@code REMOVE
+ * JAR} clause.
+ */
+@Internal
+public class ClientResourceManager extends ResourceManager {
+
+public ClientResourceManager(Configuration config, MutableURLClassLoader 
userClassLoader) {
+super(config, userClassLoader);
+}
+
+/**
+ * The method is only used to SqlClient for supporting remove jar syntax. 
SqlClient must
+ * guarantee also remove the jar from userClassLoader because it is {@code
+ * ClientMutableURLClassLoader}.
+ */
+public URL unregisterJarResource(String jarPath) {
+return resourceInfos.remove(new ResourceUri(ResourceType.JAR, 
jarPath));

Review Comment:
   We have already validated it in the `SessionContext#removeJar` method, so it is 
not needed here. BTW, we should check it here after 
[FLINK-27790](https://issues.apache.org/jira/browse/FLINK-27790), which will add 
support for removing remote jars.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-28572) FlinkSQL executes Table.execute multiple times on the same Table, and changing the Table.execute code position will produce different phenomena

2022-07-15 Thread Andrew Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Chan updated FLINK-28572:

Description: 
*The following code prints and inserts fine*

 
{code:java}
public static void main(String[] args) {
Configuration conf = new Configuration();
conf.setInteger("rest.port", 2000);
StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment(conf);
env.setParallelism(1);
StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
tEnv.executeSql("create table sensor(" +
" id string, " +
" ts bigint, " +
" vc int" +
")with(" +
" 'connector' = 'kafka', " +
" 'topic' = 's1', " +
" 'properties.bootstrap.servers' = 'hadoop162:9092', " +
" 'properties.group.id' = 'atguigu', " +
" 'scan.startup.mode' = 'latest-offset', " +
" 'format' = 'csv'" +
")");
Table result = tEnv.sqlQuery("select * from sensor");
tEnv.executeSql("create table s_out(" +
" id string, " +
" ts bigint, " +
" vc int" +
")with(" +
" 'connector' = 'kafka', " +
" 'topic' = 's2', " +
" 'properties.bootstrap.servers' = 'hadoop162:9092', " +
" 'format' = 'json', " +
" 'sink.partitioner' = 'round-robin' " +
")");
result.executeInsert("s_out");
result.execute().print();
}
{code}
 

—

*When the code that prints this line is moved up, it can be printed normally, 
but the insert statement is invalid, as follows*

 
{code:java}
public static void main(String[] args) {
Configuration conf = new Configuration();
conf.setInteger("rest.port", 2000);
StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment(conf);
env.setParallelism(1);
StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
tEnv.executeSql("create table sensor(" +
" id string, " +
" ts bigint, " +
" vc int" +
")with(" +
" 'connector' = 'kafka', " +
" 'topic' = 's1', " +
" 'properties.bootstrap.servers' = 'hadoop162:9092', " +
" 'properties.group.id' = 'atguigu', " +
" 'scan.startup.mode' = 'latest-offset', " +
" 'format' = 'csv'" +
")");
Table result = tEnv.sqlQuery("select * from sensor");
tEnv.executeSql("create table s_out(" +
" id string, " +
" ts bigint, " +
" vc int" +
")with(" +
" 'connector' = 'kafka', " +
" 'topic' = 's2', " +
" 'properties.bootstrap.servers' = 'hadoop162:9092', " +
" 'format' = 'json', " +
" 'sink.partitioner' = 'round-robin' " +
")");
result.execute().print();
result.executeInsert("s_out");
}
{code}
 

  was:
*The following code prints and inserts fine*

public static void main(String[] args) {
Configuration conf = new Configuration();
conf.setInteger("rest.port", 2000);
StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment(conf);
env.setParallelism(1);
StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
tEnv.executeSql("create table sensor(" +
" id string, " +
" ts bigint, " +
" vc int" +
")with(" +
" 'connector' = 'kafka', " +
" 'topic' = 's1', " +
" 'properties.bootstrap.servers' = 'hadoop162:9092', " +
" 'properties.group.id' = 'atguigu', " +
" 'scan.startup.mode' = 'latest-offset', " +
" 'format' = 'csv'" +
")");
Table result = tEnv.sqlQuery("select * from sensor");

tEnv.executeSql("create table s_out(" +
" id string, " +
" ts bigint, " +
" vc int" +
")with(" +
" 'connector' = 'kafka', " +
" 'topic' = 's2', " +
" 'properties.bootstrap.servers' = 'hadoop162:9092', " +
" 'format' = 'json', " +
" 'sink.partitioner' = 'round-robin' " +
")");
{color:#FF}result.executeInsert("s_out");{color}
{color:#FF} result.execute().print();{color}

}

---

*When the code that prints this line is moved up, it can be printed normally, 
but the insert statement is invalid, as follows*

public static void main(String[] args) {
Configuration conf = new Configuration();
conf.setInteger("rest.port", 2000);
StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment(conf);
env.setParallelism(1);

StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
tEnv.executeSql("create table sensor(" +
" id string, " +
" ts bigint, " +
" vc int" +
")with(" +
" 'connector' = 'kafka', " +
" 'topic' = 's1', " +
" 'properties.bootstrap.servers' = 'hadoop162:9092', " +
" 'properties.group.id' = 'atguigu', " +
" 'scan.startup.mode' = 'latest-offset', " +
" 'format' = 'csv'" +
")");

Table result = tEnv.sqlQuery("select * from sensor");

tEnv.executeSql("create table s_out(" +
" id string, " +
" ts bigint, " +
" vc int" +
")with(" +
" 'connector' = 'kafka', " +
" 'topic' = 's2', " +
" 'properties.bootstrap.servers' = 'hadoop162:9092', " +
" 'format' = 'json', " +
" 'sink.partitioner' = 'round-robin' " +
")");
{color:#ff}result.execute().print();{color}
{color:#ff} result.executeInsert("s_out");{color}
}


> FlinkSQL executes Table.execute multiple times on the same Table, and 
> changing the Table.execute code position will produce different phenomena
> 

[GitHub] [flink] lsyldliu commented on a diff in pull request #20001: [FLINK-27659][table] Planner support to use jar which is registered by 'CREATE FUNTION USING JAR' syntax

2022-07-15 Thread GitBox


lsyldliu commented on code in PR #20001:
URL: https://github.com/apache/flink/pull/20001#discussion_r922148716


##
flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/util/ClientMutableURLClassLoaderTest.java:
##
@@ -0,0 +1,145 @@
+/*
+ *  Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.flink.table.client.util;
+
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.core.testutils.CommonTestUtils;
+import org.apache.flink.util.ClientMutableURLClassLoader;
+import org.apache.flink.util.MutableURLClassLoader;
+import org.apache.flink.util.UserClassLoaderJarTestUtils;
+
+import org.junit.jupiter.api.BeforeAll;
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.io.TempDir;
+
+import java.io.File;
+import java.net.URL;
+import java.util.HashMap;
+import java.util.Map;
+
+import static 
org.apache.flink.table.client.gateway.utils.UserDefinedFunctions.GENERATED_LOWER_UDF_CLASS;
+import static 
org.apache.flink.table.client.gateway.utils.UserDefinedFunctions.GENERATED_LOWER_UDF_CODE;
+import static 
org.apache.flink.table.client.gateway.utils.UserDefinedFunctions.GENERATED_UPPER_UDF_CLASS;
+import static 
org.apache.flink.table.client.gateway.utils.UserDefinedFunctions.GENERATED_UPPER_UDF_CODE;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+/** Tests for classloading and class loader utilities. */
+public class ClientMutableURLClassLoaderTest {
+
+@TempDir private static File tempDir;
+
+private static File userJar;
+
+@BeforeAll
+public static void prepare() throws Exception {
+Map classNameCodes = new HashMap<>();
+classNameCodes.put(GENERATED_LOWER_UDF_CLASS, 
GENERATED_LOWER_UDF_CODE);
+classNameCodes.put(GENERATED_UPPER_UDF_CLASS, 
GENERATED_UPPER_UDF_CODE);
+userJar =
+UserClassLoaderJarTestUtils.createJarFile(
+tempDir, "test-classloader.jar", classNameCodes);
+}
+
+@Test
+public void testClassLoadingByAddURL() throws Exception {
+Configuration configuration = new Configuration();
+final ClientMutableURLClassLoader classLoader =
+new ClientMutableURLClassLoader(
+configuration,
+MutableURLClassLoader.newInstance(
+new URL[0], getClass().getClassLoader(), 
configuration));
+
+// test class loader before add jar url to ClassLoader
+assertClassNotFoundException(GENERATED_LOWER_UDF_CLASS, false, 
classLoader);
+
+// add jar url to ClassLoader
+classLoader.addURL(userJar.toURI().toURL());
+
+assertTrue(classLoader.getURLs().length == 1);
+
+final Class clazz1 = Class.forName(GENERATED_LOWER_UDF_CLASS, 
false, classLoader);
+final Class clazz2 = Class.forName(GENERATED_LOWER_UDF_CLASS, 
false, classLoader);
+
+assertEquals(clazz1, clazz2);
+
+classLoader.close();
+}
+
+@Test
+public void testClassLoadingByRemoveURL() throws Exception {
+URL jarURL = userJar.toURI().toURL();
+Configuration configuration = new Configuration();
+final ClientMutableURLClassLoader classLoader =
+new ClientMutableURLClassLoader(
+configuration,
+MutableURLClassLoader.newInstance(
+new URL[] {jarURL}, getClass().getClassLoader(), configuration));
+
+final Class<?> clazz1 = Class.forName(GENERATED_LOWER_UDF_CLASS, false, classLoader);
+final Class<?> clazz2 = Class.forName(GENERATED_LOWER_UDF_CLASS, false, classLoader);
+assertEquals(clazz1, clazz2);
+
+// remove jar url
+classLoader.removeURL(jarURL);
+
+assertTrue(classLoader.getURLs().length == 0);
+
+// test class loader after remove jar url from ClassLoader
+assertClassNotFoundException(GENERATED_UPPER_UDF_CLASS, false, classLoader);
+
+// add jar url to ClassLoader again
+classLoader.addURL(jarURL);
+
+assertTrue(classLoader.getURLs().length == 1);
+
+final Class clazz3 

[GitHub] [flink] lsyldliu commented on a diff in pull request #20001: [FLINK-27659][table] Planner support to use jar which is registered by 'CREATE FUNTION USING JAR' syntax

2022-07-15 Thread GitBox


lsyldliu commented on code in PR #20001:
URL: https://github.com/apache/flink/pull/20001#discussion_r922148341


##
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/context/SessionContext.java:
##
@@ -235,69 +231,58 @@ public static SessionContext create(DefaultContext defaultContext, String sessionId
 settings.getBuiltInDatabaseName()))
 .build();
 
-FunctionCatalog functionCatalog =
-new FunctionCatalog(configuration, catalogManager, moduleManager, classLoader);
-SessionState sessionState =
-new SessionState(catalogManager, moduleManager, functionCatalog);
+final FunctionCatalog functionCatalog =
+new FunctionCatalog(configuration, resourceManager, catalogManager, moduleManager);
+final SessionState sessionState =
+new SessionState(catalogManager, moduleManager, resourceManager, functionCatalog);
 
 // 
--
 // Init ExecutionContext
 // 
--
 
 ExecutionContext executionContext =
-new ExecutionContext(configuration, classLoader, sessionState);
+new ExecutionContext(configuration, userClassLoader, sessionState);
 
 return new SessionContext(
 defaultContext,
 sessionId,
 configuration,
-classLoader,
+userClassLoader,
 sessionState,
 executionContext);
 }
 
 public void addJar(String jarPath) {
-URL jarURL = getURLFromPath(jarPath, "SQL Client only supports to add local jars.");
-if (dependencies.contains(jarURL)) {
-return;
+checkJarPath(jarPath, "SQL Client only supports to add local jars.");
+try {
+sessionState.resourceManager.registerJarResource(
+Collections.singletonList(new ResourceUri(ResourceType.JAR, jarPath)));
+} catch (IOException e) {
+LOG.warn(String.format("Could not register the specified jar [%s].", jarPath), e);
 }
-
-// merge the jars in config with the jars maintained in session
-Set<URL> jarsInConfig = getJarsInConfig();
-
-Set<URL> newDependencies = new HashSet<>(dependencies);
-newDependencies.addAll(jarsInConfig);
-newDependencies.add(jarURL);
-updateClassLoaderAndDependencies(newDependencies);
-
-// renew the execution context
-executionContext = new ExecutionContext(sessionConfiguration, classLoader, sessionState);
 }
 
 public void removeJar(String jarPath) {
-URL jarURL = getURLFromPath(jarPath, "SQL Client only supports to remove local jars.");
-if (!dependencies.contains(jarURL)) {
+// if it is a relative path, convert it to an absolute path
+URL jarURL = checkJarPath(jarPath, "SQL Client only supports to remove local jars.");
+// remove jar from resource manager
+jarURL = sessionState.resourceManager.unregisterJarResource(jarURL.getPath());
+if (jarURL == null) {
 LOG.warn(
 String.format(
-"Could not remove the specified jar because the 
jar path(%s) is not found in session classloader.",
+"Could not remove the specified jar because the 
jar path [%s] hadn't registered to classloader.",
 jarPath));
 return;
 }
-
-Set<URL> newDependencies = new HashSet<>(dependencies);
-// merge the jars in config with the jars maintained in session
-Set<URL> jarsInConfig = getJarsInConfig();
-newDependencies.addAll(jarsInConfig);
-newDependencies.remove(jarURL);
-
-updateClassLoaderAndDependencies(newDependencies);
-
-// renew the execution context
-executionContext = new ExecutionContext(sessionConfiguration, classLoader, sessionState);
+// remove jar from classloader
+classLoader.removeURL(jarURL);
 }
 
 public List<String> listJars() {
-return dependencies.stream().map(URL::getPath).collect(Collectors.toList());
+return sessionState.resourceManager.getResources().keySet().stream()

Review Comment:
   I think this should return the user-registered URI instead of the local URL.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] lsyldliu commented on a diff in pull request #20001: [FLINK-27659][table] Planner support to use jar which is registered by 'CREATE FUNTION USING JAR' syntax

2022-07-15 Thread GitBox


lsyldliu commented on code in PR #20001:
URL: https://github.com/apache/flink/pull/20001#discussion_r922147468


##
flink-table/flink-sql-client/src/main/java/org/apache/flink/util/ClientMutableURLClassLoader.java:
##
@@ -0,0 +1,140 @@
+/*
+ *  Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.flink.util;
+
+import org.apache.flink.annotation.Experimental;
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.configuration.Configuration;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Set;
+import java.util.stream.Collectors;
+import java.util.stream.Stream;
+
+/**
+ * This class loader extends {@link MutableURLClassLoader}; besides the {@code addURL} method, it
+ * also exposes a {@code removeURL} method which is used to remove unnecessary jars from the
+ * current classloader path. This class loader wraps a {@link MutableURLClassLoader} and a list of
+ * old classloaders; class loading is delegated to the inner {@link MutableURLClassLoader}.
+ *
+ * This is currently only used by the SqlClient to support the {@code REMOVE JAR} clause. When
+ * removing a jar, first get the registered jar url list from the current
+ * {@link MutableURLClassLoader}, then create a new {@link MutableURLClassLoader} instance whose
+ * urls don't include the removed jar, and point currentClassLoader to the new instance; the old
+ * instance is added to a list that is closed when {@link ClientMutableURLClassLoader} is closed.
+ *
+ * Note: This classloader is not guaranteed to actually remove classes or resources; any classes
+ * or resources in the removed jar that are already loaded are still accessible.
+ */
+@Experimental
+@Internal
+public class ClientMutableURLClassLoader extends MutableURLClassLoader {
+
+private static final Logger LOG = LoggerFactory.getLogger(ClientMutableURLClassLoader.class);
+
+static {
+ClassLoader.registerAsParallelCapable();
+}
+
+private final Configuration configuration;
+private final List<MutableURLClassLoader> oldClassLoaders = new ArrayList<>();
+private MutableURLClassLoader currentClassLoader;
+
+public ClientMutableURLClassLoader(
+Configuration configuration, MutableURLClassLoader mutableURLClassLoader) {
+super(new URL[0], mutableURLClassLoader);
+this.configuration = configuration;
+this.currentClassLoader = mutableURLClassLoader;
+}
+
+@Override
+protected final Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
+return currentClassLoader.loadClass(name, resolve);
+}
+
+@Override
+public void addURL(URL url) {
+currentClassLoader.addURL(url);
+}
+
+public void removeURL(URL url) {
+Set<URL> registeredUrls =
+Stream.of(currentClassLoader.getURLs()).collect(Collectors.toSet());
+if (!registeredUrls.contains(url)) {
+LOG.warn(
+String.format(
+"Could not remove the specified jar because the 
jar path [%s] is not found in classloader.",
+url));
+return;
+}
+
+// add current classloader to list
+oldClassLoaders.add(currentClassLoader);
+// remove url from registeredUrls
+registeredUrls.remove(url);
+// update current classloader to point to a new MutableURLClassLoader instance
+currentClassLoader =
+MutableURLClassLoader.newInstance(
+registeredUrls.toArray(new URL[0]),
+currentClassLoader.getParent(),
+configuration);
+}
+
+@Override
+public URL[] getURLs() {
+return currentClassLoader.getURLs();
+}
+
+@Override
+public void close() throws IOException {
+IOException exception = null;
+try {
+// close current classloader
+currentClassLoader.close();
+} catch (IOException e) {
+LOG.debug("Error while closing class loader in 
ClientMutableURLClassLoader.", e);
+   

[GitHub] [flink] lsyldliu commented on a diff in pull request #20001: [FLINK-27659][table] Planner support to use jar which is registered by 'CREATE FUNTION USING JAR' syntax

2022-07-15 Thread GitBox


lsyldliu commented on code in PR #20001:
URL: https://github.com/apache/flink/pull/20001#discussion_r922147066


##
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/CatalogManager.java:
##
@@ -949,4 +954,8 @@ public ResolvedCatalogView resolveCatalogView(CatalogView view) {
 final ResolvedSchema resolvedSchema = view.getUnresolvedSchema().resolve(schemaResolver);
 return new ResolvedCatalogView(view, resolvedSchema);
 }
+
+public ClassLoader getUserClassLoader() {
+return userClassLoader;
+}

Review Comment:
   For the Hive dialect, we need it. It's removed here first; we can expose it in the next PR.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-28572) FlinkSQL executes Table.execute multiple times on the same Table, and changing the Table.execute code position will produce different phenomena

2022-07-15 Thread Andrew Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Chan updated FLINK-28572:

Description: 
*The following code prints and inserts fine*

public static void main(String[] args) {
Configuration conf = new Configuration();
conf.setInteger("rest.port", 2000);
StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment(conf);
env.setParallelism(1);
StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
tEnv.executeSql("create table sensor(" +
" id string, " +
" ts bigint, " +
" vc int" +
")with(" +
" 'connector' = 'kafka', " +
" 'topic' = 's1', " +
" 'properties.bootstrap.servers' = 'hadoop162:9092', " +
" 'properties.group.id' = 'atguigu', " +
" 'scan.startup.mode' = 'latest-offset', " +
" 'format' = 'csv'" +
")");
Table result = tEnv.sqlQuery("select * from sensor");

tEnv.executeSql("create table s_out(" +
" id string, " +
" ts bigint, " +
" vc int" +
")with(" +
" 'connector' = 'kafka', " +
" 'topic' = 's2', " +
" 'properties.bootstrap.servers' = 'hadoop162:9092', " +
" 'format' = 'json', " +
" 'sink.partitioner' = 'round-robin' " +
")");
result.executeInsert("s_out");
result.execute().print();

}

---

*When the code that prints this line is moved up, it can be printed normally, 
but the insert statement is invalid, as follows*

public static void main(String[] args) {
Configuration conf = new Configuration();
conf.setInteger("rest.port", 2000);
StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment(conf);
env.setParallelism(1);

StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
tEnv.executeSql("create table sensor(" +
" id string, " +
" ts bigint, " +
" vc int" +
")with(" +
" 'connector' = 'kafka', " +
" 'topic' = 's1', " +
" 'properties.bootstrap.servers' = 'hadoop162:9092', " +
" 'properties.group.id' = 'atguigu', " +
" 'scan.startup.mode' = 'latest-offset', " +
" 'format' = 'csv'" +
")");

Table result = tEnv.sqlQuery("select * from sensor");

tEnv.executeSql("create table s_out(" +
" id string, " +
" ts bigint, " +
" vc int" +
")with(" +
" 'connector' = 'kafka', " +
" 'topic' = 's2', " +
" 'properties.bootstrap.servers' = 'hadoop162:9092', " +
" 'format' = 'json', " +
" 'sink.partitioner' = 'round-robin' " +
")");
result.execute().print();
result.executeInsert("s_out");
}

  was:
*The following code prints and inserts fine*

public static void main(String[] args) {
Configuration conf = new Configuration();
conf.setInteger("rest.port", 2000);
StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment(conf);
env.setParallelism(1);
StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
tEnv.executeSql("create table sensor(" +
" id string, " +
" ts bigint, " +
" vc int" +
")with(" +
" 'connector' = 'kafka', " +
" 'topic' = 's1', " +
" 'properties.bootstrap.servers' = 'hadoop162:9092', " +
" 'properties.group.id' = 'atguigu', " +
" 'scan.startup.mode' = 'latest-offset', " +
" 'format' = 'csv'" +
")");
Table result = tEnv.sqlQuery("select * from sensor");
tEnv.executeSql("create table s_out(" +
" id string, " +
" ts bigint, " +
" vc int" +
")with(" +
" 'connector' = 'kafka', " +
" 'topic' = 's2', " +
" 'properties.bootstrap.servers' = 'hadoop162:9092', " +
" 'format' = 'json', " +
" 'sink.partitioner' = 'round-robin' " +
")");
result.executeInsert("s_out");
result.execute().print();
}

 

*When the code that prints this line is moved up, it can be printed normally, 
but the insert statement is invalid, as follows*

public static void main(String[] args) {
Configuration conf = new Configuration();
conf.setInteger("rest.port", 2000);
StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment(conf);
env.setParallelism(1);

StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
// 1. Create the table (a dynamic table) via DDL and associate it with the file
tEnv.executeSql("create table sensor(" +
" id string, " +
" ts bigint, " +
" vc int" +
")with(" +
" 'connector' = 'kafka', " +
" 'topic' = 's1', " +
" 'properties.bootstrap.servers' = 'hadoop162:9092', " +
" 'properties.group.id' = 'atguigu', " +
" 'scan.startup.mode' = 'latest-offset', " +
" 'format' = 'csv'" +
")");


Table result = tEnv.sqlQuery("select * from sensor");

tEnv.executeSql("create table s_out(" +
" id string, " +
" ts bigint, " +
" vc int" +
")with(" +
" 'connector' = 'kafka', " +
" 'topic' = 's2', " +
" 'properties.bootstrap.servers' = 'hadoop162:9092', " +
" 'format' = 'json', " +
" 'sink.partitioner' = 'round-robin' " +
")");
result.execute().print();
result.executeInsert("s_out");
}


> FlinkSQL executes Table.execute multiple times on the same Table, and 
> changing the Table.execute code position will produce different phenomena
> 

[jira] [Created] (FLINK-28572) FlinkSQL executes Table.execute multiple times on the same Table, and changing the Table.execute code position will produce different phenomena

2022-07-15 Thread Andrew Chan (Jira)
Andrew Chan created FLINK-28572:
---

 Summary: FlinkSQL executes Table.execute multiple times on the 
same Table, and changing the Table.execute code position will produce different 
phenomena
 Key: FLINK-28572
 URL: https://issues.apache.org/jira/browse/FLINK-28572
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API, Table SQL / Planner
Affects Versions: 1.13.6
 Environment: flink-table-planner-blink_2.11  

1.13.6
Reporter: Andrew Chan


*The following code prints and inserts fine*

public static void main(String[] args) {
Configuration conf = new Configuration();
conf.setInteger("rest.port", 2000);
StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment(conf);
env.setParallelism(1);
StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
tEnv.executeSql("create table sensor(" +
" id string, " +
" ts bigint, " +
" vc int" +
")with(" +
" 'connector' = 'kafka', " +
" 'topic' = 's1', " +
" 'properties.bootstrap.servers' = 'hadoop162:9092', " +
" 'properties.group.id' = 'atguigu', " +
" 'scan.startup.mode' = 'latest-offset', " +
" 'format' = 'csv'" +
")");
Table result = tEnv.sqlQuery("select * from sensor");
tEnv.executeSql("create table s_out(" +
" id string, " +
" ts bigint, " +
" vc int" +
")with(" +
" 'connector' = 'kafka', " +
" 'topic' = 's2', " +
" 'properties.bootstrap.servers' = 'hadoop162:9092', " +
" 'format' = 'json', " +
" 'sink.partitioner' = 'round-robin' " +
")");
result.executeInsert("s_out");
result.execute().print();
}

 

*When the code that prints this line is moved up, it can be printed normally, 
but the insert statement is invalid, as follows*

public static void main(String[] args) {
Configuration conf = new Configuration();
conf.setInteger("rest.port", 2000);
StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment(conf);
env.setParallelism(1);

StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
// 1. Create the table (a dynamic table) via DDL and associate it with the file
tEnv.executeSql("create table sensor(" +
" id string, " +
" ts bigint, " +
" vc int" +
")with(" +
" 'connector' = 'kafka', " +
" 'topic' = 's1', " +
" 'properties.bootstrap.servers' = 'hadoop162:9092', " +
" 'properties.group.id' = 'atguigu', " +
" 'scan.startup.mode' = 'latest-offset', " +
" 'format' = 'csv'" +
")");


Table result = tEnv.sqlQuery("select * from sensor");

tEnv.executeSql("create table s_out(" +
" id string, " +
" ts bigint, " +
" vc int" +
")with(" +
" 'connector' = 'kafka', " +
" 'topic' = 's2', " +
" 'properties.bootstrap.servers' = 'hadoop162:9092', " +
" 'format' = 'json', " +
" 'sink.partitioner' = 'round-robin' " +
")");
result.execute().print();
result.executeInsert("s_out");
}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-25421) Port JDBC Sink to new Unified Sink API (FLIP-143)

2022-07-15 Thread liwei li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17567225#comment-17567225
 ] 

liwei li commented on FLINK-25421:
--

Oh, my God. I'm so sorry for the late reply. I was on sick leave during that 
time and I forgot about this. 

I see another PR is doing this and also noticed that [~RocMarshal] was working 
on the JDBC source. I wonder if there has been any progress on this recently?

Hi [~eskabetxe], I have seen that you have done a lot of work on this. If 
[~martijnvisser] thinks your PR is going in the right direction, I think you 
can continue. I hope I am not in your way.

> Port JDBC Sink to new Unified Sink API (FLIP-143)
> -
>
> Key: FLINK-25421
> URL: https://issues.apache.org/jira/browse/FLINK-25421
> Project: Flink
>  Issue Type: Improvement
>Reporter: Martijn Visser
>Priority: Major
>
> The current JDBC connector is using the old SinkFunction interface, which is 
> going to be deprecated. We should port/refactor the JDBC Sink to use the new 
> Unified Sink API, based on FLIP-143 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-143%3A+Unified+Sink+API



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-25256) Savepoints do not work with ExternallyInducedSources

2022-07-15 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski closed FLINK-25256.
--
Fix Version/s: 1.15.0
   (was: 1.14.6)
   Resolution: Fixed

> Savepoints do not work with ExternallyInducedSources
> 
>
> Key: FLINK-25256
> URL: https://issues.apache.org/jira/browse/FLINK-25256
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.14.0, 1.13.3, 1.12.7
>Reporter: Dawid Wysakowicz
>Assignee: Arvid Heise
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.16.0, 1.15.0
>
>
> It is not possible to take a proper savepoint with 
> {{ExternallyInducedSource}} or {{ExternallyInducedSourceReader}} (both legacy 
> and FLIP-27 versions). The problem is that we're hardcoding 
> {{CheckpointOptions}} in the {{triggerHook}}.
> The outcome of current state is that operators would try to take checkpoints 
> in the checkpoint location whereas the {{CheckpointCoordinator}} will write 
> metadata for those states in the savepoint location.
> Moreover the situation gets even weirder (I have not checked it entirely), if 
> we have a mixture of {{ExternallyInducedSource(s)}} and regular sources. In 
> such a case the location and format at which the state of a particular task 
> is persisted depends on the order of barriers arrival. If a barrier from a 
> regular source arrives last the task takes a savepoint, on the other hand if 
> last barrier is from an externally induced source it will take a checkpoint.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] pnowojski closed pull request #19273: [FLINK-25256][streaming] Externally induced sources replay barriers received over RPC instead of inventing them out of thin air. [1.14]

2022-07-15 Thread GitBox


pnowojski closed pull request #19273: [FLINK-25256][streaming] Externally 
induced sources replay barriers received over RPC instead of inventing them out 
of thin air. [1.14]
URL: https://github.com/apache/flink/pull/19273


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] pnowojski commented on pull request #19273: [FLINK-25256][streaming] Externally induced sources replay barriers received over RPC instead of inventing them out of thin air. [1.14]

2022-07-15 Thread GitBox


pnowojski commented on PR #19273:
URL: https://github.com/apache/flink/pull/19273#issuecomment-1185498343

   PR has been abandoned 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-28571) Add Chi-squared test as Transformer to ml.feature

2022-07-15 Thread Simon Tao (Jira)
Simon Tao created FLINK-28571:
-

 Summary: Add Chi-squared test as Transformer to ml.feature
 Key: FLINK-28571
 URL: https://issues.apache.org/jira/browse/FLINK-28571
 Project: Flink
  Issue Type: New Feature
  Components: Library / Machine Learning
Reporter: Simon Tao


Pearson's chi-squared test: https://en.wikipedia.org/wiki/Pearson%27s_chi-squared_test

For more information on chi-squared: http://en.wikipedia.org/wiki/Chi-squared_test
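
For reference, the Pearson chi-squared statistic is chi^2 = sum over categories of (observed - expected)^2 / expected. Below is a minimal, self-contained Java sketch of that computation (illustrative only; the class and method names are assumptions, and this is not the proposed Flink ML Transformer API):

{code:java}
public final class ChiSquaredSketch {

    /** Computes Pearson's chi-squared statistic for observed vs. expected counts. */
    public static double chiSquared(double[] observed, double[] expected) {
        if (observed.length != expected.length) {
            throw new IllegalArgumentException("observed and expected must have the same length");
        }
        double statistic = 0.0;
        for (int i = 0; i < observed.length; i++) {
            double diff = observed[i] - expected[i];
            statistic += diff * diff / expected[i];
        }
        return statistic;
    }

    public static void main(String[] args) {
        // Example: observed die rolls vs. a uniform expectation over 60 throws.
        double[] observed = {5, 8, 9, 8, 10, 20};
        double[] expected = {10, 10, 10, 10, 10, 10};
        System.out.println(chiSquared(observed, expected)); // prints approximately 13.4
    }
}
{code}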



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] MartijnVisser merged pull request #20226: [hotfix][docs-zh]Fix Chinese document format errors.

2022-07-15 Thread GitBox


MartijnVisser merged PR #20226:
URL: https://github.com/apache/flink/pull/20226


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] sunshineJK commented on pull request #20127: [FLINK-26270] Flink SQL write data to kafka by CSV format , whether d…

2022-07-15 Thread GitBox


sunshineJK commented on PR #20127:
URL: https://github.com/apache/flink/pull/20127#issuecomment-1185475290

   hi @afedulov Please review it again


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] MartijnVisser merged pull request #20253: [hotfix][docs-zh] Add missing the working_directory.md file to the standalone part.

2022-07-15 Thread GitBox


MartijnVisser merged PR #20253:
URL: https://github.com/apache/flink/pull/20253


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] GJL commented on a diff in pull request #4910: [FLINK-7784] [kafka-producer] Don't fail TwoPhaseCommitSinkFunction when failing to commit during job recovery

2022-07-15 Thread GitBox


GJL commented on code in PR #4910:
URL: https://github.com/apache/flink/pull/4910#discussion_r922073308


##
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/TwoPhaseCommitSinkFunction.java:
##
@@ -312,20 +347,104 @@ public void initializeState(FunctionInitializationContext context) throws Exception
}
this.pendingCommitTransactions.clear();
 
-   currentTransaction = beginTransaction();
-   LOG.debug("{} - started new transaction '{}'", name(), currentTransaction);
+   currentTransactionHolder = beginTransactionInternal();
+   LOG.debug("{} - started new transaction '{}'", name(), currentTransactionHolder);
+   }
+
+   /**
+* This method must be the only place to call {@link #beginTransaction()} to ensure that the
+* {@link TransactionHolder} is created at the same time.
+*/
+   private TransactionHolder<TXN> beginTransactionInternal() throws Exception {
+   return new TransactionHolder<>(beginTransaction(), clock.millis());
+   }
+
+   /**
+* This method must be the only place to call {@link #recoverAndCommit(Object)} to ensure that
+* the configuration parameters {@link #transactionTimeout} and
+* {@link #ignoreFailuresAfterTransactionTimeout} are respected.
+*/
+   private void recoverAndCommitInternal(TransactionHolder<TXN> transactionHolder) {
+   try {
+   logWarningIfTimeoutAlmostReached(transactionHolder);
+   recoverAndCommit(transactionHolder.handle);
+   } catch (final Exception e) {

Review Comment:
   I think changing `Exception` to `RuntimeException` won't be semantically 
equivalent due to the possibility of [sneaky 
throws](https://www.baeldung.com/java-sneaky-throws#sneaky). My memory about 
this code is a bit fuzzy but I assume there can be sneaky throws from within 
`recoverAndCommit` because the `commit` callback is public API. 
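
   For readers unfamiliar with the term: a "sneaky throw" lets a checked exception escape a method whose signature declares no checked exceptions, so a `catch (RuntimeException e)` clause would not see it while `catch (Exception e)` does. A minimal, self-contained illustration (the class and method names are assumptions for this sketch, not Flink code):

```java
import java.io.IOException;

public final class SneakyThrowExample {

    @SuppressWarnings("unchecked")
    private static <T extends Throwable> void sneakyThrow(Throwable t) throws T {
        // The cast is erased at runtime, so the original (checked) throwable escapes.
        throw (T) t;
    }

    // Compiles without `throws IOException`: T is inferred as RuntimeException here.
    static void commitCallback() {
        sneakyThrow(new IOException("checked exception thrown sneakily"));
    }

    public static void main(String[] args) {
        try {
            commitCallback();
        } catch (RuntimeException e) {
            System.out.println("not reached: an IOException is not a RuntimeException");
        } catch (Exception e) {
            System.out.println("caught the sneakily thrown exception: " + e);
        }
    }
}
```

   This is why catching `Exception` rather than only `RuntimeException` is the safer choice around a user-provided callback.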



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] lsyldliu commented on a diff in pull request #20242: [FLINK-28489][sql-parser] Introduce `ANALYZE TABLE` syntax in sql parser

2022-07-15 Thread GitBox


lsyldliu commented on code in PR #20242:
URL: https://github.com/apache/flink/pull/20242#discussion_r922068090


##
flink-table/flink-sql-parser/src/main/codegen/includes/parserImpls.ftl:
##
@@ -2203,3 +2203,81 @@ SqlNode TryCastFunctionCall() :
 return operator.createCall(s.end(this), args);
 }
 }
+
+/**
+* Parses a partition key/value,
+* e.g. p or p = '10'.
+*/
+SqlPair PartitionKeyValuePair():
+{
+SqlIdentifier key;
+SqlNode value = null;
+SqlParserPos pos;
+}
+{
+key = SimpleIdentifier() { pos = getPos(); }
+[
+LOOKAHEAD(1)
+ value = Literal()
+]
+{
+   return new SqlPair(key, value, pos);
+}
+}
+
+/**
+* Parses a partition specifications statement,
+* e.g. ANALYZE TABLE tbl1 partition(col1='val1', col2='val2') xxx
+* or
+* ANALYZE TABLE tbl1 partition(col1, col2) xxx.
+*/
+void ExtendedPartitionSpecCommaList(SqlNodeList list) :
+{
+SqlPair keyValuePair;
+}
+{
+
+keyValuePair = PartitionKeyValuePair()
+{
+   list.add(keyValuePair);
+}
+(
+ keyValuePair = PartitionKeyValuePair()
+{
+list.add(keyValuePair);
+}
+)*
+
+}
+
+/** Parses an ANALYZE TABLE statement. */
+SqlNode SqlAnalyzeTable():
+{
+   Span s;
+   SqlIdentifier tableName;
+   SqlNodeList partitionSpec = null;
+   SqlNodeList columns = null;
+   boolean allColumns = false;
+}
+{
+  { s = span(); }
+tableName = CompoundIdentifier()
+[
+ {
+partitionSpec = new SqlNodeList(getPos());
+ExtendedPartitionSpecCommaList(partitionSpec);
+}
+]
+
+  [
+(

Review Comment:
   I have tested it, and it can.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-28567) Introduce predicate inference from one side of join to the other

2022-07-15 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser reassigned FLINK-28567:
--

Assignee: Alexander Trushev

> Introduce predicate inference from one side of join to the other
> 
>
> Key: FLINK-28567
> URL: https://issues.apache.org/jira/browse/FLINK-28567
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Reporter: Alexander Trushev
>Assignee: Alexander Trushev
>Priority: Major
>  Labels: pull-request-available
>
> h2. Context
> There is a JoinPushTransitivePredicatesRule in Calcite that infers predicates from a Join and creates Filters if those predicates can be pushed to its inputs.
> *Example.* (a0 = b0 AND a0 > 0) => (b0 > 0)
> h2. Proposal
> Add org.apache.calcite.rel.rules.JoinPushTransitivePredicatesRule to 
> FlinkStreamRuleSets and to FlinkBatchRuleSets
> h2. Benefit
> Before the changes:
> {code}
> Flink SQL> explain select * from A join B on a0 = b0 and a0 > 0;
> Join(joinType=[InnerJoin], where=[=(a0, b0)], select=[a0, b0], 
> leftInputSpec=[NoUniqueKey], rightInputSpec=[NoUniqueKey])
> :- Exchange(distribution=[hash[a0]])
> :  +- Calc(select=[a0], where=[>(a0, 0)])
> : +- TableSourceScan(table=[[default_catalog, default_database, A, 
> filter=[]]], fields=[a0])
> +- Exchange(distribution=[hash[b0]])
>+- TableSourceScan(table=[[default_catalog, default_database, B]], 
> fields=[b0])
> {code}
> After the changes:
> {code}
> Join(joinType=[InnerJoin], where=[=(a0, b0)], select=[a0, b0], 
> leftInputSpec=[NoUniqueKey], rightInputSpec=[NoUniqueKey])
> :- Exchange(distribution=[hash[a0]])
> :  +- Calc(select=[a0], where=[>(a0, 0)])
> : +- TableSourceScan(table=[[default_catalog, default_database, A, 
> filter=[]]], fields=[a0])
> +- Exchange(distribution=[hash[b0]])
>+- Calc(select=[b0], where=[>(b0, 0)])
>   +- TableSourceScan(table=[[default_catalog, default_database, B, 
> filter=[]]], fields=[b0])
> {code}
> i.e., b0 > 0 is inferred and pushed down



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-16165) NestedLoopJoin fallback to HashJoin when build records number is very large

2022-07-15 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16165:
---
  Labels: auto-deprioritized-critical auto-deprioritized-major 
auto-deprioritized-minor  (was: auto-deprioritized-critical 
auto-deprioritized-major auto-deprioritized-minor stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> NestedLoopJoin fallback to HashJoin when build records number is very large
> ---
>
> Key: FLINK-16165
> URL: https://issues.apache.org/jira/browse/FLINK-16165
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Reporter: Jingsong Lee
>Priority: Minor
>  Labels: auto-deprioritized-critical, auto-deprioritized-major, 
> auto-deprioritized-minor
>
> Now, if the statistics are not accurate, the planner may choose the wrong 
> join type and pick a nested loop join.
> If the number of build records is very large, this leads to a very slow join; 
> to the user it looks like the join is hanging.
> It is a stability problem; we should fall back to hash join at runtime to 
> avoid this hang.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-15959) Add min number of slots configuration to limit total number of slots

2022-07-15 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-15959:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
 (was: auto-deprioritized-major auto-unassigned stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Add min number of slots configuration to limit total number of slots
> 
>
> Key: FLINK-15959
> URL: https://issues.apache.org/jira/browse/FLINK-15959
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.11.0
>Reporter: YufeiLiu
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned
>
> Flink removed the `-n` option after FLIP-6, changing to the ResourceManager 
> starting a new worker when required. But I think maintaining a certain number 
> of slots is necessary. These workers would start immediately when the 
> ResourceManager starts and would not be released even if all slots are free.
> Here are some reasons:
> # Users actually know how many resources are needed when running a single 
> job; initializing all workers when the cluster starts can speed up the 
> startup process.
> # Jobs are scheduled in topology order; the next operator won't be scheduled 
> until the prior execution's slot is allocated. The TaskExecutors will start 
> in several batches in some cases, which might slow down the startup speed.
> # Flink supports 
> [FLINK-12122|https://issues.apache.org/jira/browse/FLINK-12122] [Spread out 
> tasks evenly across all available registered TaskManagers], but it will only 
> take effect if all TMs are registered. Starting all TMs at the beginning can 
> solve this problem.
> *suggestion:*
> * Add configs "taskmanager.minimum.numberOfTotalSlots" and 
> "taskmanager.maximum.numberOfTotalSlots"; the default behavior stays the 
> same as before.
> * Start enough workers to satisfy the minimum number of slots when the 
> ResourceManager accepts leadership (subtracting recovered workers).
> * Don't complete slot requests until the minimum number of slots are 
> registered, and throw an exception when the maximum is exceeded.
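
As an aside on the proposal quoted above, a hedged sketch of how the suggested options might be set from user code (the two "taskmanager.*.numberOfTotalSlots" keys are only a proposal in this ticket and do not exist in Flink):

{code:java}
import org.apache.flink.configuration.Configuration;

public class MinMaxSlotsConfigSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Hypothetical keys taken from the proposal above; not real Flink options.
        conf.setInteger("taskmanager.minimum.numberOfTotalSlots", 8);
        conf.setInteger("taskmanager.maximum.numberOfTotalSlots", 32);
        System.out.println(conf);
    }
}
{code}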



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-20714) Hive delegation token is not obtained when using `kinit` to submit Yarn per-job

2022-07-15 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-20714:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor keberos  (was: 
auto-deprioritized-major keberos stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Hive delegation token is not obtained when using `kinit` to submit Yarn 
> per-job 
> 
>
> Key: FLINK-20714
> URL: https://issues.apache.org/jira/browse/FLINK-20714
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Affects Versions: 1.11.2, 1.11.3, 1.12.0
> Environment: Flink 1.11.2 on Yarn
>Reporter: jackwangcs
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> keberos
>
> Hive delegation token is not obtained when using `kinit` to submit Yarn 
> per-job. 
> In YarnClusterDescriptor, it calls org.apache.flink.yarn.Utils#setTokensFor 
> to obtain tokens for the job. But setTokensFor only obtains HDFS and HBase 
> tokens currently, since the Hive integration is supported, the Hive 
> delegation should be obtained also. 
>  Otherwise, it will throw the following error when it tries to connect to 
> Hive metastore:
> {code:java}
> Caused by: MetaException(message:Could not connect to meta store using any of 
> the URIs provided. Most recent failure: 
> org.apache.thrift.transport.TTransportException: GSS initiate failedCaused 
> by: MetaException(message:Could not connect to meta store using any of the 
> URIs provided. Most recent failure: 
> org.apache.thrift.transport.TTransportException: GSS initiate failed at 
> org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
>  at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316) 
> at 
> org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
>  at 
> org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
>  at 
> org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
>  at java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
>  at 
> org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
>  at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:464)
>  at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:244)
>  at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:187)
>  at 
> org.apache.flink.table.catalog.hive.client.HiveShimV100.getHiveMetastoreClient(HiveShimV100.java:97)
>  at 
> org.apache.flink.table.catalog.hive.client.HiveMetastoreClientWrapper.createMetastoreClient(HiveMetastoreClientWrapper.java:240)
>  at 
> org.apache.flink.table.catalog.hive.client.HiveMetastoreClientWrapper.<init>(HiveMetastoreClientWrapper.java:71)
>  at 
> org.apache.flink.table.catalog.hive.client.HiveMetastoreClientFactory.create(HiveMetastoreClientFactory.java:35)
>  at 
> org.apache.flink.connectors.hive.HiveTableMetaStoreFactory$HiveTableMetaStore.<init>(HiveTableMetaStoreFactory.java:74)
>  at 
> org.apache.flink.connectors.hive.HiveTableMetaStoreFactory$HiveTableMetaStore.<init>(HiveTableMetaStoreFactory.java:68)
>  at 
> org.apache.flink.connectors.hive.HiveTableMetaStoreFactory.createTableMetaStore(HiveTableMetaStoreFactory.java:65)
>  at 
> org.apache.flink.connectors.hive.HiveTableMetaStoreFactory.createTableMetaStore(HiveTableMetaStoreFactory.java:43)
>  at 
> org.apache.flink.table.filesystem.PartitionLoader.<init>(PartitionLoader.java:61)
>  at 
> org.apache.flink.table.filesystem.FileSystemCommitter.commitUpToCheckpoint(FileSystemCommitter.java:97)
>  at 
> org.apache.flink.table.filesystem.FileSystemOutputFormat.finalizeGlobal(FileSystemOutputFormat.java:95)
>  {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-22255) AdaptiveScheduler improvements/bugs

2022-07-15 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22255:
---
  Labels: auto-deprioritized-major pull-request-available  (was: 
pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> AdaptiveScheduler improvements/bugs
> ---
>
> Key: FLINK-22255
> URL: https://issues.apache.org/jira/browse/FLINK-22255
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.13.0
>Reporter: Till Rohrmann
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
>
> This ticket collects the improvements/bugs for the {{AdaptiveScheduler}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-19957) flink-sql-connector-hive incompatible with cdh6

2022-07-15 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-19957:
---
  Labels: auto-deprioritized-critical auto-deprioritized-major 
auto-deprioritized-minor  (was: auto-deprioritized-critical 
auto-deprioritized-major stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> flink-sql-connector-hive incompatible with cdh6
> ---
>
> Key: FLINK-19957
> URL: https://issues.apache.org/jira/browse/FLINK-19957
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.11.2
> Environment: Flink 1.11.2
> Hadoop 3.0.0-cdh6.3.1
> Hive 2.1.1-cdh6.3.1
>Reporter: Cheng Pan
>Priority: Not a Priority
>  Labels: auto-deprioritized-critical, auto-deprioritized-major, 
> auto-deprioritized-minor
>
> According to the Flink docs, we should use flink-sql-connector-hive-2.2.0 
> (which should be compatible with Hive 2.0.0 - 2.2.0); actually, we got an 
> exception: Unrecognized Hadoop major version number: 3.0.0-cdh6.3.1;
> If we use flink-sql-connector-hive-2.3.6 (which should be compatible with 
> 2.3.0 - 2.3.6), we encounter another exception: 
> org.apache.thrift.TApplicationException: Invalid method name: 'get_table_req'
> If we copy flink-connector-hive_2.11-1.11.2.jar and 
> hive-exec-2.1.1-cdh6.3.1.jar to flink/lib, it still does not work. Caused by: 
> java.lang.ClassNotFoundException: com.facebook.fb303.FacebookService$Iface



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-13856) Reduce the delete file api when the checkpoint is completed

2022-07-15 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13856:
---
  Labels: auto-deprioritized-major pull-request-available  (was: 
pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Reduce the delete file api when the checkpoint is completed
> ---
>
> Key: FLINK-13856
> URL: https://issues.apache.org/jira/browse/FLINK-13856
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Affects Versions: 1.8.1, 1.9.0
>Reporter: Andrew.D.lin
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
> Attachments: after.png, before.png, 
> f6cc56b7-2c74-4f4b-bb6a-476d28a22096.png
>
>   Original Estimate: 48h
>  Time Spent: 10m
>  Remaining Estimate: 47h 50m
>
> When the new checkpoint is completed, an old checkpoint will be deleted by 
> calling CompletedCheckpoint.discardOnSubsume().
> When deleting old checkpoints, follow these steps:
> 1, drop the metadata
> 2, discard private state objects
> 3, discard location as a whole
> In some cases, is it possible to delete the checkpoint folder recursively by 
> one call?
> As far as I know, for a full checkpoint it should be possible to delete the 
> folder directly.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-25217) FLIP-190: Support Version Upgrades for Table API & SQL Programs

2022-07-15 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-25217:
---
  Labels: auto-deprioritized-major pull-request-available  (was: 
pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> FLIP-190: Support Version Upgrades for Table API & SQL Programs
> ---
>
> Key: FLINK-25217
> URL: https://issues.apache.org/jira/browse/FLINK-25217
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: Timo Walther
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
>
> Nowadays, the Table & SQL API is as important to Flink as the DataStream API. 
> It is one of the main abstractions for expressing pipelines that perform 
> stateful stream processing. Users expect the same backwards compatibility 
> guarantees when upgrading to a newer Flink version as with the DataStream API.
> In particular, this means:
> * once the operator topology is defined, it remains static and does not 
> change between Flink versions, unless resulting in better performance,
> * business logic (defined using expressions and functions in queries) behaves 
> identical as before the version upgrade,
> * the state of a Table & SQL API program can be restored from a savepoint of 
> a previous version,
> * adding or removing stateful operators should be made possible in the 
> DataStream API.
> The same query can remain up and running after upgrades.
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=191336489



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-19628) Introduce multi-input operator for streaming

2022-07-15 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-19628:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
pull-request-available  (was: auto-deprioritized-major auto-unassigned 
pull-request-available stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Introduce multi-input operator for streaming
> 
>
> Key: FLINK-19628
> URL: https://issues.apache.org/jira/browse/FLINK-19628
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Runtime
>Reporter: Caizhi Weng
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned, pull-request-available
>
> After the planner is ready for multi-input, we should introduce multi-input 
> operator for streaming.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-21608) YARNHighAvailabilityITCase.testClusterClientRetrieval fails with "There is at least one application..."

2022-07-15 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-21608:
---
  Labels: auto-deprioritized-major test-stability  (was: stale-major 
test-stability)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> YARNHighAvailabilityITCase.testClusterClientRetrieval fails with "There is at 
> least one application..."
> ---
>
> Key: FLINK-21608
> URL: https://issues.apache.org/jira/browse/FLINK-21608
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN, Tests
>Affects Versions: 1.13.0, 1.15.0
>Reporter: Dawid Wysakowicz
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14108=logs=fc5181b0-e452-5c8f-68de-1097947f6483=62110053-334f-5295-a0ab-80dd7e2babbf
> {code}
> 2021-03-04T10:45:47.3523930Z INFO: Binding 
> org.apache.hadoop.yarn.webapp.GenericExceptionHandler to 
> GuiceManagedComponentProvider with the scope "Singleton"
> 2021-03-04T10:45:47.4240091Z Mar 04, 2021 10:45:47 AM 
> com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory 
> getComponentProvider
> 2021-03-04T10:45:47.4241009Z INFO: Binding 
> org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices to 
> GuiceManagedComponentProvider with the scope "Singleton"
> 2021-03-04T10:47:53.6102867Z [ERROR] Tests run: 3, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 132.302 s <<< FAILURE! - in 
> org.apache.flink.yarn.YARNHighAvailabilityITCase
> 2021-03-04T10:47:53.6103745Z [ERROR] 
> testClusterClientRetrieval(org.apache.flink.yarn.YARNHighAvailabilityITCase)  
> Time elapsed: 15.906 s  <<< FAILURE!
> 2021-03-04T10:47:53.6104784Z java.lang.AssertionError: There is at least one 
> application on the cluster that is not finished.[App 
> application_1614854744820_0003 is in state RUNNING.]
> 2021-03-04T10:47:53.6106075Z  at org.junit.Assert.fail(Assert.java:88)
> 2021-03-04T10:47:53.6108977Z  at 
> org.apache.flink.yarn.YarnTestBase$CleanupYarnApplication.close(YarnTestBase.java:322)
> 2021-03-04T10:47:53.6109784Z  at 
> org.apache.flink.yarn.YarnTestBase.runTest(YarnTestBase.java:286)
> 2021-03-04T10:47:53.6110493Z  at 
> org.apache.flink.yarn.YARNHighAvailabilityITCase.testClusterClientRetrieval(YARNHighAvailabilityITCase.java:219)
> 2021-03-04T10:47:53.6111446Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2021-03-04T10:47:53.6111871Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2021-03-04T10:47:53.6112360Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2021-03-04T10:47:53.6112784Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2021-03-04T10:47:53.6113210Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2021-03-04T10:47:53.6114001Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2021-03-04T10:47:53.6114796Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2021-03-04T10:47:53.6115388Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2021-03-04T10:47:53.6116123Z  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2021-03-04T10:47:53.6116995Z  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2021-03-04T10:47:53.6117810Z  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2021-03-04T10:47:53.6118621Z  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2021-03-04T10:47:53.6119311Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2021-03-04T10:47:53.6119840Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2021-03-04T10:47:53.6120279Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2021-03-04T10:47:53.6120739Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2021-03-04T10:47:53.6121173Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2021-03-04T10:47:53.6121692Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2021-03-04T10:47:53.6122128Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2021-03-04T10:47:53.6122594Z  at 
> 

[jira] [Updated] (FLINK-24345) Translate "File Systems" page of "Internals" into Chinese

2022-07-15 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-24345:
---
  Labels: auto-deprioritized-major pull-request-available  (was: 
pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Translate "File Systems" page of "Internals" into Chinese
> -
>
> Key: FLINK-24345
> URL: https://issues.apache.org/jira/browse/FLINK-24345
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Reporter: Liebing Yu
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
>
> The page url is 
> https://nightlies.apache.org/flink/flink-docs-master/docs/internals/filesystems/
> The markdown file is located in 
> flink/docs/content/docs/internals/filesystems.md



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

