[GitHub] [flink] flinkbot edited a comment on pull request #13812: [FLINK-19674][docs-zh] Translate "Docker" of "Clusters & Deployment" page into Chinese

2020-10-28 Thread GitBox


flinkbot edited a comment on pull request #13812:
URL: https://github.com/apache/flink/pull/13812#issuecomment-717687160


   
   ## CI report:
   
   * 4f66617c4540a4987a5852cf0168f7e3a552c91a Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8457)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13763: [FLINK-19779][avro] Remove the record_ field name prefix for Confluen…

2020-10-28 Thread GitBox


flinkbot edited a comment on pull request #13763:
URL: https://github.com/apache/flink/pull/13763#issuecomment-715195599


   
   ## CI report:
   
   * ba752274c9926115f65f5ecbef55b71b0b71cfa2 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8440)
 
   * f004220668e20dcd9860026b69566868d473db33 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8455)
 
   
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #13760: [FLINK-19627][table-runtime] Introduce multiple input operator for batch

2020-10-28 Thread GitBox


flinkbot edited a comment on pull request #13760:
URL: https://github.com/apache/flink/pull/13760#issuecomment-715078238


   
   ## CI report:
   
   * b1353f7422a706cd50502d62b988b06154d33ffc Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8439)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8407)
 
   
   
   







[GitHub] [flink] flinkbot commented on pull request #13814: [FLINK-19839][e2e] Properly forward test exit code to CI system

2020-10-28 Thread GitBox


flinkbot commented on pull request #13814:
URL: https://github.com/apache/flink/pull/13814#issuecomment-717724545


   
   ## CI report:
   
   * 4ff8c35475568fee2cd77ba6268e119ed805ec95 UNKNOWN
   
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #13805: [FLINK-19569][table] Upgrade ICU4J to 67.1

2020-10-28 Thread GitBox


flinkbot edited a comment on pull request #13805:
URL: https://github.com/apache/flink/pull/13805#issuecomment-717226449


   
   ## CI report:
   
   * 6ee7fc39e8a9c85c8b45c3c5a3b89de8f84416c7 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8398)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8414)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8441)
 
   
   
   







[GitHub] [flink] JingsongLi commented on pull request #13605: [FLINK-19599][table] Introduce Filesystem format factories to integrate new FileSource to table

2020-10-28 Thread GitBox


JingsongLi commented on pull request #13605:
URL: https://github.com/apache/flink/pull/13605#issuecomment-717723912


   Thanks a lot for your review! @wuchong 
   Trying to wrap `SerializationSchema` and `DeserializationSchema` for CSV and 
JSON is a very interesting idea.







[GitHub] [flink] flinkbot edited a comment on pull request #13300: [FLINK-19077][table-runtime] Import process time temporal join operator.

2020-10-28 Thread GitBox


flinkbot edited a comment on pull request #13300:
URL: https://github.com/apache/flink/pull/13300#issuecomment-684957567


   
   ## CI report:
   
   *  Unknown: [CANCELED](TBD) 
   * 6bf5dc2e71b14086d5f908cc300060595adc46ea Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8452)
 
   
   
   







[GitHub] [flink] godfreyhe merged pull request #13760: [FLINK-19627][table-runtime] Introduce multiple input operator for batch

2020-10-28 Thread GitBox


godfreyhe merged pull request #13760:
URL: https://github.com/apache/flink/pull/13760


   







[GitHub] [flink] shuiqiangchen commented on pull request #13756: [FLINK-19770][python][test] Changed the PythonProgramOptionTest to be an ITCase.

2020-10-28 Thread GitBox


shuiqiangchen commented on pull request #13756:
URL: https://github.com/apache/flink/pull/13756#issuecomment-717730700


   @anonymouscodeholic Thank you for pointing out the mistakes; I have revised 
them in the latest commit.







[GitHub] [flink] tzulitai commented on pull request #13773: [backport-1.11] [FLINK-19748] Iterating key groups in raw keyed stream on restore fails if some key groups weren't written

2020-10-28 Thread GitBox


tzulitai commented on pull request #13773:
URL: https://github.com/apache/flink/pull/13773#issuecomment-717730004


   @Antti-Kaikkonen thank you for trying the branch out.
   
   I think the exceptions you encountered are expected in the experiments 
you've tried out.
   
   Could you adjust your experiments as follows, and then report back?
   
   Try out your application `FlinkStatefunCountTo1M` with a new build of 
StateFun that includes the changes in 
https://github.com/apache/flink-statefun/pull/168.
   
   You should be able to just pull that branch, do a clean build (`mvn clean 
install -DskipTests`), and then change the StateFun dependency in your 
application to `2.3-SNAPSHOT`.
   
   ---
   
   Let me briefly explain our release plans here to address the issue you 
reported, and why the above adjustment makes sense:
   
   1. With the StateFun changes in 
https://github.com/apache/flink-statefun/pull/168 (and not including ANY Flink 
changes), we're expecting that restoring from checkpoints / savepoints should 
work properly now for all checkpoints / savepoints taken with a new version 
that includes https://github.com/apache/flink-statefun/pull/168. This would 
already address FLINK-19692, and we're planning to push out a StateFun hotfix 
release immediately to unblock you and other users that may be encountering the 
same issue.
   
   2. What https://github.com/apache/flink-statefun/pull/168 doesn't yet solve 
is the ability to safely restore / upgrade from a savepoint taken with StateFun 
versions <= 2.2.0. Enabling that requires this PR and #13761 to be fixed in 
Flink, a new Flink release, and ultimately yet another follow-up StateFun 
hotfix release that uses the new Flink version. That is a lengthier process, 
with an estimated additional 3-4 weeks.
   
   ---
   
   TL;DR: It would be tremendously helpful if you could re-do your experiment 
with a new StateFun build including 
https://github.com/apache/flink-statefun/pull/168 alone. Please do let me know 
the results!







[GitHub] [flink] flinkbot edited a comment on pull request #13458: FLINK-18851 [runtime] Add checkpoint type to checkpoint history entries in Web UI

2020-10-28 Thread GitBox


flinkbot edited a comment on pull request #13458:
URL: https://github.com/apache/flink/pull/13458#issuecomment-697041219


   
   ## CI report:
   
   * 7311b0d12d19a645391ea0359a9aa6318806363b UNKNOWN
   * 3676c7e7a4729c494e12e82f4df4f2617ef29b5d Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8462)
 
   
   
   







[GitHub] [flink] wangyang0918 commented on a change in pull request #13644: [FLINK-19542][k8s]Implement LeaderElectionService and LeaderRetrievalService based on Kubernetes API

2020-10-28 Thread GitBox


wangyang0918 commented on a change in pull request #13644:
URL: https://github.com/apache/flink/pull/13644#discussion_r51326



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/leaderretrieval/LeaderRetrievalDriver.java
##
@@ -0,0 +1,31 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.leaderretrieval;
+
+/**
+ * A {@link LeaderRetrievalDriver} is responsible for retrieving the current 
leader that has been elected by the
+ * {@link org.apache.flink.runtime.leaderelection.LeaderElectionDriver}.
+ */
+public interface LeaderRetrievalDriver extends AutoCloseable {
+
+   /**
+* Close the services used for leader retrieval.
+*/
+   void close() throws Exception;

Review comment:
   I will make `LeaderRetrievalDriver` and `LeaderElectionDriver` not 
extend from `AutoCloseable`.









[GitHub] [flink] tzulitai edited a comment on pull request #13773: [backport-1.11] [FLINK-19748] Iterating key groups in raw keyed stream on restore fails if some key groups weren't written

2020-10-28 Thread GitBox


tzulitai edited a comment on pull request #13773:
URL: https://github.com/apache/flink/pull/13773#issuecomment-717730004


   @Antti-Kaikkonen thank you for trying the branch out.
   
   I think the exceptions you encountered are expected in the experiments 
you've tried out.
   
   Could you adjust your experiments as follows, and then report back?
   
   Try out your application `FlinkStatefunCountTo1M` with a new build of 
StateFun that includes the changes in 
https://github.com/apache/flink-statefun/pull/168.
   
   You should be able to just pull that branch, do a clean build (`mvn clean 
install -DskipTests`), and then change the StateFun dependency in your 
application to `2.3-SNAPSHOT`.
   
   You should create a savepoint, and try to restore as you did in your 
previous test.
   Note that you should not need to apply any Flink fixes for this.
   
   ---
   
   Let me briefly explain our release plans here to address the issue you 
reported, and why the above adjustment makes sense:
   
   1. With the StateFun changes in 
https://github.com/apache/flink-statefun/pull/168 (and not including ANY Flink 
changes), we're expecting that restoring from checkpoints / savepoints should 
work properly now for all checkpoints / savepoints taken with a new version 
that includes https://github.com/apache/flink-statefun/pull/168. This would 
already address FLINK-19692, and we're planning to push out a StateFun hotfix 
release immediately to unblock you and other users that may be encountering the 
same issue.
   
   2. What https://github.com/apache/flink-statefun/pull/168 doesn't yet solve 
is the ability to safely restore / upgrade from a savepoint taken with StateFun 
versions <= 2.2.0. This does not affect you if you don't have StateFun 
applications running in production yet. Enabling this requires this PR and 
#13761 to be fixed in Flink, a new Flink release, and ultimately yet another 
follow-up StateFun hotfix release that uses the new Flink version. That is a 
lengthier process, with an estimated additional 3-4 weeks, so we decided to go 
ahead with the above option first to move faster.
   
   ---
   
   TL;DR: It would be tremendously helpful if you could re-do your experiment 
with a new StateFun build including 
https://github.com/apache/flink-statefun/pull/168 alone. Please do let me know 
the results!







[GitHub] [flink] flinkbot edited a comment on pull request #13744: [FLINK-19766][table-runtime] Introduce File streaming compaction operators

2020-10-28 Thread GitBox


flinkbot edited a comment on pull request #13744:
URL: https://github.com/apache/flink/pull/13744#issuecomment-714369975


   
   ## CI report:
   
   * 164c8b5b7750ed567813d70f160ef9f064d0a3b0 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8458)
 
   * 322bc2c3c11a0f5735db4a475864f894f38866da UNKNOWN
   
   
   







[jira] [Assigned] (FLINK-18811) if a disk is damaged, taskmanager should choose another disk for temp dir , rather than throw an IOException, which causes flink job restart over and over again

2020-10-28 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski reassigned FLINK-18811:
--

Assignee: Kai Chen

> if a disk is damaged, taskmanager should choose another disk for temp dir , 
> rather than throw an IOException, which causes flink job restart over and 
> over again
> 
>
> Key: FLINK-18811
> URL: https://issues.apache.org/jira/browse/FLINK-18811
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
> Environment: flink-1.10
>Reporter: Kai Chen
>Assignee: Kai Chen
>Priority: Major
>  Labels: pull-request-available
> Attachments: flink_disk_error.png
>
>
> I met this Exception when a hard disk was damaged:
> !flink_disk_error.png!
> I checked the code and found that flink will create a temp file  when Record 
> length > 5 MB:
>  
> {code:java}
> // SpillingAdaptiveSpanningRecordDeserializer.java
> if (nextRecordLength > THRESHOLD_FOR_SPILLING) {
>// create a spilling channel and put the data there
>this.spillingChannel = createSpillingChannel();
>ByteBuffer toWrite = partial.segment.wrap(partial.position, numBytesChunk);
>FileUtils.writeCompletely(this.spillingChannel, toWrite);
> }
> {code}
> The tempDir is randomly picked from all `tempDirs`. In YARN mode, one 
> `tempDir` usually corresponds to one hard disk.
>  
> In my opinion, if a hard disk is damaged, the taskmanager should pick another 
> disk (tmpDir) for the spilling channel, rather than throw an IOException, 
> which causes the flink job to restart over and over again.
> We could just change "SpillingAdaptiveSpanningRecordDeserializer" like 
> this:
> {code:java}
> // SpillingAdaptiveSpanningRecordDeserializer.java
> private FileChannel createSpillingChannel() throws IOException {
>if (spillFile != null) {
>   throw new IllegalStateException("Spilling file already exists.");
>}
>// try to find a unique file name for the spilling channel
>int maxAttempts = 10;
>String[] tempDirs = this.tempDirs;
>for (int attempt = 0; attempt < maxAttempts; attempt++) {
>   int dirIndex = rnd.nextInt(tempDirs.length);
>   String directory = tempDirs[dirIndex];
>   spillFile = new File(directory, randomString(rnd) + ".inputchannel");
>   try {
>  if (spillFile.createNewFile()) {
> return new RandomAccessFile(spillFile, "rw").getChannel();
>  }
>   } catch (IOException e) {
>  // if there is no tempDir left to try
>  if(tempDirs.length <= 1) {
> throw e;
>  }
>  LOG.warn("Caught an IOException when creating spill file: " + 
> directory + ". Attempt " + attempt, e);
>  tempDirs = (String[])ArrayUtils.remove(tempDirs,dirIndex);
>   }
>}
>// no directory worked within the attempt budget
>throw new IOException("Could not create spilling channel in " + maxAttempts 
> + " attempts.");
> }
> {code}
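The retry pattern proposed above can be sketched with plain JDK I/O, outside of Flink. This is a minimal, hypothetical version (the class `SpillDirPicker` and its method names are mine, not Flink's): a failing directory is dropped from the candidate set and another one is tried, instead of failing the whole job on the first IOException.

```java
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Random;

public class SpillDirPicker {

    private static final int MAX_ATTEMPTS = 10;

    /**
     * Tries to create a spill file in one of the given temp directories.
     * When one directory fails (e.g. its disk is damaged), it is removed
     * from the candidate set and another directory is tried, instead of
     * failing outright on the first IOException.
     */
    public static File createSpillFile(String[] tempDirs, Random rnd) throws IOException {
        List<String> candidates = new ArrayList<>(Arrays.asList(tempDirs));
        IOException lastError = null;
        for (int attempt = 0; attempt < MAX_ATTEMPTS && !candidates.isEmpty(); attempt++) {
            int dirIndex = rnd.nextInt(candidates.size());
            File dir = new File(candidates.get(dirIndex));
            try {
                if (!dir.isDirectory()) {
                    throw new IOException("Not a usable directory: " + dir);
                }
                // creation fails with an IOException on an unusable disk
                return File.createTempFile("spill-", ".inputchannel", dir);
            } catch (IOException e) {
                lastError = e;
                candidates.remove(dirIndex); // drop the broken dir, retry elsewhere
            }
        }
        throw lastError != null ? lastError : new IOException("No temp dirs configured.");
    }

    public static void main(String[] args) throws IOException {
        // one bogus directory plus the real system temp dir:
        // the bogus one is dropped and the spill file lands in the real one
        String[] dirs = {"/nonexistent-disk", System.getProperty("java.io.tmpdir")};
        File spill = createSpillFile(dirs, new Random());
        System.out.println("created: " + spill.exists());
        spill.deleteOnExit();
    }
}
```

Unlike the snippet above, this sketch remembers the last IOException so the caller still sees the underlying cause when every directory has failed.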



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-18811) if a disk is damaged, taskmanager should choose another disk for temp dir , rather than throw an IOException, which causes flink job restart over and over again

2020-10-28 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski closed FLINK-18811.
--
Fix Version/s: 1.12.0
   Resolution: Fixed

merged commit e38716f into apache:master

> if a disk is damaged, taskmanager should choose another disk for temp dir , 
> rather than throw an IOException, which causes flink job restart over and 
> over again
> 
>
> Key: FLINK-18811
> URL: https://issues.apache.org/jira/browse/FLINK-18811
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
> Environment: flink-1.10
>Reporter: Kai Chen
>Assignee: Kai Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
> Attachments: flink_disk_error.png
>
>
> I met this Exception when a hard disk was damaged:
> !flink_disk_error.png!
> I checked the code and found that flink will create a temp file  when Record 
> length > 5 MB:
>  
> {code:java}
> // SpillingAdaptiveSpanningRecordDeserializer.java
> if (nextRecordLength > THRESHOLD_FOR_SPILLING) {
>// create a spilling channel and put the data there
>this.spillingChannel = createSpillingChannel();
>ByteBuffer toWrite = partial.segment.wrap(partial.position, numBytesChunk);
>FileUtils.writeCompletely(this.spillingChannel, toWrite);
> }
> {code}
> The tempDir is randomly picked from all `tempDirs`. In YARN mode, one 
> `tempDir` usually corresponds to one hard disk.
>  
> In my opinion, if a hard disk is damaged, the taskmanager should pick another 
> disk (tmpDir) for the spilling channel, rather than throw an IOException, 
> which causes the flink job to restart over and over again.
> We could just change "SpillingAdaptiveSpanningRecordDeserializer" like 
> this:
> {code:java}
> // SpillingAdaptiveSpanningRecordDeserializer.java
> private FileChannel createSpillingChannel() throws IOException {
>if (spillFile != null) {
>   throw new IllegalStateException("Spilling file already exists.");
>}
>// try to find a unique file name for the spilling channel
>int maxAttempts = 10;
>String[] tempDirs = this.tempDirs;
>for (int attempt = 0; attempt < maxAttempts; attempt++) {
>   int dirIndex = rnd.nextInt(tempDirs.length);
>   String directory = tempDirs[dirIndex];
>   spillFile = new File(directory, randomString(rnd) + ".inputchannel");
>   try {
>  if (spillFile.createNewFile()) {
> return new RandomAccessFile(spillFile, "rw").getChannel();
>  }
>   } catch (IOException e) {
>  // if there is no tempDir left to try
>  if(tempDirs.length <= 1) {
> throw e;
>  }
>  LOG.warn("Caught an IOException when creating spill file: " + 
> directory + ". Attempt " + attempt, e);
>  tempDirs = (String[])ArrayUtils.remove(tempDirs,dirIndex);
>   }
>}
>// no directory worked within the attempt budget
>throw new IOException("Could not create spilling channel in " + maxAttempts 
> + " attempts.");
> }
> {code}





[GitHub] [flink] pnowojski merged pull request #13810: [FLINK-18811][network] Pick another tmpDir if an IOException occurs w…

2020-10-28 Thread GitBox


pnowojski merged pull request #13810:
URL: https://github.com/apache/flink/pull/13810


   







[GitHub] [flink] wangxlong opened a new pull request #13816: [BP-1.11][FLINK-19587][table-planner-blink] Fix error result when casting bina…

2020-10-28 Thread GitBox


wangxlong opened a new pull request #13816:
URL: https://github.com/apache/flink/pull/13816


   ## What is the purpose of the change
   
   This is a backport of FLINK-19587
   cherry picked from commit d9b0ac97ee4675aebdab1592af663b95fdc5051b







[GitHub] [flink] yuchuanchen commented on pull request #13718: [FLINK-18811] Pick another tmpDir if an IOException occurs when creating spill file

2020-10-28 Thread GitBox


yuchuanchen commented on pull request #13718:
URL: https://github.com/apache/flink/pull/13718#issuecomment-717758921


   Thanks @pnowojski







[GitHub] [flink] rmetzger commented on pull request #13796: [FLINK-19810][CI] Automatically run a basic NOTICE file check on CI

2020-10-28 Thread GitBox


rmetzger commented on pull request #13796:
URL: https://github.com/apache/flink/pull/13796#issuecomment-717761562


   Thanks a lot for taking a look. I believe the statement by the tool is 
correct: The notice file contains the following dependency: 
`com.apache.commons:commons-compress:1.20`, but the expected dependency is 
`org.apache.commons:commons-compress:1.20`. Notice the `com.` vs `org.`
   
   Do you agree with how the tool is integrated into the build process in 
principle? If so, I will check and fix all dependency issues found by the tool, 
so that we can merge it.







[jira] [Created] (FLINK-19846) Grammar mistakes in annotations and log

2020-10-28 Thread zhouchao (Jira)
zhouchao created FLINK-19846:


 Summary: Grammar mistakes in annotations and log
 Key: FLINK-19846
 URL: https://issues.apache.org/jira/browse/FLINK-19846
 Project: Flink
  Issue Type: Wish
Affects Versions: 1.11.2
Reporter: zhouchao
 Fix For: 1.12.0


There exist some grammar mistakes in annotations and documents. The mistakes 
include, but are not limited to, the following examples:
 * "a entry" in WebLogAnalysis.java [246:34] and adm-zip.js [291:33] (which 
should be "an entry")
 * "a input" in JobGraphGenerator.java [1125:69] etc. (which should be "an 
input")
 * "a intersection"
 * "an user-*" in Table.java etc. (which should be "a user-*")

Using global search in IntelliJ IDEA, more mistakes like these can be found.

 





[GitHub] [flink] flinkbot edited a comment on pull request #13756: [FLINK-19770][python][test] Changed the PythonProgramOptionTest to be an ITCase.

2020-10-28 Thread GitBox


flinkbot edited a comment on pull request #13756:
URL: https://github.com/apache/flink/pull/13756#issuecomment-714895505


   
   ## CI report:
   
   * 9c215d1339aebe0b7249dc6e06b5a5d5d74c25c5 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8368)
 
   * 37447c7adbd68701dbce2f515ff69af119be5a5b UNKNOWN
   
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #13744: [FLINK-19766][table-runtime] Introduce File streaming compaction operators

2020-10-28 Thread GitBox


flinkbot edited a comment on pull request #13744:
URL: https://github.com/apache/flink/pull/13744#issuecomment-714369975


   
   ## CI report:
   
   * 164c8b5b7750ed567813d70f160ef9f064d0a3b0 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8458)
 
   * 322bc2c3c11a0f5735db4a475864f894f38866da Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8466)
 
   
   
   







[jira] [Comment Edited] (FLINK-19847) Can we create a fast support on the Nested table join?

2020-10-28 Thread xiaogang zhou (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17222033#comment-17222033
 ] 

xiaogang zhou edited comment on FLINK-19847 at 10/28/20, 8:20 AM:
--

In our situation, we use an async lookup for Redis and some internal RPC 
services. We combine multi-key lookups to save network cost, so the DDL uses 
the array type. 

When I try to join on an array, an exception occurs, and I noticed the TODO. It 
looks like something needs to be done in 

extractConstantField? 


was (Author: zhoujira86):
In our situation, we use a Asynclookup for the redis and some inner rpc 
service. we combine multikey lookup to save the network cost. so the DDL uses 
the array type. 

where i try to join on array, some exception happens. and I notice the todo. 
looks like something need to be done in the 

extractConstantField? 

> Can we create a fast support on the Nested table join?
> --
>
> Key: FLINK-19847
> URL: https://issues.apache.org/jira/browse/FLINK-19847
> Project: Flink
>  Issue Type: Wish
>  Components: API / DataStream
>Affects Versions: 1.11.1
>Reporter: xiaogang zhou
>Priority: Major
>
> In CommonLookupJoin, one TODO is 
> support nested lookup keys in the future,
> // currently we only support top-level lookup keys
>  
> can we create a fast support on the Array join? thx





[jira] [Commented] (FLINK-19847) Can we create a fast support on the Nested table join?

2020-10-28 Thread xiaogang zhou (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17222033#comment-17222033
 ] 

xiaogang zhou commented on FLINK-19847:
---

In our situation, we use an async lookup for Redis and some internal RPC 
services. We combine multi-key lookups to save network cost, so the DDL uses 
the array type. 

When I try to join on an array, an exception occurs, and I noticed the TODO. It 
looks like something needs to be done in 

extractConstantField? 

> Can we create a fast support on the Nested table join?
> --
>
> Key: FLINK-19847
> URL: https://issues.apache.org/jira/browse/FLINK-19847
> Project: Flink
>  Issue Type: Wish
>  Components: API / DataStream
>Affects Versions: 1.11.1
>Reporter: xiaogang zhou
>Priority: Major
>
> In CommonLookupJoin, one TODO is 
> support nested lookup keys in the future,
> // currently we only support top-level lookup keys
>  
> can we create a fast support on the Array join? thx





[GitHub] [flink] dawidwys commented on a change in pull request #13763: [FLINK-19779][avro] Remove the record_ field name prefix for Confluen…

2020-10-28 Thread GitBox


dawidwys commented on a change in pull request #13763:
URL: https://github.com/apache/flink/pull/13763#discussion_r513252724



##
File path: 
flink-formats/flink-avro/src/main/java/org/apache/flink/formats/avro/typeutils/AvroSchemaConverter.java
##
@@ -297,17 +299,22 @@ private static DataType convertToDataType(Schema schema) {
 * @return Avro's {@link Schema} matching this logical type.
 */
public static Schema convertToSchema(LogicalType logicalType) {
-   return convertToSchema(logicalType, "record");
+   return convertToSchema(logicalType, "record", true);
}
 
/**
 * Converts Flink SQL {@link LogicalType} (can be nested) into an Avro 
schema.
 *
 * @param logicalType logical type
 * @param rowName the record name
+* @param top whether it is parsing the root record,
+*if it is, the logical type nullability would be 
ignored
 * @return Avro's {@link Schema} matching this logical type.
 */
-   public static Schema convertToSchema(LogicalType logicalType, String 
rowName) {
+   public static Schema convertToSchema(
+   LogicalType logicalType,
+   String rowName,
+   boolean top) {

Review comment:
   > It is not true, a row (null, null) is nullable true. And i don't think 
it makes sense to change the planner behavior in general in order to fix a 
specific use case.
   
   Excuse me, but I wholeheartedly disagree with your statement. null =/= 
(null, null). (null, null) is still `NOT NULL`. A whole row in a Table can not 
be null. Only particular columns can be null. Therefore the top level row of a 
Table is always `NOT NULL`.
   
   I am not suggesting changing planner behaviour for a particular use case. 
The planner should always produce `NOT NULL` type for a top level row of a 
Table. If it doesn't, it is a bug.
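To make the distinction concrete, here is a tiny plain-Java sketch (the `Object[]` row model and the `rowIsNotNull` helper are illustrative stand-ins, not Flink's actual types): a row whose columns are all null is still a non-null row, which is why the top-level row type of a Table can be treated as `NOT NULL`.

```java
import java.util.Arrays;
import java.util.Objects;

public class TopLevelRowNullability {

    /** True if the row container itself exists, regardless of its columns. */
    static boolean rowIsNotNull(Object[] row) {
        return row != null;
    }

    public static void main(String[] args) {
        // A row (null, null): every column is null ...
        Object[] allNullColumns = {null, null};
        System.out.println(Arrays.stream(allNullColumns).allMatch(Objects::isNull)); // true

        // ... yet the row itself is NOT NULL.
        System.out.println(rowIsNotNull(allNullColumns)); // true
    }
}
```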





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] dawidwys commented on a change in pull request #13763: [FLINK-19779][avro] Remove the record_ field name prefix for Confluen…

2020-10-28 Thread GitBox


dawidwys commented on a change in pull request #13763:
URL: https://github.com/apache/flink/pull/13763#discussion_r513253682



##
File path: 
flink-formats/flink-avro/src/main/java/org/apache/flink/formats/avro/typeutils/AvroSchemaConverter.java
##
@@ -297,17 +299,22 @@ private static DataType convertToDataType(Schema schema) {
 * @return Avro's {@link Schema} matching this logical type.
 */
public static Schema convertToSchema(LogicalType logicalType) {
-   return convertToSchema(logicalType, "record");
+   return convertToSchema(logicalType, "record", true);
}
 
/**
 * Converts Flink SQL {@link LogicalType} (can be nested) into an Avro 
schema.
 *
 * @param logicalType logical type
 * @param rowName the record name
+* @param top whether it is parsing the root record,
+*if it is, the logical type nullability would be 
ignored
 * @return Avro's {@link Schema} matching this logical type.
 */
-   public static Schema convertToSchema(LogicalType logicalType, String 
rowName) {
+   public static Schema convertToSchema(
+   LogicalType logicalType,
+   String rowName,
+   boolean top) {

Review comment:
   > I don't think we should let each invoker to decide whether to make the 
data type not null, because in current codebase, we should always do that, make 
the decision everyone is error-prone and hard to maintain.
   
   I agree that making the same decision over and over again at multiple 
locations is error-prone and hard to maintain, and that's exactly what I want to avoid.









[GitHub] [flink] liuyongvs opened a new pull request #13818: [FLINK-19848][docs] fix docs bug of build flink.

2020-10-28 Thread GitBox


liuyongvs opened a new pull request #13818:
URL: https://github.com/apache/flink/pull/13818


   
   
   ## What is the purpose of the change
   
   *Fix a bug in the "Building Flink from Source" docs.*
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (no )
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? ( no)
 - If yes, how is the feature documented? (docs)
   







[jira] [Updated] (FLINK-19848) flink docs of Building Flink from Source bug

2020-10-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-19848:
---
Labels: pull-request-available  (was: )

> flink docs of Building Flink from Source bug
> 
>
> Key: FLINK-19848
> URL: https://issues.apache.org/jira/browse/FLINK-19848
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.11.0
>Reporter: jackylau
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> To speed up the build you can skip tests, QA plugins, and JavaDocs:
>  
> {{mvn clean install -DskipTests -Dfast}}
>  
> {{mvn clean install -DskipTests -Dscala-2.12}}
> {{fast}} and {{scala-2.12}} are profiles, not properties





[jira] [Updated] (FLINK-19834) Make the TestSink reusable in all the sink related tests.

2020-10-28 Thread Kostas Kloudas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas updated FLINK-19834:
---
Component/s: Tests

> Make the TestSink reusable in all the sink related tests.
> -
>
> Key: FLINK-19834
> URL: https://issues.apache.org/jira/browse/FLINK-19834
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream, Tests
>Affects Versions: 1.12.0
>Reporter: Guowei Ma
>Assignee: Guowei Ma
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>






[jira] [Updated] (FLINK-19834) Make the TestSink reusable in all the sink related tests.

2020-10-28 Thread Kostas Kloudas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas updated FLINK-19834:
---
Affects Version/s: 1.12.0

> Make the TestSink reusable in all the sink related tests.
> -
>
> Key: FLINK-19834
> URL: https://issues.apache.org/jira/browse/FLINK-19834
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream
>Affects Versions: 1.12.0
>Reporter: Guowei Ma
>Assignee: Guowei Ma
>Priority: Major
>  Labels: pull-request-available
>






[jira] [Updated] (FLINK-19834) Make the TestSink reusable in all the sink related tests.

2020-10-28 Thread Kostas Kloudas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas updated FLINK-19834:
---
Fix Version/s: 1.12.0

> Make the TestSink reusable in all the sink related tests.
> -
>
> Key: FLINK-19834
> URL: https://issues.apache.org/jira/browse/FLINK-19834
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream
>Affects Versions: 1.12.0
>Reporter: Guowei Ma
>Assignee: Guowei Ma
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>






[jira] [Assigned] (FLINK-19834) Make the TestSink reusable in all the sink related tests.

2020-10-28 Thread Kostas Kloudas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas reassigned FLINK-19834:
--

Assignee: Guowei Ma

> Make the TestSink reusable in all the sink related tests.
> -
>
> Key: FLINK-19834
> URL: https://issues.apache.org/jira/browse/FLINK-19834
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream
>Reporter: Guowei Ma
>Assignee: Guowei Ma
>Priority: Major
>  Labels: pull-request-available
>






[GitHub] [flink] gm7y8 commented on pull request #13458: FLINK-18851 [runtime] Add checkpoint type to checkpoint history entries in Web UI

2020-10-28 Thread GitBox


gm7y8 commented on pull request #13458:
URL: https://github.com/apache/flink/pull/13458#issuecomment-717796453


   > Thanks @gm7y8 . Additionally, I verified manually that the changes result 
in the expected behavior. Let's wait for @vthinkxie to get back to us to review 
the code itself.
   
   @XComp @vthinkxie I just verified the fix after making the suggested 
enhancements. It is working as expected.







[GitHub] [flink] zhuxiaoshang opened a new pull request #13819: [hotfix][docs]fix some mistakes in StreamingFileSink

2020-10-28 Thread GitBox


zhuxiaoshang opened a new pull request #13819:
URL: https://github.com/apache/flink/pull/13819


   ## What is the purpose of the change
   
   *fix some mistakes in StreamingFileSink*
   
   
   ## Brief change log
 - *fix some mistakes in StreamingFileSink*
   
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (no)
   







[GitHub] [flink] shouweikun commented on pull request #13789: [FLINK-19727][table-runtime] Implement ParallelismProvider for sink i…

2020-10-28 Thread GitBox


shouweikun commented on pull request #13789:
URL: https://github.com/apache/flink/pull/13789#issuecomment-717800065


   @flinkbot run azure







[jira] [Updated] (FLINK-19847) Can we create a fast support on the Nested table join?

2020-10-28 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-19847:
-
Component/s: (was: API / DataStream)
 Table SQL / Runtime
 Table SQL / Planner

> Can we create a fast support on the Nested table join?
> --
>
> Key: FLINK-19847
> URL: https://issues.apache.org/jira/browse/FLINK-19847
> Project: Flink
>  Issue Type: Wish
>  Components: Table SQL / Planner, Table SQL / Runtime
>Affects Versions: 1.11.1
>Reporter: xiaogang zhou
>Priority: Major
>
> In CommonLookupJoin, one TODO is 
> support nested lookup keys in the future,
> // currently we only support top-level lookup keys
>  
> can we create a fast support on the Array join? thx





[jira] [Created] (FLINK-19851) flink sql client connector type jdbc exception

2020-10-28 Thread hulingchan (Jira)
hulingchan created FLINK-19851:
--

 Summary: flink sql client connector type jdbc exception
 Key: FLINK-19851
 URL: https://issues.apache.org/jira/browse/FLINK-19851
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Client
Affects Versions: 1.11.1
 Environment: hardware: Mac Pro

software: MacOS

 
Reporter: hulingchan


When I try to use the SQL client with JDBC as the source, I run into a problem.

*run command*:

./sql-client.sh embedded -e ../conf/sql-client-demo.yaml

*sql-client-demo.yaml content*:
{code:java}
tables:
  - name: mysql_test
type: source-table
connector:
type: jdbc
property-version: 1
url: 
jdbc:mysql://127.0.0.1:3306/test?useUnicode=true=utf8=true=Asia/Shanghai
table: book_info
driver: com.mysql.jdbc.Driver
username: lloo
password: dsfsdf
{code}
*log below*:
{code:java}
No default environment specified.
 Searching for 'flink-1.11.1/conf/sql-client-defaults.yaml'...found.
 Reading default environment from: 
file:flink-1.11.1/conf/sql-client-defaults.yaml
 Reading session environment from: 
file:flink-1.11.1/bin/../conf/sql-client-demo.yaml
Exception in thread "main" org.apache.flink.table.client.SqlClientException: 
Unexpected exception. This is a bug. Please consider filing an issue.
 at org.apache.flink.table.client.SqlClient.main(SqlClient.java:213)
 Caused by: org.apache.flink.table.client.gateway.SqlExecutionException: Could 
not create execution context.
 at 
org.apache.flink.table.client.gateway.local.ExecutionContext$Builder.build(ExecutionContext.java:870)
 at 
org.apache.flink.table.client.gateway.local.LocalExecutor.openSession(LocalExecutor.java:227)
 at org.apache.flink.table.client.SqlClient.start(SqlClient.java:108)
 at org.apache.flink.table.client.SqlClient.main(SqlClient.java:201)
 Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException: Could 
not find a suitable table factory for 
'org.apache.flink.table.factories.TableSourceFactory' in
 the classpath.
Reason: Required context properties mismatch.
The following properties are requested:
 connector.driver=com.mysql.jdbc.Driver
 connector.password=123456
 connector.property-version=1
 connector.table=durotar_wx_user_info
 connector.type=jdbc
 
connector.url=jdbc:mysql://qa.vm.com:3306/zh_portal?useUnicode=true=utf8=true=Asia/Shanghai
 connector.username=root
The following factories have been considered:
 org.apache.flink.table.sources.CsvBatchTableSourceFactory
 org.apache.flink.table.sources.CsvAppendTableSourceFactory
 org.apache.flink.table.filesystem.FileSystemTableFactory
 at 
org.apache.flink.table.factories.TableFactoryService.filterByContext(TableFactoryService.java:322)
 at 
org.apache.flink.table.factories.TableFactoryService.filter(TableFactoryService.java:190)
 at 
org.apache.flink.table.factories.TableFactoryService.findSingleInternal(TableFactoryService.java:143)
 at 
org.apache.flink.table.factories.TableFactoryService.find(TableFactoryService.java:113)
 at 
org.apache.flink.table.client.gateway.local.ExecutionContext.createTableSource(ExecutionContext.java:384)
 at 
org.apache.flink.table.client.gateway.local.ExecutionContext.lambda$initializeCatalogs$7(ExecutionContext.java:638)
 at java.util.LinkedHashMap.forEach(LinkedHashMap.java:684)
 at 
org.apache.flink.table.client.gateway.local.ExecutionContext.initializeCatalogs(ExecutionContext.java:636)
 at 
org.apache.flink.table.client.gateway.local.ExecutionContext.initializeTableEnvironment(ExecutionContext.java:523)
 at 
org.apache.flink.table.client.gateway.local.ExecutionContext.<init>(ExecutionContext.java:183)
 at 
org.apache.flink.table.client.gateway.local.ExecutionContext.<init>(ExecutionContext.java:136)
 at 
org.apache.flink.table.client.gateway.local.ExecutionContext$Builder.build(ExecutionContext.java:859)
 ... 3 more
  
{code}





[GitHub] [flink] flinkbot edited a comment on pull request #13744: [FLINK-19766][table-runtime] Introduce File streaming compaction operators

2020-10-28 Thread GitBox


flinkbot edited a comment on pull request #13744:
URL: https://github.com/apache/flink/pull/13744#issuecomment-714369975


   
   ## CI report:
   
   * 164c8b5b7750ed567813d70f160ef9f064d0a3b0 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8458)
 
   * 322bc2c3c11a0f5735db4a475864f894f38866da Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8466)
 
   * a01ecf0bade9f8e4a56052fb5b5d25c5034fe511 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8486)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13697: [FLINK-19357][FLINK-19357][fs-connector] Introduce createBucketWriter to BucketsBuilder & Introduce FileLifeCycleListener to Buckets

2020-10-28 Thread GitBox


flinkbot edited a comment on pull request #13697:
URL: https://github.com/apache/flink/pull/13697#issuecomment-712672697


   
   ## CI report:
   
   * b45885955bfdee4d4288f9273171e1d52379773f Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7914)
 
   * bd25177243a782fad2e7f9d9ff508e6ba3758303 UNKNOWN
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #13789: [FLINK-19727][table-runtime] Implement ParallelismProvider for sink i…

2020-10-28 Thread GitBox


flinkbot edited a comment on pull request #13789:
URL: https://github.com/apache/flink/pull/13789#issuecomment-716299701


   
   ## CI report:
   
   * 6329930d7acc5d2d4d84e777561e79bd25ac9367 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8265)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8487)
 
   * 934e4f0acf9044de5c6cb73ae25d094396bfc146 UNKNOWN
   
   







[jira] [Created] (FLINK-19852) Managed memory released check can block IterativeTask

2020-10-28 Thread shaomeng.wang (Jira)
shaomeng.wang created FLINK-19852:
-

 Summary: Managed memory released check can block IterativeTask
 Key: FLINK-19852
 URL: https://issues.apache.org/jira/browse/FLINK-19852
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Task
Affects Versions: 1.11.2, 1.11.1, 1.10.2, 1.11.0
Reporter: shaomeng.wang
 Attachments: image-2020-10-28-17-48-28-395.png, 
image-2020-10-28-17-48-48-583.png

UnsafeMemoryBudget#reserveMemory, called when a TempBarrier is created, has to 
wait for GC of all previously allocated/released managed memory at every iteration.

 

stack:

!image-2020-10-28-17-48-48-583.png!

new TempBarrier in BatchTask

!image-2020-10-28-17-48-28-395.png!

 

This makes each iteration much slower than before.
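As a rough illustration of the failure mode (a hand-rolled sketch, not the actual UnsafeMemoryBudget implementation): a reservation scheme that polls and nudges the GC pays a sleep-based latency penalty on every reserve call that races with not-yet-collected memory.

```java
/**
 * Simplified sketch of a GC-waiting reservation scheme (illustrative only).
 * When the bookkeeping says memory is still in flight, the reserving thread
 * triggers a GC and sleeps -- the kind of wait that stalls every iteration
 * that creates a TempBarrier.
 */
public class GcWaitReservationSketch {

    private long available;

    GcWaitReservationSketch(long totalBytes) {
        this.available = totalBytes;
    }

    synchronized void release(long size) {
        available += size;
        notifyAll();
    }

    /** Polls (and nudges GC) until the requested memory is free. */
    synchronized void reserve(long size, int maxAttempts) {
        int attempts = 0;
        while (available < size) {
            if (++attempts > maxAttempts) {
                throw new IllegalStateException("memory not released in time");
            }
            System.gc(); // hope cleaners return memory ...
            try {
                wait(10); // ... and pay a latency penalty while waiting
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IllegalStateException("interrupted while reserving");
            }
        }
        available -= size;
    }
}
```

Each reserve call that hits the wait path costs at least one GC hint plus a 10 ms sleep, which compounds quickly when a barrier is recreated per iteration.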





[GitHub] [flink] becketqin commented on pull request #13574: [FLINK-18323] Add a Kafka source implementation based on FLIP-27.

2020-10-28 Thread GitBox


becketqin commented on pull request #13574:
URL: https://github.com/apache/flink/pull/13574#issuecomment-717837286


   @StephanEwen I am trying to address the blocking close() method issue. 
However, it is a little more complicated than I thought. I want to align on 
some design principles with you and see what the solution might be. 
   
   From what I understand, the current design principle for the 
`SplitEnumerator` is that all exception handling is synchronous, i.e. the 
implementation should throw an exception when a method is invoked. If no exception 
is thrown, the method invocation is considered successful.
   
   In this case, if we want to allow asynchronous handling and exception 
propagation in the `SplitEnumerator`, such as an asynchronous close(), we will 
need a `failJob()` method in the `SplitEnumeratorContext`, so users can call 
`failJob()` asynchronously instead of throwing exceptions when closing in a 
non-blocking manner. The only caveat of this solution is that users can then 
either throw an exception or call `failJob()` when the methods are invoked, and 
people may wonder what the difference is.
   
   To avoid the above caveat, I was thinking that we can add the async close to 
the `SourceCoordinator` so we don't have to add `failJob()` method to the 
`SplitEnumerator`. However, in practice, sometimes the previous instance of 
`SplitEnumerator` must be successfully closed before the next instance can be 
created. Otherwise there might be conflicts. Therefore, naively having 
non-blocking closing in the `SourceCoordinator` won't work in all cases.
   
   Given all the above thoughts, I am falling back to the solution of adding a 
`failJob()` method to the `SplitEnumeratorContext`, so the `SplitEnumerator` 
implementations can decide for themselves what to do in each method. Any 
exception thrown from a method invocation will simply result in job failure.
   
   Do you have any suggestion?
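The failJob() option described above can be sketched as follows. All names here are hypothetical stand-ins, not Flink's actual API: a failJob() on the context gives an enumerator an asynchronous failure path, so a non-blocking close() can report errors without throwing from the method call itself.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicReference;

public class FailJobSketch {

    /** Hypothetical stand-in for SplitEnumeratorContext. */
    interface EnumeratorContext {
        void failJob(Throwable cause); // asynchronous failure path
    }

    /** Hypothetical stand-in for a SplitEnumerator with an async close. */
    static class Enumerator {
        private final EnumeratorContext context;

        Enumerator(EnumeratorContext context) {
            this.context = context;
        }

        /** Non-blocking close: errors surface via failJob, not via a throw. */
        CompletableFuture<Void> closeAsync() {
            return CompletableFuture
                    .runAsync(() -> {
                        // Release resources here; simulate a failure:
                        throw new IllegalStateException("close failed");
                    })
                    .whenComplete((ignored, error) -> {
                        if (error != null) {
                            context.failJob(error);
                        }
                    });
        }
    }

    public static void main(String[] args) {
        AtomicReference<Throwable> reported = new AtomicReference<>();
        Enumerator enumerator = new Enumerator(reported::set);

        // Wait for the async close; swallow its exceptional completion.
        enumerator.closeAsync().exceptionally(e -> null).join();

        System.out.println(reported.get() != null); // true: failure reached failJob
    }
}
```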







[jira] [Closed] (FLINK-19851) flink sql client connector type jdbc exception

2020-10-28 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-19851.
---
Resolution: Not A Problem

> flink sql client connector type jdbc exception
> --
>
> Key: FLINK-19851
> URL: https://issues.apache.org/jira/browse/FLINK-19851
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.11.1
> Environment: hardware: Mac Pro
> software: MacOS
>  
>Reporter: hulingchan
>Priority: Major
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> When I want to experience the SQL client using jdbc as the source, there is a 
> problem.
> *run command*:
> ./sql-client.sh embedded -e ../conf/sql-client-demo.yaml
> *sql-client-demo.yaml content*:
> {code:java}
> tables:
>   - name: mysql_test
> type: source-table
> connector:
> type: jdbc
> property-version: 1
> url: 
> jdbc:mysql://127.0.0.1:3306/test?useUnicode=true=utf8=true=Asia/Shanghai
> table: book_info
> driver: com.mysql.jdbc.Driver
> username: lloo
> password: dsfsdf
> {code}
> *log below*:
> {code:java}
> No default environment specified.
>  Searching for 'flink-1.11.1/conf/sql-client-defaults.yaml'...found.
>  Reading default environment from: 
> file:flink-1.11.1/conf/sql-client-defaults.yaml
>  Reading session environment from: 
> file:flink-1.11.1/bin/../conf/sql-client-demo.yaml
> Exception in thread "main" org.apache.flink.table.client.SqlClientException: 
> Unexpected exception. This is a bug. Please consider filing an issue.
>  at org.apache.flink.table.client.SqlClient.main(SqlClient.java:213)
>  Caused by: org.apache.flink.table.client.gateway.SqlExecutionException: 
> Could not create execution context.
>  at 
> org.apache.flink.table.client.gateway.local.ExecutionContext$Builder.build(ExecutionContext.java:870)
>  at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.openSession(LocalExecutor.java:227)
>  at org.apache.flink.table.client.SqlClient.start(SqlClient.java:108)
>  at org.apache.flink.table.client.SqlClient.main(SqlClient.java:201)
>  Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException: Could 
> not find a suitable table factory for 
> 'org.apache.flink.table.factories.TableSourceFactory' in
>  the classpath.
> Reason: Required context properties mismatch.
> The following properties are requested:
>  connector.driver=com.mysql.jdbc.Driver
>  connector.password=123456
>  connector.property-version=1
>  connector.table=durotar_wx_user_info
>  connector.type=jdbc
>  
> connector.url=jdbc:mysql://qa.vm.com:3306/zh_portal?useUnicode=true=utf8=true=Asia/Shanghai
>  connector.username=root
> The following factories have been considered:
>  org.apache.flink.table.sources.CsvBatchTableSourceFactory
>  org.apache.flink.table.sources.CsvAppendTableSourceFactory
>  org.apache.flink.table.filesystem.FileSystemTableFactory
>  at 
> org.apache.flink.table.factories.TableFactoryService.filterByContext(TableFactoryService.java:322)
>  at 
> org.apache.flink.table.factories.TableFactoryService.filter(TableFactoryService.java:190)
>  at 
> org.apache.flink.table.factories.TableFactoryService.findSingleInternal(TableFactoryService.java:143)
>  at 
> org.apache.flink.table.factories.TableFactoryService.find(TableFactoryService.java:113)
>  at 
> org.apache.flink.table.client.gateway.local.ExecutionContext.createTableSource(ExecutionContext.java:384)
>  at 
> org.apache.flink.table.client.gateway.local.ExecutionContext.lambda$initializeCatalogs$7(ExecutionContext.java:638)
>  at java.util.LinkedHashMap.forEach(LinkedHashMap.java:684)
>  at 
> org.apache.flink.table.client.gateway.local.ExecutionContext.initializeCatalogs(ExecutionContext.java:636)
>  at 
> org.apache.flink.table.client.gateway.local.ExecutionContext.initializeTableEnvironment(ExecutionContext.java:523)
>  at 
> org.apache.flink.table.client.gateway.local.ExecutionContext.<init>(ExecutionContext.java:183)
>  at 
> org.apache.flink.table.client.gateway.local.ExecutionContext.<init>(ExecutionContext.java:136)
>  at 
> org.apache.flink.table.client.gateway.local.ExecutionContext$Builder.build(ExecutionContext.java:859)
>  ... 3 more
>   
> {code}





[jira] [Commented] (FLINK-19851) flink sql client connector type jdbc exception

2020-10-28 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17222081#comment-17222081
 ] 

Jark Wu commented on FLINK-19851:
-

Hi [~hulingchan], the exception indicates that no JDBC factory was found on 
the classpath. That means you may not have put the {{flink-connector-jdbc}} jar 
under the Flink cluster classpath. Besides, registering tables in the 
sql-client YAML is no longer suggested; it is recommended to use DDL to 
register tables instead. See more details in the documentation: 
https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/jdbc.html
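For reference, a DDL equivalent of the YAML table above might look like this (the column list is a placeholder, since the schema of book_info isn't shown; the WITH options follow the Flink 1.11 JDBC connector documentation linked above):

```sql
-- Column definitions are placeholders; adjust to the actual book_info schema.
CREATE TABLE mysql_test (
  id BIGINT,
  title STRING
) WITH (
  'connector' = 'jdbc',
  'url' = 'jdbc:mysql://127.0.0.1:3306/test',
  'table-name' = 'book_info',
  'driver' = 'com.mysql.jdbc.Driver',
  'username' = 'lloo',
  'password' = 'dsfsdf'
);
```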

> flink sql client connector type jdbc exception
> --
>
> Key: FLINK-19851
> URL: https://issues.apache.org/jira/browse/FLINK-19851
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.11.1
> Environment: hardware: Mac Pro
> software: MacOS
>  
>Reporter: hulingchan
>Priority: Major
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> When I want to experience the SQL client using jdbc as the source, there is a 
> problem.
> *run command*:
> ./sql-client.sh embedded -e ../conf/sql-client-demo.yaml
> *sql-client-demo.yaml content*:
> {code:java}
> tables:
>   - name: mysql_test
> type: source-table
> connector:
> type: jdbc
> property-version: 1
> url: 
> jdbc:mysql://127.0.0.1:3306/test?useUnicode=true=utf8=true=Asia/Shanghai
> table: book_info
> driver: com.mysql.jdbc.Driver
> username: lloo
> password: dsfsdf
> {code}
> *log below*:
> {code:java}
> No default environment specified.
>  Searching for 'flink-1.11.1/conf/sql-client-defaults.yaml'...found.
>  Reading default environment from: 
> file:flink-1.11.1/conf/sql-client-defaults.yaml
>  Reading session environment from: 
> file:flink-1.11.1/bin/../conf/sql-client-demo.yaml
> Exception in thread "main" org.apache.flink.table.client.SqlClientException: 
> Unexpected exception. This is a bug. Please consider filing an issue.
>  at org.apache.flink.table.client.SqlClient.main(SqlClient.java:213)
>  Caused by: org.apache.flink.table.client.gateway.SqlExecutionException: 
> Could not create execution context.
>  at 
> org.apache.flink.table.client.gateway.local.ExecutionContext$Builder.build(ExecutionContext.java:870)
>  at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.openSession(LocalExecutor.java:227)
>  at org.apache.flink.table.client.SqlClient.start(SqlClient.java:108)
>  at org.apache.flink.table.client.SqlClient.main(SqlClient.java:201)
>  Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException: Could 
> not find a suitable table factory for 
> 'org.apache.flink.table.factories.TableSourceFactory' in
>  the classpath.
> Reason: Required context properties mismatch.
> The following properties are requested:
>  connector.driver=com.mysql.jdbc.Driver
>  connector.password=123456
>  connector.property-version=1
>  connector.table=durotar_wx_user_info
>  connector.type=jdbc
>  
> connector.url=jdbc:mysql://qa.vm.com:3306/zh_portal?useUnicode=true=utf8=true=Asia/Shanghai
>  connector.username=root
> The following factories have been considered:
>  org.apache.flink.table.sources.CsvBatchTableSourceFactory
>  org.apache.flink.table.sources.CsvAppendTableSourceFactory
>  org.apache.flink.table.filesystem.FileSystemTableFactory
>  at 
> org.apache.flink.table.factories.TableFactoryService.filterByContext(TableFactoryService.java:322)
>  at 
> org.apache.flink.table.factories.TableFactoryService.filter(TableFactoryService.java:190)
>  at 
> org.apache.flink.table.factories.TableFactoryService.findSingleInternal(TableFactoryService.java:143)
>  at 
> org.apache.flink.table.factories.TableFactoryService.find(TableFactoryService.java:113)
>  at 
> org.apache.flink.table.client.gateway.local.ExecutionContext.createTableSource(ExecutionContext.java:384)
>  at 
> org.apache.flink.table.client.gateway.local.ExecutionContext.lambda$initializeCatalogs$7(ExecutionContext.java:638)
>  at java.util.LinkedHashMap.forEach(LinkedHashMap.java:684)
>  at 
> org.apache.flink.table.client.gateway.local.ExecutionContext.initializeCatalogs(ExecutionContext.java:636)
>  at 
> org.apache.flink.table.client.gateway.local.ExecutionContext.initializeTableEnvironment(ExecutionContext.java:523)
>  at 
> org.apache.flink.table.client.gateway.local.ExecutionContext.<init>(ExecutionContext.java:183)
>  at 
> org.apache.flink.table.client.gateway.local.ExecutionContext.<init>(ExecutionContext.java:136)
>  at 
> org.apache.flink.table.client.gateway.local.ExecutionContext$Builder.build(ExecutionContext.java:859)
>  ... 3 more
>   
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13620: [hotfix][doc] the first FSDataOutputStream should be FSDataInputStream in FileSystem#L178

2020-10-28 Thread GitBox


flinkbot edited a comment on pull request #13620:
URL: https://github.com/apache/flink/pull/13620#issuecomment-707904494


   
   ## CI report:
   
   * 16f95599c033e72ce486b06dd3d527abd656b1a9 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7546)
 
   * 4290129d21b0600e5d17b1c9db54c9416a861b2c Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8489)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] StephanEwen commented on a change in pull request #13784: [FLINK-19698][connectors/common] API improvements to the Sources.

2020-10-28 Thread GitBox


StephanEwen commented on a change in pull request #13784:
URL: https://github.com/apache/flink/pull/13784#discussion_r513286480



##
File path: flink-core/src/main/java/org/apache/flink/util/ExceptionUtils.java
##
@@ -565,6 +565,20 @@ else if (t instanceof Error) {
return Optional.empty();
}
 
+   /**
+* Find the root cause of the given throwable chain.
+*
+* @param throwable the throwable chain to check.
+* @return the root cause of the throwable chain.
+*/
+   public static Throwable findRootCause(Throwable throwable) {

Review comment:
   I think this needs an extra check against cyclic cause chains, which are 
possible and nasty.
   
   We previously used the Apache Commons Lang for that: 
https://github.com/apache/flink/blob/master/flink-runtime/src/test/java/org/apache/flink/runtime/blob/BlobCacheGetTest.java#L597
   
   I think at the very least, we need a set to check duplicates and stop 
traversing once we see an exception we saw before.
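   A cycle-safe traversal along those lines could look like the following minimal sketch. The class and method names are illustrative (the `findRootCause` name is taken from the diff above), and the identity set mirrors the duplicate check suggested in the comment:

   ```java
   import java.util.Collections;
   import java.util.IdentityHashMap;
   import java.util.Set;

   public final class RootCauseExample {

       /**
        * Walks the cause chain and returns the deepest cause, stopping as soon
        * as a throwable instance is seen a second time (i.e. the chain is cyclic).
        */
       static Throwable findRootCause(Throwable throwable) {
           // Identity semantics matter here: two distinct exceptions may be
           // equal(), but a cycle only exists if the *same* instance reappears.
           Set<Throwable> seen = Collections.newSetFromMap(new IdentityHashMap<>());
           Throwable current = throwable;
           while (current.getCause() != null && seen.add(current)) {
               current = current.getCause();
           }
           return current;
       }

       public static void main(String[] args) {
           Exception root = new IllegalStateException("root");
           Exception wrapper = new RuntimeException("wrapper", root);
           System.out.println(findRootCause(wrapper).getMessage()); // prints "root"

           // Build a cyclic chain a -> b -> a; traversal must still terminate.
           Exception a = new RuntimeException("a");
           Exception b = new RuntimeException("b", a);
           a.initCause(b);
           System.out.println(findRootCause(a) != null); // terminates, prints "true"
       }
   }
   ```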

##
File path: 
flink-core/src/main/java/org/apache/flink/api/common/state/CheckpointListener.java
##
@@ -0,0 +1,51 @@
+/*
+ Licensed to the Apache Software Foundation (ASF) under one

Review comment:
   Nit: The header here uses a different style than the rest of the code 
(other files have * at every line start).

##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/state/CheckpointListener.java
##
@@ -1,19 +1,19 @@
 /*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
+ Licensed to the Apache Software Foundation (ASF) under one

Review comment:
   Nit: The header here uses a different style than the rest of the code 
(other files have * at every line start).









[GitHub] [flink] flinkbot edited a comment on pull request #13789: [FLINK-19727][table-runtime] Implement ParallelismProvider for sink i…

2020-10-28 Thread GitBox


flinkbot edited a comment on pull request #13789:
URL: https://github.com/apache/flink/pull/13789#issuecomment-716299701


   
   ## CI report:
   
   * 6329930d7acc5d2d4d84e777561e79bd25ac9367 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8265)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8487)
 
   * 934e4f0acf9044de5c6cb73ae25d094396bfc146 UNKNOWN
   * fd0aa012ed1a15803fe2ed75180d60e6c9fe8d6d UNKNOWN
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #13784: [FLINK-19698][connectors/common] API improvements to the Sources.

2020-10-28 Thread GitBox


flinkbot edited a comment on pull request #13784:
URL: https://github.com/apache/flink/pull/13784#issuecomment-716153889


   
   ## CI report:
   
   * ca695f256a825fdebadf79f576b0875d4faea7cd Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8450)
 
   * f0dec4fa5d8e69761377a45df57106a4fbfe8152 UNKNOWN
   
   







[jira] [Created] (FLINK-19854) TableScanTest.testTemporalJoinOnUpsertSource fails

2020-10-28 Thread Robert Metzger (Jira)
Robert Metzger created FLINK-19854:
--

 Summary: TableScanTest.testTemporalJoinOnUpsertSource fails
 Key: FLINK-19854
 URL: https://issues.apache.org/jira/browse/FLINK-19854
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.12.0
Reporter: Robert Metzger
 Fix For: 1.12.0


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8482=logs=e25d5e7e-2a9c-5589-4940-0b638d75a414=a6e0f756-5bb9-5ea8-a468-5f60db442a29

org.junit.ComparisonFailure: planAfter expected:<...me, 
__TEMPORAL_JOIN_[]LEFT_KEY(currency), ...> but was:<...me, 
__TEMPORAL_JOIN_[CONDITION_PRIMARY_KEY(currency0), 
__TEMPORAL_JOIN_]LEFT_KEY(currency), ...>
at org.junit.Assert.assertEquals(Assert.java:115)
at 
org.apache.flink.table.planner.utils.DiffRepository.assertEquals(DiffRepository.java:436)
at 
org.apache.flink.table.planner.utils.TableTestUtilBase.assertEqualsOrExpand(TableTestBase.scala:478)
at 
org.apache.flink.table.planner.utils.TableTestUtilBase.doVerifyPlan(TableTestBase.scala:362)
at 
org.apache.flink.table.planner.utils.TableTestUtilBase.verifyPlan(TableTestBase.scala:275)
at 
org.apache.flink.table.planner.plan.stream.sql.TableScanTest.testTemporalJoinOnUpsertSource(TableScanTest.scala:515)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)






[jira] [Updated] (FLINK-19833) Rename Sink API Writer interface to SinkWriter

2020-10-28 Thread Kostas Kloudas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas updated FLINK-19833:
---
Fix Version/s: 1.12.0

> Rename Sink API Writer interface to SinkWriter
> --
>
> Key: FLINK-19833
> URL: https://issues.apache.org/jira/browse/FLINK-19833
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream
>Affects Versions: 1.12.0
>Reporter: Aljoscha Krettek
>Assignee: Guowei Ma
>Priority: Major
> Fix For: 1.12.0
>
>
> This makes it more consistent with {{SourceReader}}.





[jira] [Assigned] (FLINK-19833) Rename Sink API Writer interface to SinkWriter

2020-10-28 Thread Kostas Kloudas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas reassigned FLINK-19833:
--

Assignee: Guowei Ma  (was: Kostas Kloudas)

> Rename Sink API Writer interface to SinkWriter
> --
>
> Key: FLINK-19833
> URL: https://issues.apache.org/jira/browse/FLINK-19833
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream
>Reporter: Aljoscha Krettek
>Assignee: Guowei Ma
>Priority: Major
>
> This makes it more consistent with {{SourceReader}}.





[GitHub] [flink] flinkbot edited a comment on pull request #13819: [hotfix][docs]fix some mistakes in StreamingFileSink

2020-10-28 Thread GitBox


flinkbot edited a comment on pull request #13819:
URL: https://github.com/apache/flink/pull/13819#issuecomment-717834371


   
   ## CI report:
   
   * e4cf3eb0b429e4ffe12a5053e0d8110c2eb50693 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8493)
 
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #13817: [HotFix][docs] Fix broken link of docker.md

2020-10-28 Thread GitBox


flinkbot edited a comment on pull request #13817:
URL: https://github.com/apache/flink/pull/13817#issuecomment-717771629


   
   ## CI report:
   
   * f7fcad2291f072819e6576b291327abd16947445 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8481)
 
   
   







[jira] [Updated] (FLINK-19833) Rename Sink API Writer interface to SinkWriter

2020-10-28 Thread Kostas Kloudas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas updated FLINK-19833:
---
Affects Version/s: 1.12.0

> Rename Sink API Writer interface to SinkWriter
> --
>
> Key: FLINK-19833
> URL: https://issues.apache.org/jira/browse/FLINK-19833
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream
>Affects Versions: 1.12.0
>Reporter: Aljoscha Krettek
>Assignee: Guowei Ma
>Priority: Major
>
> This makes it more consistent with {{SourceReader}}.





[jira] [Assigned] (FLINK-19833) Rename Sink API Writer interface to SinkWriter

2020-10-28 Thread Kostas Kloudas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas reassigned FLINK-19833:
--

Assignee: Kostas Kloudas

> Rename Sink API Writer interface to SinkWriter
> --
>
> Key: FLINK-19833
> URL: https://issues.apache.org/jira/browse/FLINK-19833
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream
>Reporter: Aljoscha Krettek
>Assignee: Kostas Kloudas
>Priority: Major
>
> This makes it more consistent with {{SourceReader}}.





[GitHub] [flink] StephanEwen commented on pull request #13595: [FLINK-19582][network] Introduce sort-merge based blocking shuffle to Flink

2020-10-28 Thread GitBox


StephanEwen commented on pull request #13595:
URL: https://github.com/apache/flink/pull/13595#issuecomment-717865867


   @wsry Your suggestion sounds good to me!







[jira] [Created] (FLINK-19856) Add EndOfChannelRecovery rescaling epoch

2020-10-28 Thread Roman Khachatryan (Jira)
Roman Khachatryan created FLINK-19856:
-

 Summary: Add EndOfChannelRecovery rescaling epoch
 Key: FLINK-19856
 URL: https://issues.apache.org/jira/browse/FLINK-19856
 Project: Flink
  Issue Type: New Feature
  Components: Runtime / Network
Affects Versions: 1.12.0
Reporter: Roman Khachatryan
Assignee: Roman Khachatryan
 Fix For: 1.12.0


This event would allow tearing down the "virtual channels" used to read channel
state on recovery with unaligned checkpoints and rescaling.





[GitHub] [flink] pnowojski opened a new pull request #13820: [FLINK-19671][codestyle] Revert .editorconfig change violating our coding style

2020-10-28 Thread GitBox


pnowojski opened a new pull request #13820:
URL: https://github.com/apache/flink/pull/13820


   
https://flink.apache.org/contributing/code-style-and-quality-formatting.html#breaking-the-lines-of-too-long-statements
   
   > Rules about breaking the long lines:
   >
   > Break the argument list or chain of calls if the line exceeds limit or 
earlier if you believe that the breaking would improve the code readability
   > If you break the line then each argument/call should have a separate line, 
including the first one
   > Each new line should have one extra indentation (or two for a function 
declaration) relative to the line of the parent function name or the called 
entity
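   As a sketch of the quoted rules (class and method names here are hypothetical, not from the Flink codebase), a declaration and a call chain broken per the style guide would look like:

   ```java
   public class LineBreakingExample {

       // Each parameter on its own line, including the first one, with extra
       // indentation relative to the line carrying the function name.
       static String buildConnectionDescription(
               String hostName,
               int port,
               String databaseName,
               String userName) {
           // A broken chain of calls: each call on its own line, one extra indent.
           return new StringBuilder()
               .append(hostName)
               .append(':')
               .append(port)
               .append('/')
               .append(databaseName)
               .append("?user=")
               .append(userName)
               .toString();
       }

       public static void main(String[] args) {
           System.out.println(
               buildConnectionDescription("localhost", 3306, "zh_portal", "root"));
           // prints localhost:3306/zh_portal?user=root
       }
   }
   ```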
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / 
don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   







[jira] [Updated] (FLINK-19856) Add EndOfChannelRecovery rescaling epoch

2020-10-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-19856:
---
Labels: pull-request-available  (was: )

> Add EndOfChannelRecovery rescaling epoch
> 
>
> Key: FLINK-19856
> URL: https://issues.apache.org/jira/browse/FLINK-19856
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Network
>Affects Versions: 1.12.0
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> This event would allow tearing down the "virtual channels" used to read
> channel state on recovery with unaligned checkpoints and rescaling.





[GitHub] [flink] rkhachatryan opened a new pull request #13822: [FLINK-19856][network] Emit EndOfChannelRecoveryEvent

2020-10-28 Thread GitBox


rkhachatryan opened a new pull request #13822:
URL: https://github.com/apache/flink/pull/13822


   ## What is the purpose of the change
   
   Depends on #13821.
   
   The emitted event would allow to tear down "virtual channels" used to read 
channel state on recovery with unaligned checkpoints and rescaling.
   
   Upstream subpartitions are blocked upon sending this event and unblocked by 
the downstream when all channels receive it.
   
   ## Verifying this change
- Added `ChannelPersistenceITCase.testUpstreamBlocksAfterRecoveringState` 
to test emission and blocking on the upstream side
- Added `CheckpointedInputGateTest` to test alignment and unblocking on the 
downstream side
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: yes
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? no
   







[GitHub] [flink] danny0405 commented on a change in pull request #13763: [FLINK-19779][avro] Remove the record_ field name prefix for Confluen…

2020-10-28 Thread GitBox


danny0405 commented on a change in pull request #13763:
URL: https://github.com/apache/flink/pull/13763#discussion_r513370899



##
File path: 
flink-formats/flink-avro/src/main/java/org/apache/flink/formats/avro/typeutils/AvroSchemaConverter.java
##
@@ -297,17 +299,22 @@ private static DataType convertToDataType(Schema schema) {
 * @return Avro's {@link Schema} matching this logical type.
 */
public static Schema convertToSchema(LogicalType logicalType) {
-   return convertToSchema(logicalType, "record");
+   return convertToSchema(logicalType, "record", true);
}
 
/**
 * Converts Flink SQL {@link LogicalType} (can be nested) into an Avro 
schema.
 *
 * @param logicalType logical type
 * @param rowName the record name
+* @param top whether it is parsing the root record,
+*if it is, the logical type nullability would be 
ignored
 * @return Avro's {@link Schema} matching this logical type.
 */
-   public static Schema convertToSchema(LogicalType logicalType, String 
rowName) {
+   public static Schema convertToSchema(
+   LogicalType logicalType,
+   String rowName,
+   boolean top) {

Review comment:
   I had an offline discussion with @wuchong and @dawidwys, and after some 
research we found that a non-nullable row type is more reasonable.
   
   But because the change is huge (much of the code that converts a type info to a data 
type previously assumed nullability), @wuchong and I decided to change the method 
signature to `convertToSchema(RowType schema)` and add a note to the method 
doc that the passed-in `schema` must be the top-level record type.









[GitHub] [flink] vthinkxie commented on a change in pull request #13458: FLINK-18851 [runtime] Add checkpoint type to checkpoint history entries in Web UI

2020-10-28 Thread GitBox


vthinkxie commented on a change in pull request #13458:
URL: https://github.com/apache/flink/pull/13458#discussion_r513374506



##
File path: 
flink-runtime-web/web-dashboard/src/app/pages/job/checkpoints/detail/job-checkpoints-detail.component.ts
##
@@ -55,21 +59,38 @@ export class JobCheckpointsDetailComponent implements 
OnInit {
   }
 
   refresh() {
-this.isLoading = true;
-if (this.jobDetail && this.jobDetail.jid) {
-  this.jobService.loadCheckpointDetails(this.jobDetail.jid, 
this.checkPoint.id).subscribe(
-data => {
-  this.checkPointDetail = data;
-  this.isLoading = false;
-  this.cdr.markForCheck();
-},
-() => {
-  this.isLoading = false;
-  this.cdr.markForCheck();
-}
-  );
+  this.isLoading = true;
+  if (this.jobDetail && this.jobDetail.jid) {
+forkJoin([
+  this.jobService.loadCheckpointConfig(this.jobDetail.jid),
+  this.jobService.loadCheckpointDetails(this.jobDetail.jid, 
this.checkPoint.id)
+]).subscribe(
+  ([config, detail]) => {
+this.checkPointConfig = config;
+this.checkPointDetail = detail;
+if (this.checkPointDetail.checkpoint_type === 'CHECKPOINT') {
+  if (this.checkPointConfig.unaligned_checkpoints) {
+this.checkPointType = 'unaligned checkpoint';
+  } else {
+this.checkPointType = 'aligned checkpoint';
+  }
+} else if (this.checkPointDetail.checkpoint_type === 
'SYNC_SAVEPOINT') {
+  this.checkPointType = 'savepoint on cancel';
+} else if (this.checkPointDetail.checkpoint_type === 'SAVEPOINT') {
+  this.checkPointType = 'savepoint';
+} else {
+  this.checkPointType = '-';
+}
+this.isLoading = false;
+this.cdr.markForCheck();
+  },
+  () => {
+this.isLoading = false;
+this.cdr.markForCheck();
+  }
+);
+  }
 }

Review comment:
   There are two extra spaces in each line, the others look good to me









[GitHub] [flink-statefun] tzulitai commented on a change in pull request #168: [FLINK-19692] Write all assigned key groups into the keyed raw stream with a Header.

2020-10-28 Thread GitBox


tzulitai commented on a change in pull request #168:
URL: https://github.com/apache/flink-statefun/pull/168#discussion_r513379873



##
File path: 
statefun-flink/statefun-flink-core/src/main/java/org/apache/flink/statefun/flink/core/logger/UnboundedFeedbackLogger.java
##
@@ -99,11 +98,19 @@ private void flushToKeyedStateOutputStream() throws 
IOException {
 checkState(keyedStateOutputStream != null, "Trying to flush envelopes not 
in a logging state");
 
 final DataOutputView target = new 
DataOutputViewStreamWrapper(keyedStateOutputStream);
-for (Entry> entry : keyGroupStreams.entrySet()) 
{
-  checkpointedStreamOperations.startNewKeyGroup(keyedStateOutputStream, 
entry.getKey());
+final Iterable assignedKeyGroupIds =
+checkpointedStreamOperations.keyGroupList(keyedStateOutputStream);
+// the underlying checkpointed raw stream, requires that all key groups 
assigned
+// to this operator must be written to the underlying stream.

Review comment:
   I don’t have a strong opinion on whether or not the empty key groups 
should stay there in the long term, so fine by me to keep this as is without 
the TODO comment to revisit  
   









[GitHub] [flink] flinkbot edited a comment on pull request #13744: [FLINK-19766][table-runtime] Introduce File streaming compaction operators

2020-10-28 Thread GitBox


flinkbot edited a comment on pull request #13744:
URL: https://github.com/apache/flink/pull/13744#issuecomment-714369975


   
   ## CI report:
   
   * 322bc2c3c11a0f5735db4a475864f894f38866da Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8466)
 
   * a01ecf0bade9f8e4a56052fb5b5d25c5034fe511 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8486)
 
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #13789: [FLINK-19727][table-runtime] Implement ParallelismProvider for sink i…

2020-10-28 Thread GitBox


flinkbot edited a comment on pull request #13789:
URL: https://github.com/apache/flink/pull/13789#issuecomment-716299701


   
   ## CI report:
   
   * 934e4f0acf9044de5c6cb73ae25d094396bfc146 UNKNOWN
   * fd0aa012ed1a15803fe2ed75180d60e6c9fe8d6d Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8491)
 
   
   







[jira] [Updated] (FLINK-19836) Use provided Serializer for Sink Committables between operators

2020-10-28 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-19836:
-
Summary: Use provided Serializer for Sink Committables between operators  
(was: Serialize the committable by the user provided serializer during network 
shuffle)

> Use provided Serializer for Sink Committables between operators
> ---
>
> Key: FLINK-19836
> URL: https://issues.apache.org/jira/browse/FLINK-19836
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream
>Reporter: Guowei Ma
>Priority: Major
>  Labels: pull-request-available
>






[GitHub] [flink] flinkbot commented on pull request #13824: [FLINK-19736] Add the SinkTransformation

2020-10-28 Thread GitBox


flinkbot commented on pull request #13824:
URL: https://github.com/apache/flink/pull/13824#issuecomment-717883808


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit a10e3dcdaf7ff215ec455ce60ca7594651ad8d04 (Wed Oct 28 
11:54:03 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-19736).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Commented] (FLINK-19854) TableScanTest.testTemporalJoinOnUpsertSource fails

2020-10-28 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17222115#comment-17222115
 ] 

Robert Metzger commented on FLINK-19854:


Pull requests are not rebased to the latest master when they are built.

> TableScanTest.testTemporalJoinOnUpsertSource fails
> --
>
> Key: FLINK-19854
> URL: https://issues.apache.org/jira/browse/FLINK-19854
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Assignee: Leonard Xu
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8482=logs=e25d5e7e-2a9c-5589-4940-0b638d75a414=a6e0f756-5bb9-5ea8-a468-5f60db442a29
> org.junit.ComparisonFailure: planAfter expected:<...me, 
> __TEMPORAL_JOIN_[]LEFT_KEY(currency), ...> but was:<...me, 
> __TEMPORAL_JOIN_[CONDITION_PRIMARY_KEY(currency0), 
> __TEMPORAL_JOIN_]LEFT_KEY(currency), ...>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at 
> org.apache.flink.table.planner.utils.DiffRepository.assertEquals(DiffRepository.java:436)
>   at 
> org.apache.flink.table.planner.utils.TableTestUtilBase.assertEqualsOrExpand(TableTestBase.scala:478)
>   at 
> org.apache.flink.table.planner.utils.TableTestUtilBase.doVerifyPlan(TableTestBase.scala:362)
>   at 
> org.apache.flink.table.planner.utils.TableTestUtilBase.verifyPlan(TableTestBase.scala:275)
>   at 
> org.apache.flink.table.planner.plan.stream.sql.TableScanTest.testTemporalJoinOnUpsertSource(TableScanTest.scala:515)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)





[jira] [Commented] (FLINK-19854) TableScanTest.testTemporalJoinOnUpsertSource fails

2020-10-28 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17222116#comment-17222116
 ] 

Robert Metzger commented on FLINK-19854:


Thanks a lot for fixing this so quickly!

> TableScanTest.testTemporalJoinOnUpsertSource fails
> --
>
> Key: FLINK-19854
> URL: https://issues.apache.org/jira/browse/FLINK-19854
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Assignee: Leonard Xu
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8482=logs=e25d5e7e-2a9c-5589-4940-0b638d75a414=a6e0f756-5bb9-5ea8-a468-5f60db442a29
> org.junit.ComparisonFailure: planAfter expected:<...me, 
> __TEMPORAL_JOIN_[]LEFT_KEY(currency), ...> but was:<...me, 
> __TEMPORAL_JOIN_[CONDITION_PRIMARY_KEY(currency0), 
> __TEMPORAL_JOIN_]LEFT_KEY(currency), ...>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at 
> org.apache.flink.table.planner.utils.DiffRepository.assertEquals(DiffRepository.java:436)
>   at 
> org.apache.flink.table.planner.utils.TableTestUtilBase.assertEqualsOrExpand(TableTestBase.scala:478)
>   at 
> org.apache.flink.table.planner.utils.TableTestUtilBase.doVerifyPlan(TableTestBase.scala:362)
>   at 
> org.apache.flink.table.planner.utils.TableTestUtilBase.verifyPlan(TableTestBase.scala:275)
>   at 
> org.apache.flink.table.planner.plan.stream.sql.TableScanTest.testTemporalJoinOnUpsertSource(TableScanTest.scala:515)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18640) Fix PostgresDialect doesn't quote the identifiers

2020-10-28 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-18640:

Summary: Fix PostgresDialect doesn't quote the identifiers  (was: Flink 
jdbc not support postgresql with schema)

> Fix PostgresDialect doesn't quote the identifiers
> -
>
> Key: FLINK-18640
> URL: https://issues.apache.org/jira/browse/FLINK-18640
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.9.1, 1.10.1
>Reporter: 毛宗良
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0, 1.10.3, 1.11.3
>
>
> Flink JDBC throws exceptions when reading a PostgreSQL table with a 
> schema-qualified name, like "ods.t_test". By debugging the source code, I 
> found a bug in how the table name is handled.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #13825: [FLINK-19836] Add SimpleVersionedSerializerTypeSerializerProxy

2020-10-28 Thread GitBox


flinkbot commented on pull request #13825:
URL: https://github.com/apache/flink/pull/13825#issuecomment-717888135


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 267ea2ff011c4dc74f3209d574d66023835f501e (Wed Oct 28 
12:03:21 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-19829) should enclose column name in double quotes for PostgreSQL

2020-10-28 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-19829.
---
Resolution: Duplicate

> should enclose column name in double quotes for PostgreSQL
> --
>
> Key: FLINK-19829
> URL: https://issues.apache.org/jira/browse/FLINK-19829
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Reporter: zl
>Priority: Major
>
> When I run the following SQL in Flink:
> {code:sql}
> create table pg_sink ( 
> Name VARCHAR, 
> address VARCHAR, 
> work VARCHAR) 
> with (   
> 'connector' = 'jdbc',   
> 'url' = 'jdbc:postgresql://***:***/***',    
> 'table-name' = 'pg_sink', 
> ...
> ) 
> create table kafka_source(    
> Name VARCHAR,    
> address VARCHAR,    
> work VARCHAR
> ) with (    
> 'connector.type' = 'kafka',    
> 'format.type' = 'json',
> ...
> )
> insert into pg_sink select * from kafka_source{code}
> the following exception happens:
> {code:java}
> Caused by: org.postgresql.util.PSQLException: ERROR: column "Name" of 
> relation "pg_sink" does not exist
> ...{code}
> We can solve this problem by removing the method *_quoteIdentifier_* in 
> *_PostgresDialect.java_*; then the method *_quoteIdentifier_* in 
> _*JdbcDialect.java*_ will be used to enclose column names in double quotes 
> for PostgreSQL. 
> Could you assign this issue to me?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] aljoscha commented on pull request #13808: [FLINK-19834] Make the TestSink reusable in all the sink related tests.

2020-10-28 Thread GitBox


aljoscha commented on pull request #13808:
URL: https://github.com/apache/flink/pull/13808#issuecomment-717817391


   I merged this, but I renamed all the builder methods to `setFoo()` instead 
of `addFoo()` because they are setting a thing and not adding to a list of 
things.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-19834) Make the TestSink reusable in all the sink related tests.

2020-10-28 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-19834.

Resolution: Implemented

master: 0a983188cdcde0c8a5f024e59a3fb5ddc65f16b8

> Make the TestSink reusable in all the sink related tests.
> -
>
> Key: FLINK-19834
> URL: https://issues.apache.org/jira/browse/FLINK-19834
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream, Tests
>Affects Versions: 1.12.0
>Reporter: Guowei Ma
>Assignee: Guowei Ma
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] aljoscha closed pull request #13808: [FLINK-19834] Make the TestSink reusable in all the sink related tests.

2020-10-28 Thread GitBox


aljoscha closed pull request #13808:
URL: https://github.com/apache/flink/pull/13808


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zentol commented on pull request #13796: [FLINK-19810][CI] Automatically run a basic NOTICE file check on CI

2020-10-28 Thread GitBox


zentol commented on pull request #13796:
URL: https://github.com/apache/flink/pull/13796#issuecomment-717821843


   I'm still not a fan of automating the verification; people _will_ see the 
tool as the source of truth (which it isn't, as you pointed out), and any 
attempt at automating the generation can now be shut down with "well we have a 
tool to check the correctness, so let's not spend time on it".
   
   That said, it is obvious that it does find issues; particularly subtle ones 
that can _easily_ be missed during a manual review, `com` vs `org` is subtle AF 
and not really the focus of such a review (as it is more concerned with 
anything being listed at all and the version being correct).
   
   So I am, begrudgingly, fine with it.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] guoweiM commented on pull request #13808: [FLINK-19834] Make the TestSink reusable in all the sink related tests.

2020-10-28 Thread GitBox


guoweiM commented on pull request #13808:
URL: https://github.com/apache/flink/pull/13808#issuecomment-717821255


   > +1 for merging from me as soon as AZP given green, but I think @aljoscha 
's comments are not in the branch.
   
   
   
   > I merged this, but I renamed all the builder methods to `setFoo()` instead 
of `addFoo()` because they are setting a thing and not adding to a list of 
things.
   
   Sorry, I missed this. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13605: [FLINK-19599][table] Introduce Filesystem format factories to integrate new FileSource to table

2020-10-28 Thread GitBox


flinkbot edited a comment on pull request #13605:
URL: https://github.com/apache/flink/pull/13605#issuecomment-707521684


   
   ## CI report:
   
   * a9df8a1384eb6306656b4fd952edd4be5d7a857d UNKNOWN
   * fa0dc913f56aa2567c30073c044f688f9ed74fee Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8485)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13620: [hotfix][doc] the first FSDataOutputStream should be FSDataInputStream in FileSystem#L178

2020-10-28 Thread GitBox


flinkbot edited a comment on pull request #13620:
URL: https://github.com/apache/flink/pull/13620#issuecomment-707904494


   
   ## CI report:
   
   * 16f95599c033e72ce486b06dd3d527abd656b1a9 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7546)
 
   * 4290129d21b0600e5d17b1c9db54c9416a861b2c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13812: [FLINK-19674][docs-zh] Translate "Docker" of "Clusters & Depolyment" page into Chinese

2020-10-28 Thread GitBox


flinkbot edited a comment on pull request #13812:
URL: https://github.com/apache/flink/pull/13812#issuecomment-717687160


   
   ## CI report:
   
   * 74a934ea4dd99e6bc44ac84a9f314468faf8376b Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8470)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] liuyongvs closed pull request #13818: [FLINK-19848][docs] fix docs bug of build flink.

2020-10-28 Thread GitBox


liuyongvs closed pull request #13818:
URL: https://github.com/apache/flink/pull/13818


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #13818: [FLINK-19848][docs] fix docs bug of build flink.

2020-10-28 Thread GitBox


flinkbot commented on pull request #13818:
URL: https://github.com/apache/flink/pull/13818#issuecomment-717833523


   
   ## CI report:
   
   * 80c826228cfa96218b23bd087f4047673e91f734 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (FLINK-19837) Don't emit intermediate watermarks from watermark operators in BATCH execution mode

2020-10-28 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek reassigned FLINK-19837:


Assignee: Nicholas Jiang

> Don't emit intermediate watermarks from watermark operators in BATCH 
> execution mode
> ---
>
> Key: FLINK-19837
> URL: https://issues.apache.org/jira/browse/FLINK-19837
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream
>Reporter: Aljoscha Krettek
>Assignee: Nicholas Jiang
>Priority: Major
>
> Currently, both sources and watermark/timestamp operators can emit watermarks 
> that we don't really need. We only need a final watermark in BATCH execution 
> mode.
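
The quoted description can be sketched as follows — a minimal, hypothetical illustration (the class and method names are invented, not Flink's actual operator API) of dropping intermediate watermarks in BATCH execution:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: in BATCH execution, intermediate watermarks carry
// no information, so only the final Long.MAX_VALUE watermark is forwarded.
public class BatchWatermarkForwarder {
    private final boolean batchMode;
    private final List<Long> emitted = new ArrayList<>();

    public BatchWatermarkForwarder(boolean batchMode) {
        this.batchMode = batchMode;
    }

    public void onWatermark(long timestamp) {
        if (batchMode && timestamp != Long.MAX_VALUE) {
            return; // drop intermediate watermarks in batch mode
        }
        emitted.add(timestamp); // forward (here: record) the watermark
    }

    public List<Long> emittedWatermarks() {
        return emitted;
    }

    public static void main(String[] args) {
        BatchWatermarkForwarder f = new BatchWatermarkForwarder(true);
        f.onWatermark(100L);
        f.onWatermark(200L);
        f.onWatermark(Long.MAX_VALUE);
        // Only the final watermark survives in batch mode.
        System.out.println(f.emittedWatermarks()); // [9223372036854775807]
    }
}
```

In STREAMING mode (`batchMode = false`) the same forwarder would pass every watermark through unchanged.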



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19829) should enclose column name in double quotes for PostgreSQL

2020-10-28 Thread zl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17222101#comment-17222101
 ] 

zl commented on FLINK-19829:


Hi [~jark], can you take a look at this issue?

> should enclose column name in double quotes for PostgreSQL
> --
>
> Key: FLINK-19829
> URL: https://issues.apache.org/jira/browse/FLINK-19829
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Reporter: zl
>Priority: Major
>
> When I run the following SQL in Flink:
> {code:sql}
> create table pg_sink ( 
> Name VARCHAR, 
> address VARCHAR, 
> work VARCHAR) 
> with (   
> 'connector' = 'jdbc',   
> 'url' = 'jdbc:postgresql://***:***/***',    
> 'table-name' = 'pg_sink', 
> ...
> ) 
> create table kafka_source(    
> Name VARCHAR,    
> address VARCHAR,    
> work VARCHAR
> ) with (    
> 'connector.type' = 'kafka',    
> 'format.type' = 'json',
> ...
> )
> insert into pg_sink select * from kafka_source{code}
> the following exception happens:
> {code:java}
> Caused by: org.postgresql.util.PSQLException: ERROR: column "Name" of 
> relation "pg_sink" does not exist
> ...{code}
> We can solve this problem by removing the method *_quoteIdentifier_* in 
> *_PostgresDialect.java_*; then the method *_quoteIdentifier_* in 
> _*JdbcDialect.java*_ will be used to enclose column names in double quotes 
> for PostgreSQL. 
> Could you assign this issue to me?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #13820: [FLINK-19671][codestyle] Revert .editorconfig change violating our coding style

2020-10-28 Thread GitBox


flinkbot commented on pull request #13820:
URL: https://github.com/apache/flink/pull/13820#issuecomment-717870794


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit c0e721f67b0b4b21b5080900954df10a05890d87 (Wed Oct 28 
11:25:36 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] rkhachatryan opened a new pull request #13821: [FLINK-19855][network] Specify channel AND gate in resumeConsumption()

2020-10-28 Thread GitBox


rkhachatryan opened a new pull request #13821:
URL: https://github.com/apache/flink/pull/13821


   ## What is the purpose of the change
   
   Replace `channelIndex` with `ChannelInfo` in resumeConsumption methods of 
`InputGate` and `CheckpointableInput`.
   
   Given `channelIndex` only, `UnionInputGate` has to guess which wrapped 
`inputGate` this channel belongs to.
   For that, `UnionInputGate` expects `channelIndex` with an `inputGate` offset.
   However, some clients (e.g. `AlignedController`) pass channel index without 
offset.
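
   To make the mismatch concrete, here is a minimal sketch (hypothetical and heavily simplified, not the actual `UnionInputGate` code) of the offset arithmetic — a bare channel index is only unambiguous if the caller pre-adds the gate offset:

```java
// Hypothetical sketch: a union over two gates with 3 and 2 channels.
// A (gateIndex, channelIndex) pair is unambiguous; a bare index is not.
public class ChannelAddressing {
    static final int[] GATE_SIZES = {3, 2};

    // Offset-based scheme expected by the union: gate offset + channel index.
    static int toUnionIndex(int gateIndex, int channelIndex) {
        int offset = 0;
        for (int g = 0; g < gateIndex; g++) {
            offset += GATE_SIZES[g];
        }
        return offset + channelIndex;
    }

    public static void main(String[] args) {
        // Channel 1 of gate 1 must be addressed as union index 4 ...
        System.out.println(toUnionIndex(1, 1)); // 4
        // ... so a caller passing the raw per-gate index 1 would wrongly
        // resume channel 1 of gate 0 instead.
        System.out.println(toUnionIndex(0, 1)); // 1
    }
}
```

   Passing the full (gate, channel) information, as this PR does with `ChannelInfo`, removes the need for callers to agree on the offset convention.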
   
   ## Verifying this change
   
   TBD
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? no
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-19855) Incompatible semantics of channelIndex in UnionInputGate.resumeConsumption and its clients

2020-10-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-19855:
---
Labels: pull-request-available  (was: )

> Incompatible semantics of channelIndex in UnionInputGate.resumeConsumption 
> and its clients
> --
>
> Key: FLINK-19855
> URL: https://issues.apache.org/jira/browse/FLINK-19855
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.12.0, 1.11.2
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Given only channelIndex for resumeConsumption, UnionInputGate has to guess 
> which wrapped input gate this channel belongs to.
>  For that, UnionInputGate expects channelIndex with an inputGate offset.
> However, some clients (e.g. AlignedController) pass channel index without 
> offset.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] leonardBang commented on pull request #13823: [hot-fix][table-planner-blink] Fix TableScanTest which used temporal …

2020-10-28 Thread GitBox


leonardBang commented on pull request #13823:
URL: https://github.com/apache/flink/pull/13823#issuecomment-717878191


   cc @wuchong Could you have a quick look? Thanks 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong merged pull request #13823: [hot-fix][table-planner-blink] Fix TableScanTest which used temporal …

2020-10-28 Thread GitBox


wuchong merged pull request #13823:
URL: https://github.com/apache/flink/pull/13823


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-19836) Use provided Serializer for Sink Committables sent between operators

2020-10-28 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek updated FLINK-19836:
-
Summary: Use provided Serializer for Sink Committables sent between 
operators  (was: Use provided Serializer for Sink Committables between 
operators)

> Use provided Serializer for Sink Committables sent between operators
> 
>
> Key: FLINK-19836
> URL: https://issues.apache.org/jira/browse/FLINK-19836
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream
>Reporter: Guowei Ma
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19736) Implement the `SinkTransformation`

2020-10-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-19736:
---
Labels: pull-request-available  (was: )

> Implement the `SinkTransformation`
> --
>
> Key: FLINK-19736
> URL: https://issues.apache.org/jira/browse/FLINK-19736
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream
>Reporter: Guowei Ma
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-19836) Use provided Serializer for Sink Committables sent between operators

2020-10-28 Thread Aljoscha Krettek (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek reassigned FLINK-19836:


Assignee: Aljoscha Krettek

> Use provided Serializer for Sink Committables sent between operators
> 
>
> Key: FLINK-19836
> URL: https://issues.apache.org/jira/browse/FLINK-19836
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream
>Reporter: Guowei Ma
>Assignee: Aljoscha Krettek
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19829) should enclose column name in double quotes for PostgreSQL

2020-10-28 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17222117#comment-17222117
 ] 

Jark Wu commented on FLINK-19829:
-

Hi [~Leo Zhou], thanks for reporting this issue. This is a known issue.
Adding the quotes around identifiers back into PostgresDialect would introduce 
other problems (e.g. the schema part of a qualified name could no longer be 
distinguished). 
Let's continue the discussion in FLINK-18640. 
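
For illustration, a minimal sketch (a hypothetical helper, not Flink's actual `JdbcDialect`/`PostgresDialect` code) that quotes each dot-separated part of a possibly schema-qualified identifier, so the schema part stays distinguishable instead of the whole string being quoted as one identifier:

```java
// Hypothetical sketch: quote each dot-separated part of an identifier,
// so a schema-qualified name keeps its schema part distinguishable.
public class IdentifierQuoting {
    static String quoteIdentifier(String identifier) {
        StringBuilder sb = new StringBuilder();
        String[] parts = identifier.split("\\.");
        for (int i = 0; i < parts.length; i++) {
            if (i > 0) {
                sb.append('.');
            }
            // Escape embedded double quotes per the SQL standard.
            sb.append('"').append(parts[i].replace("\"", "\"\"")).append('"');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(quoteIdentifier("ods.t_test")); // "ods"."t_test"
        System.out.println(quoteIdentifier("Name"));       // "Name"
    }
}
```

Note that splitting on `.` is itself a simplification: a quoted identifier may legally contain a dot, which is part of why the proper fix needs discussion.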

> should enclose column name in double quotes for PostgreSQL
> --
>
> Key: FLINK-19829
> URL: https://issues.apache.org/jira/browse/FLINK-19829
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Reporter: zl
>Priority: Major
>
> When I run the following SQL in Flink:
> {code:sql}
> create table pg_sink ( 
> Name VARCHAR, 
> address VARCHAR, 
> work VARCHAR) 
> with (   
> 'connector' = 'jdbc',   
> 'url' = 'jdbc:postgresql://***:***/***',    
> 'table-name' = 'pg_sink', 
> ...
> ) 
> create table kafka_source(    
> Name VARCHAR,    
> address VARCHAR,    
> work VARCHAR
> ) with (    
> 'connector.type' = 'kafka',    
> 'format.type' = 'json',
> ...
> )
> insert into pg_sink select * from kafka_source{code}
> the following exception happens:
> {code:java}
> Caused by: org.postgresql.util.PSQLException: ERROR: column "Name" of 
> relation "pg_sink" does not exist
> ...{code}
> We can solve this problem by removing the method *_quoteIdentifier_* in 
> *_PostgresDialect.java_*; then the method *_quoteIdentifier_* in 
> _*JdbcDialect.java*_ will be used to enclose column names in double quotes 
> for PostgreSQL. 
> Could you assign this issue to me?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13817: [HotFix][docs] Fix broken link of docker.md

2020-10-28 Thread GitBox


flinkbot edited a comment on pull request #13817:
URL: https://github.com/apache/flink/pull/13817#issuecomment-717771629


   
   ## CI report:
   
   * f7fcad2291f072819e6576b291327abd16947445 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8481)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-13598) frocksdb doesn't have arm release

2020-10-28 Thread Yun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17222061#comment-17222061
 ] 

Yun Tang commented on FLINK-13598:
--

[~wangxiyuan] we're planning to upgrade the RocksDB version in 
https://issues.apache.org/jira/browse/FLINK-14482, which is based on 
RocksDB 6.11.6. Do you have any resources to build ARM releases of 
https://github.com/ververica/frocksdb ?

> frocksdb doesn't have arm release 
> --
>
> Key: FLINK-13598
> URL: https://issues.apache.org/jira/browse/FLINK-13598
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / State Backends
>Affects Versions: 1.9.0, 2.0.0
>Reporter: wangxiyuan
>Priority: Major
> Attachments: image-2020-08-20-09-22-24-021.png
>
>
> Flink now uses frocksdb, which forks from RocksDB, for the module 
> *flink-statebackend-rocksdb*. It doesn't provide an ARM release.
> RocksDB now supports ARM from 
> [v6.2.2|https://search.maven.org/artifact/org.rocksdb/rocksdbjni/6.2.2/jar].
> Can frocksdb release an ARM package as well?
> Or, AFAIK, since there were some bugs in RocksDB in the past, Flink 
> didn't use it directly. Have those bugs been fixed in RocksDB already? Can 
> Flink re-use RocksDB directly now?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13813: [FLINK-19491][avro] AvroSerializerSnapshot cannot handle large schema

2020-10-28 Thread GitBox


flinkbot edited a comment on pull request #13813:
URL: https://github.com/apache/flink/pull/13813#issuecomment-717706259


   
   ## CI report:
   
   * 61014eba30b67fc21a965f876d9484dd3873e14d Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8459)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] kl0u commented on a change in pull request #13819: [hotfix][docs]fix some mistakes in StreamingFileSink

2020-10-28 Thread GitBox


kl0u commented on a change in pull request #13819:
URL: https://github.com/apache/flink/pull/13819#discussion_r513331116



##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/sink/filesystem/StreamingFileSink.java
##
@@ -137,12 +137,12 @@ protected StreamingFileSink(
}
 
/**
-* Creates the builder for a {@link StreamingFileSink} with 
row-encoding format.
+* Creates the builder for a {@link StreamingFileSink} with 
bulk-factory format.

Review comment:
   "bulk-factory" -> "bulk-encoding"





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-19847) Can we create a fast support on the Nested table join?

2020-10-28 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17222084#comment-17222084
 ] 

Jark Wu commented on FLINK-19847:
-

Yes, it is recommended to specify multiple flat fields rather than an array type. 

> Can we create a fast support on the Nested table join?
> --
>
> Key: FLINK-19847
> URL: https://issues.apache.org/jira/browse/FLINK-19847
> Project: Flink
>  Issue Type: Wish
>  Components: Table SQL / Planner, Table SQL / Runtime
>Affects Versions: 1.11.1
>Reporter: xiaogang zhou
>Priority: Major
>
> In CommonLookupJoin, one TODO is 
> support nested lookup keys in the future,
> // currently we only support top-level lookup keys
>  
> Can we add fast support for the array join? Thanks!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] Sxnan commented on a change in pull request #13073: [FLINK-18820] SourceOperator should send MAX_WATERMARK to downstream operator when closed

2020-10-28 Thread GitBox


Sxnan commented on a change in pull request #13073:
URL: https://github.com/apache/flink/pull/13073#discussion_r513334483



##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/SourceOperator.java
##
@@ -163,6 +164,7 @@ public void sendSourceEventToCoordinator(SourceEvent event) 
{
@Override
public void close() throws Exception {
eventTimeLogic.stopPeriodicWatermarkEmits();
+   currentMainOutput.emitWatermark(Watermark.MAX_WATERMARK);

Review comment:
   Thanks for pointing out the problem and the detailed explanation. I 
have made the change at 406f376, please take a look. 
   I also think the `SourceOperatorStreamTask` should override 
`advanceToEndOfEventTime`. If @aljoscha can confirm that, I'd love to make that 
change in this PR as well.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




  1   2   3   4   5   6   >