[GitHub] [flink] flinkbot commented on issue #11118: [FLINK-16072][python] Optimize the performance of the write/read null mask

2020-02-17 Thread GitBox
flinkbot commented on issue #11118: [FLINK-16072][python] Optimize the 
performance of the write/read null mask
URL: https://github.com/apache/flink/pull/11118#issuecomment-587328532
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 3bd212965eac342eb67cb216d47128405462889f (Tue Feb 18 
07:57:27 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-16072).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-16072) Optimize the performance of the write/read null mask in RowCoder

2020-02-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-16072:
---
Labels: pull-request-available  (was: )

> Optimize the performance of the write/read null mask in RowCoder
> 
>
> Key: FLINK-16072
> URL: https://issues.apache.org/jira/browse/FLINK-16072
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Reporter: Huang Xingbo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Optimizing the write/read null mask in RowCoder will gain some performance 
> improvements in python udf.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] HuangXingBo opened a new pull request #11118: [FLINK-16072][python] Optimize the performance of the write/read null mask

2020-02-17 Thread GitBox
HuangXingBo opened a new pull request #11118: [FLINK-16072][python] Optimize 
the performance of the write/read null mask
URL: https://github.com/apache/flink/pull/11118
 
 
   ## What is the purpose of the change
   
   *This PR optimizes the performance of the write/read null mask.*
   
   
   ## Brief change log
   
 - *Optimized decode/encode/write_null_mask/read_mask in FlattenRowCoder*
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
 - *It is a performance improvement without extra functional tests; see the sketch below for the underlying idea.*
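
   For readers unfamiliar with the change: the null mask records which fields of a row are null. Below is a minimal Java sketch of the bit-packing idea (the actual change is in pyflink's `FlattenRowCoder`, in Python; all names here are illustrative, not the real implementation): one bit per field instead of one byte, so a 100-field row needs 13 mask bytes rather than 100.

```java
import java.io.ByteArrayOutputStream;

final class NullMaskSketch {

  // Writes ceil(fields.length / 8) mask bytes; bit 7 of the first byte
  // corresponds to fields[0], and a set bit means "this field is null".
  static void writeNullMask(Object[] fields, ByteArrayOutputStream out) {
    int buffer = 0;
    int bits = 0;
    for (Object field : fields) {
      buffer = (buffer << 1) | (field == null ? 1 : 0);
      if (++bits == 8) {
        out.write(buffer);
        buffer = 0;
        bits = 0;
      }
    }
    if (bits > 0) {
      out.write(buffer << (8 - bits)); // left-align the trailing partial byte
    }
  }

  // Reads the mask written above back into one boolean per field.
  static boolean[] readNullMask(byte[] in, int offset, int fieldCount) {
    boolean[] nulls = new boolean[fieldCount];
    for (int i = 0; i < fieldCount; i++) {
      nulls[i] = (in[offset + (i >> 3)] & (0x80 >>> (i & 7))) != 0;
    }
    return nulls;
  }
}
```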
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-16115) Aliyun oss filesystem could not work with plugin mechanism

2020-02-17 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038861#comment-17038861
 ] 

Yang Wang commented on FLINK-16115:
---

[~liyu] Thanks for your help.

I have attached a PR to fix this issue. [~AHeise] could you please take a look?

> Aliyun oss filesystem could not work with plugin mechanism
> --
>
> Key: FLINK-16115
> URL: https://issues.apache.org/jira/browse/FLINK-16115
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.10.0
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.1, 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> From release-1.9, Flink suggests users load all filesystems via the plugin 
> mechanism, including OSS. However, this does not work for the OSS filesystem. 
> The root cause is that it does not shade {{org.apache.flink.runtime.fs.hdfs}} 
> and {{org.apache.flink.runtime.util}}, so they will always be loaded by the 
> system classloader and throw the following exceptions.
>  
> {code:java}
> 2020-02-17 17:28:47,247 ERROR 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Could not 
> start cluster entrypoint StandaloneSessionClusterEntrypoint.
> org.apache.flink.runtime.entrypoint.ClusterEntrypointException: Failed to 
> initialize the cluster entrypoint StandaloneSessionClusterEntrypoint.
> at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:187)
> at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:518)
> at 
> org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint.main(StandaloneSessionClusterEntrypoint.java:64)
> Caused by: java.lang.NoSuchMethodError: 
> org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.<init>(Lorg/apache/flink/fs/shaded/hadoop3/org/apache/hadoop/fs/FileSystem;)V
> at 
> org.apache.flink.fs.osshadoop.OSSFileSystemFactory.create(OSSFileSystemFactory.java:85)
> at 
> org.apache.flink.core.fs.PluginFileSystemFactory.create(PluginFileSystemFactory.java:61)
> at 
> org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:441)
> at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:362)
> at org.apache.flink.core.fs.Path.getFileSystem(Path.java:298)
> at 
> org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:100)
> at 
> org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:89)
> at 
> org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:125)
> at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:305)
> at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:263)
> at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:207)
> at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$0(ClusterEntrypoint.java:169)
> at 
> org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
> at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:168)
> ... 2 more
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wangyang0918 commented on issue #11117: [FLINK-16115][filesystem] Make aliyun oss filesystem could work with plugin mechanism

2020-02-17 Thread GitBox
wangyang0918 commented on issue #11117: [FLINK-16115][filesystem] Make aliyun 
oss filesystem could work with plugin mechanism
URL: https://github.com/apache/flink/pull/11117#issuecomment-587326706
 
 
   @AHeise I think you are familiar with the plugin mechanism and shading. Could 
you please take a look?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink-statefun] igalshilman commented on a change in pull request #25: [FLINK-16106] Add PersistedList state primitive

2020-02-17 Thread GitBox
igalshilman commented on a change in pull request #25: [FLINK-16106] Add 
PersistedList state primitive
URL: https://github.com/apache/flink-statefun/pull/25#discussion_r380502824
 
 

 ##
 File path: 
statefun-sdk/src/main/java/org/apache/flink/statefun/sdk/state/PersistedAppendingBuffer.java
 ##
 @@ -0,0 +1,189 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.statefun.sdk.state;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Objects;
+import javax.annotation.Nullable;
+import org.apache.flink.statefun.sdk.StatefulFunction;
+import org.apache.flink.statefun.sdk.annotations.ForRuntime;
+import org.apache.flink.statefun.sdk.annotations.Persisted;
+
+/**
+ * A {@link PersistedAppendingBuffer} is an append-only buffer registered within {@link
+ * StatefulFunction}s and is persisted and maintained by the system for fault-tolerance. Persisted
+ * elements in the buffer may only be updated with bulk replacements.
+ *
+ * <p>Created persisted buffers must be registered by using the {@link Persisted} annotation.
+ * Please see the class-level Javadoc of {@link StatefulFunction} for an example on how to do that.
+ *
+ * @see StatefulFunction
+ * @param <E> type of the list elements.
+ */
+public final class PersistedAppendingBuffer<E> {
+  private final String name;
+  private final Class<E> elementType;
+  private AppendingBufferAccessor<E> accessor;
+
+  private PersistedAppendingBuffer(
+      String name, Class<E> elementType, AppendingBufferAccessor<E> accessor) {
+    this.name = Objects.requireNonNull(name);
+    this.elementType = Objects.requireNonNull(elementType);
+    this.accessor = Objects.requireNonNull(accessor);
+  }
+
+  /**
+   * Creates a {@link PersistedAppendingBuffer} instance that may be used to access persisted state
+   * managed by the system. Access to the persisted buffer is identified by an unique name and type
+   * of the elements. These may not change across multiple executions of the application.
+   *
+   * @param name the unique name of the persisted buffer state
+   * @param elementType the type of the elements of this {@code PersistedAppendingBuffer}.
+   * @param <E> the type of the elements.
+   * @return a {@code PersistedAppendingBuffer} instance.
+   */
+  public static <E> PersistedAppendingBuffer<E> of(String name, Class<E> elementType) {
+    return new PersistedAppendingBuffer<>(name, elementType, new NonFaultTolerantAccessor<>());
+  }
+
+  /**
+   * Returns the unique name of the persisted buffer.
+   *
+   * @return unique name of the persisted buffer.
+   */
+  public String name() {
+    return name;
+  }
+
+  /**
+   * Returns the type of the persisted buffer elements.
+   *
+   * @return the type of the persisted buffer elements.
+   */
+  public Class<E> elementType() {
+    return elementType;
+  }
+
+  /**
+   * Appends an element to the persisted buffer.
+   *
+   * <p>If {@code null} is passed in, then this method has no effect and the persisted buffer
+   * remains the same.
+   *
+   * @param element the element to add to the persisted buffer.
+   */
+  public void append(@Nullable E element) {
+    if (element != null) {
+      accessor.append(element);
+    }
+  }
+
+  /**
+   * Adds all elements of a list to the persisted buffer.
+   *
+   * <p>If {@code null} or an empty list is passed in, then this method has no effect and the
+   * persisted buffer remains the same.
+   *
+   * @param elements a list of elements to add to the persisted buffer.
+   */
+  public void appendAll(@Nullable List<E> elements) {
+    if (elements != null && !elements.isEmpty()) {
+      accessor.appendAll(elements);
+    }
+  }
+
+  /**
+   * Replace the elements in the persisted buffer with the provided list of elements.
+   *
+   * <p>If an empty list or {@code null} is passed in, this method will have the same effect as
+   * {@link #clear()}.
+   *
+   * @param elements list of elements to replace the elements in the persisted buffer with.
+   */
+  public void replaceWith(@Nullable List<E> elements) {
+    if (elements != null && !elements.isEmpty()) {
+      accessor.replaceWith(elements);
+    } else {
+      accessor.clear();
+    }
+  }
+
+  /**
+   * Gets the elements 
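
A minimal usage sketch for the primitive above (the excerpt is truncated by the archive; the function and state names here are hypothetical, and registration follows the `@Persisted` pattern described in the Javadoc):

```java
import org.apache.flink.statefun.sdk.Context;
import org.apache.flink.statefun.sdk.StatefulFunction;
import org.apache.flink.statefun.sdk.annotations.Persisted;
import org.apache.flink.statefun.sdk.state.PersistedAppendingBuffer;

// Hypothetical function that buffers every message it sees; the buffer is
// registered via @Persisted so the runtime persists it for fault tolerance.
public final class SeenMessagesFn implements StatefulFunction {

  @Persisted
  private final PersistedAppendingBuffer<String> seen =
      PersistedAppendingBuffer.of("seen-messages", String.class);

  @Override
  public void invoke(Context context, Object input) {
    seen.append(String.valueOf(input)); // append() ignores null elements
  }
}
```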

[jira] [Assigned] (FLINK-16127) Translate "Fault Tolerance Guarantees" page of connectors into Chinese

2020-02-17 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-16127:
---

Assignee: Xuannan Su

> Translate "Fault Tolerance Guarantees" page of connectors into Chinese
> --
>
> Key: FLINK-16127
> URL: https://issues.apache.org/jira/browse/FLINK-16127
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: Xuannan Su
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/connectors/guarantees.html
> The markdown file is located in flink/docs/dev/connectors/guarantees.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-16130) Translate "Common Configurations" page of "File Systems" into Chinese

2020-02-17 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-16130:
---

Assignee: Qingsheng Ren

> Translate "Common Configurations" page of "File Systems" into Chinese 
> --
>
> Key: FLINK-16130
> URL: https://issues.apache.org/jira/browse/FLINK-16130
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: Qingsheng Ren
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/ops/filesystems/common.html
> The markdown file is located in flink/docs/ops/filesystems/common.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-16129) Translate "Overview" page of "File Systems" into Chinese

2020-02-17 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-16129:
---

Assignee: Qingsheng Ren

> Translate "Overview" page of "File Systems" into Chinese 
> -
>
> Key: FLINK-16129
> URL: https://issues.apache.org/jira/browse/FLINK-16129
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: Qingsheng Ren
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/ops/filesystems/
> The markdown file is located in flink/docs/ops/filesystems/index.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-16128) Translate "Google Cloud PubSub" page into Chinese

2020-02-17 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-16128:
---

Assignee: Xuannan Su

> Translate "Google Cloud PubSub" page into Chinese
> -
>
> Key: FLINK-16128
> URL: https://issues.apache.org/jira/browse/FLINK-16128
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: Xuannan Su
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/connectors/pubsub.html
> The markdown file is located in flink/docs/dev/connectors/pubsub.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-16132) Translate "Aliyun OSS" page of "File Systems" into Chinese

2020-02-17 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-16132:
---

Assignee: Qingsheng Ren

> Translate "Aliyun OSS" page of "File Systems" into Chinese 
> ---
>
> Key: FLINK-16132
> URL: https://issues.apache.org/jira/browse/FLINK-16132
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: Qingsheng Ren
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/ops/filesystems/oss.html
> The markdown file is located in flink/docs/ops/filesystems/oss.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-16133) Translate "Azure Blob Storage" page of "File Systems" into Chinese

2020-02-17 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-16133:
---

Assignee: Qingsheng Ren

> Translate "Azure Blob Storage" page of "File Systems" into Chinese 
> ---
>
> Key: FLINK-16133
> URL: https://issues.apache.org/jira/browse/FLINK-16133
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: Qingsheng Ren
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/ops/filesystems/azure.html
> The markdown file is located in flink/docs/ops/filesystems/azure.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-16131) Translate "Amazon S3" page of "File Systems" into Chinese

2020-02-17 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-16131:
---

Assignee: Qingsheng Ren

> Translate "Amazon S3" page of "File Systems" into Chinese 
> --
>
> Key: FLINK-16131
> URL: https://issues.apache.org/jira/browse/FLINK-16131
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: Qingsheng Ren
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/ops/filesystems/s3.html
> The markdown file is located in flink/docs/ops/filesystems/s3.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #11047: [FLINK-15912][table] Add Context to TableSourceFactory and TableSinkFactory

2020-02-17 Thread GitBox
flinkbot edited a comment on issue #11047: [FLINK-15912][table] Add Context to 
TableSourceFactory and TableSinkFactory
URL: https://github.com/apache/flink/pull/11047#issuecomment-584007380
 
 
   
   ## CI report:
   
   * 86c4939042e8af2da1ac1e5900225f4f0310fa04 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148149333) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4991)
 
   * 18beb9a3c1ed78a6cbc1bc5ba9f96b51bbf8eeea Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148181501) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5000)
 
   * a9125de0859c43262d41c4cecee53ceeb807cf27 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148340324) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5041)
 
   * 52bc7dcf579ef6e5e466ec93451cbf56451b1d41 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148547890) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5093)
 
   * a307994428657e1025786975b7cf11544afa3ab8 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148567327) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5104)
 
   * 4749ac089576b36668e412acdbe56f55b4ffbdc6 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148576458) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5106)
 
   * 8c7c673b8bf46297d499db5725c127986533c8ff Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148711047) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5119)
 
   * b1f4018140c5b3b881de6da6d5f69422a4b09831 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/149097149) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5200)
 
   * 342826120f528c06b9b811e4218a33ee3e4ba2d6 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/149392860) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5263)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11051: [FLINK-15961][table-planner][table-planner-blink] Introduce Python Physical Correlate RelNodes which are containers for Python TableFunction

2020-02-17 Thread GitBox
flinkbot edited a comment on issue #11051: 
[FLINK-15961][table-planner][table-planner-blink] Introduce Python Physical 
Correlate RelNodes which are containers for Python TableFunction
URL: https://github.com/apache/flink/pull/11051#issuecomment-584123392
 
 
   
   ## CI report:
   
   * 68b071496379b090212eb961b2026578ee2c64e3 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/148190648) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5010)
 
   * 9c17e490eca0cba3ca56c8094042b7e97bb217a4 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148726395) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5121)
 
   * 67f8b75918a21c604569ef75f99663c08a26fcdf Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/149164808) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5220)
 
   * f0fbdc6d21c7c57c51366e53a149dc8b8db118f8 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/149250281) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5230)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-16128) Translate "Google Cloud PubSub" page into Chinese

2020-02-17 Thread Xuannan Su (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038856#comment-17038856
 ] 

Xuannan Su commented on FLINK-16128:


Hi [~jark], I'd like to take this issue. Could you assign it to me?

> Translate "Google Cloud PubSub" page into Chinese
> -
>
> Key: FLINK-16128
> URL: https://issues.apache.org/jira/browse/FLINK-16128
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/connectors/pubsub.html
> The markdown file is located in flink/docs/dev/connectors/pubsub.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #11117: [FLINK-16115][filesystem] Make aliyun oss filesystem could work with plugin mechanism

2020-02-17 Thread GitBox
flinkbot commented on issue #11117: [FLINK-16115][filesystem] Make aliyun oss 
filesystem could work with plugin mechanism
URL: https://github.com/apache/flink/pull/11117#issuecomment-587322531
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 998beacb3918bb269040c74b52022e8db4599956 (Tue Feb 18 
07:38:59 UTC 2020)
   
   **Warnings:**
* **1 pom.xml file was touched**: Check for build and licensing issues.
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-16115) Aliyun oss filesystem could not work with plugin mechanism

2020-02-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-16115:
---
Labels: pull-request-available  (was: )

> Aliyun oss filesystem could not work with plugin mechanism
> --
>
> Key: FLINK-16115
> URL: https://issues.apache.org/jira/browse/FLINK-16115
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.10.0
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.1, 1.11.0
>
>
> From release-1.9, Flink suggests users load all filesystems via the plugin 
> mechanism, including OSS. However, this does not work for the OSS filesystem. 
> The root cause is that it does not shade {{org.apache.flink.runtime.fs.hdfs}} 
> and {{org.apache.flink.runtime.util}}, so they will always be loaded by the 
> system classloader and throw the following exceptions.
>  
> {code:java}
> 2020-02-17 17:28:47,247 ERROR 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Could not 
> start cluster entrypoint StandaloneSessionClusterEntrypoint.
> org.apache.flink.runtime.entrypoint.ClusterEntrypointException: Failed to 
> initialize the cluster entrypoint StandaloneSessionClusterEntrypoint.
> at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:187)
> at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:518)
> at 
> org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint.main(StandaloneSessionClusterEntrypoint.java:64)
> Caused by: java.lang.NoSuchMethodError: 
> org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.<init>(Lorg/apache/flink/fs/shaded/hadoop3/org/apache/hadoop/fs/FileSystem;)V
> at 
> org.apache.flink.fs.osshadoop.OSSFileSystemFactory.create(OSSFileSystemFactory.java:85)
> at 
> org.apache.flink.core.fs.PluginFileSystemFactory.create(PluginFileSystemFactory.java:61)
> at 
> org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:441)
> at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:362)
> at org.apache.flink.core.fs.Path.getFileSystem(Path.java:298)
> at 
> org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:100)
> at 
> org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:89)
> at 
> org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:125)
> at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:305)
> at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:263)
> at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:207)
> at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$0(ClusterEntrypoint.java:169)
> at 
> org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
> at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:168)
> ... 2 more
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wangyang0918 opened a new pull request #11117: [FLINK-16115][filesystem] Make aliyun oss filesystem could work with plugin mechanism

2020-02-17 Thread GitBox
wangyang0918 opened a new pull request #11117: [FLINK-16115][filesystem] Make 
aliyun oss filesystem could work with plugin mechanism
URL: https://github.com/apache/flink/pull/11117
 
 
   
   
   ## What is the purpose of the change
   
   From release-1.9, Flink suggests users load all filesystems via the plugin 
mechanism, including OSS. However, this does not work for the OSS filesystem. 
The root cause is that it does not shade `org.apache.flink.runtime.fs.hdfs` and 
`org.apache.flink.runtime.util`, so they will always be loaded by the system 
classloader.
   What we need to do is shade the two packages 
`org.apache.flink.runtime.fs.hdfs` and `org.apache.flink.runtime.util`, just 
like s3 and azure (see the classloading sketch below).
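
   To illustrate why un-shaded classes break under the plugin mechanism, here is a simplified, hypothetical sketch of child-first plugin classloading (not Flink's actual `PluginClassLoader`). Packages on the parent-first list resolve from the system classloader, so if the plugin jar ships `org.apache.flink.runtime.fs.hdfs.*` un-shaded, the system classloader's copy wins; that copy was compiled against un-shaded Hadoop, so the constructor taking the plugin's shaded Hadoop `FileSystem` does not exist, which is the `NoSuchMethodError` in the Jira issue. Relocating (shading) those packages inside the plugin jar takes them out of the parent-first space.

```java
import java.net.URL;
import java.net.URLClassLoader;

// Simplified, hypothetical child-first plugin classloader; Flink's real
// implementation differs in detail. Parent-first prefixes delegate to the
// system classloader; everything else is looked up in the plugin jar first.
final class ChildFirstClassLoader extends URLClassLoader {

  private static final String[] PARENT_FIRST = {"java.", "org.apache.flink."};

  ChildFirstClassLoader(URL[] pluginJars, ClassLoader parent) {
    super(pluginJars, parent);
  }

  @Override
  protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
    synchronized (getClassLoadingLock(name)) {
      Class<?> c = findLoadedClass(name);
      if (c == null) {
        c = isParentFirst(name) ? super.loadClass(name, false) : loadChildFirst(name);
      }
      if (resolve) {
        resolveClass(c);
      }
      return c;
    }
  }

  private static boolean isParentFirst(String name) {
    for (String prefix : PARENT_FIRST) {
      if (name.startsWith(prefix)) {
        return true; // e.g. un-shaded org.apache.flink.runtime.fs.hdfs.* ends up here
      }
    }
    return false;
  }

  private Class<?> loadChildFirst(String name) throws ClassNotFoundException {
    try {
      return findClass(name); // plugin jar first
    } catch (ClassNotFoundException e) {
      return super.loadClass(name, false); // fall back to the parent
    }
  }
}
```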
   
   ## Brief change log
   
   * shading two packages `org.apache.flink.runtime.fs.hdfs` and 
`org.apache.flink.runtime.util`
   
   
   ## Verifying this change
   
   * Configure the HA storage to oss and manually run a standalone Flink 
cluster 
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-16132) Translate "Aliyun OSS" page of "File Systems" into Chinese

2020-02-17 Thread Qingsheng Ren (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038851#comment-17038851
 ] 

Qingsheng Ren commented on FLINK-16132:
---

Thanks [~jark]. I'd like to work on this issue.

> Translate "Aliyun OSS" page of "File Systems" into Chinese 
> ---
>
> Key: FLINK-16132
> URL: https://issues.apache.org/jira/browse/FLINK-16132
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/ops/filesystems/oss.html
> The markdown file is located in flink/docs/ops/filesystems/oss.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16133) Translate "Azure Blob Storage" page of "File Systems" into Chinese

2020-02-17 Thread Qingsheng Ren (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038852#comment-17038852
 ] 

Qingsheng Ren commented on FLINK-16133:
---

Thanks [~jark]. I'd like to work on this issue.

> Translate "Azure Blob Storage" page of "File Systems" into Chinese 
> ---
>
> Key: FLINK-16133
> URL: https://issues.apache.org/jira/browse/FLINK-16133
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/ops/filesystems/azure.html
> The markdown file is located in flink/docs/ops/filesystems/azure.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16131) Translate "Amazon S3" page of "File Systems" into Chinese

2020-02-17 Thread Qingsheng Ren (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038850#comment-17038850
 ] 

Qingsheng Ren commented on FLINK-16131:
---

Thanks [~jark]. I'd like to work on this issue.

> Translate "Amazon S3" page of "File Systems" into Chinese 
> --
>
> Key: FLINK-16131
> URL: https://issues.apache.org/jira/browse/FLINK-16131
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/ops/filesystems/s3.html
> The markdown file is located in flink/docs/ops/filesystems/s3.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16130) Translate "Common Configurations" page of "File Systems" into Chinese

2020-02-17 Thread Qingsheng Ren (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038848#comment-17038848
 ] 

Qingsheng Ren commented on FLINK-16130:
---

Thanks [~jark]. I'd like to work on this.

> Translate "Common Configurations" page of "File Systems" into Chinese 
> --
>
> Key: FLINK-16130
> URL: https://issues.apache.org/jira/browse/FLINK-16130
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/ops/filesystems/common.html
> The markdown file is located in flink/docs/ops/filesystems/common.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16127) Translate "Fault Tolerance Guarantees" page of connectors into Chinese

2020-02-17 Thread Xuannan Su (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038849#comment-17038849
 ] 

Xuannan Su commented on FLINK-16127:


Hi [~jark], I'd like to take this issue. Could you assign it to me?

> Translate "Fault Tolerance Guarantees" page of connectors into Chinese
> --
>
> Key: FLINK-16127
> URL: https://issues.apache.org/jira/browse/FLINK-16127
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/connectors/guarantees.html
> The markdown file is located in flink/docs/dev/connectors/guarantees.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16129) Translate "Overview" page of "File Systems" into Chinese

2020-02-17 Thread Qingsheng Ren (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038846#comment-17038846
 ] 

Qingsheng Ren commented on FLINK-16129:
---

Thanks [~jark]. I'd like to work on this.

> Translate "Overview" page of "File Systems" into Chinese 
> -
>
> Key: FLINK-16129
> URL: https://issues.apache.org/jira/browse/FLINK-16129
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/ops/filesystems/
> The markdown file is located in flink/docs/ops/filesystems/index.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-16126) Translate all connector related pages into Chinese

2020-02-17 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-16126:
---

Assignee: Jiangjie Qin

> Translate all connector related pages into Chinese
> --
>
> Key: FLINK-16126
> URL: https://issues.apache.org/jira/browse/FLINK-16126
> Project: Flink
>  Issue Type: Task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: Jiangjie Qin
>Priority: Major
> Fix For: 1.11.0
>
>
> Translate all connector related pages into Chinese, including pages under 
> `docs/dev/connectors/` and `docs/ops/filesystems/`. 
> This is an umbrella issue to track all relative pages.
> Connector pages under the Batch API are not in the plan, because they will be 
> dropped in the future. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14881) Upgrade AWS SDK to support "IAM Roles for Service Accounts" in AWS EKS

2020-02-17 Thread Rafi Aroch (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038842#comment-17038842
 ] 

Rafi Aroch commented on FLINK-14881:


 [~sewen] are there any concerns about adding this?

If there are no reservations, can you please assign me to this issue?

> Upgrade AWS SDK to support "IAM Roles for Service Accounts" in AWS EKS
> --
>
> Key: FLINK-14881
> URL: https://issues.apache.org/jira/browse/FLINK-14881
> Project: Flink
>  Issue Type: Improvement
>  Components: FileSystems
>Reporter: Vincent Chenal
>Priority: Major
>
> In order to use IAM Roles for Service Accounts in AWS EKS, the minimum 
> required version of the AWS SDK is 1.11.625.
> [https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts-minimum-sdk.html]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16133) Translate "Azure Blob Storage" page of "File Systems" into Chinese

2020-02-17 Thread Jark Wu (Jira)
Jark Wu created FLINK-16133:
---

 Summary: Translate "Azure Blob Storage" page of "File Systems" 
into Chinese 
 Key: FLINK-16133
 URL: https://issues.apache.org/jira/browse/FLINK-16133
 Project: Flink
  Issue Type: Sub-task
  Components: chinese-translation, Documentation
Reporter: Jark Wu


The page url is 
https://ci.apache.org/projects/flink/flink-docs-master/zh/ops/filesystems/azure.html

The markdown file is located in flink/docs/ops/filesystems/azure.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16132) Translate "Aliyun OSS" page of "File Systems" into Chinese

2020-02-17 Thread Jark Wu (Jira)
Jark Wu created FLINK-16132:
---

 Summary: Translate "Aliyun OSS" page of "File Systems" into 
Chinese 
 Key: FLINK-16132
 URL: https://issues.apache.org/jira/browse/FLINK-16132
 Project: Flink
  Issue Type: Sub-task
  Components: chinese-translation, Documentation
Reporter: Jark Wu


The page url is 
https://ci.apache.org/projects/flink/flink-docs-master/zh/ops/filesystems/oss.html

The markdown file is located in flink/docs/ops/filesystems/oss.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16130) Translate "Common Configurations" page of "File Systems" into Chinese

2020-02-17 Thread Jark Wu (Jira)
Jark Wu created FLINK-16130:
---

 Summary: Translate "Common Configurations" page of "File Systems" 
into Chinese 
 Key: FLINK-16130
 URL: https://issues.apache.org/jira/browse/FLINK-16130
 Project: Flink
  Issue Type: Sub-task
  Components: chinese-translation, Documentation
Reporter: Jark Wu


The page url is 
https://ci.apache.org/projects/flink/flink-docs-master/zh/ops/filesystems/common.html

The markdown file is located in flink/docs/ops/filesystems/common.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16131) Translate "Amazon S3" page of "File Systems" into Chinese

2020-02-17 Thread Jark Wu (Jira)
Jark Wu created FLINK-16131:
---

 Summary: Translate "Amazon S3" page of "File Systems" into Chinese 
 Key: FLINK-16131
 URL: https://issues.apache.org/jira/browse/FLINK-16131
 Project: Flink
  Issue Type: Sub-task
  Components: chinese-translation, Documentation
Reporter: Jark Wu


The page url is 
https://ci.apache.org/projects/flink/flink-docs-master/zh/ops/filesystems/s3.html

The markdown file is located in flink/docs/ops/filesystems/s3.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16129) Translate "Overview" page of "File Systems" into Chinese

2020-02-17 Thread Jark Wu (Jira)
Jark Wu created FLINK-16129:
---

 Summary: Translate "Overview" page of "File Systems" into Chinese 
 Key: FLINK-16129
 URL: https://issues.apache.org/jira/browse/FLINK-16129
 Project: Flink
  Issue Type: Sub-task
  Components: chinese-translation, Documentation
Reporter: Jark Wu


The page url is 
https://ci.apache.org/projects/flink/flink-docs-master/zh/ops/filesystems/

The markdown file is located in flink/docs/ops/filesystems/index.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16128) Translate "Google Cloud PubSub" page into Chinese

2020-02-17 Thread Jark Wu (Jira)
Jark Wu created FLINK-16128:
---

 Summary: Translate "Google Cloud PubSub" page into Chinese
 Key: FLINK-16128
 URL: https://issues.apache.org/jira/browse/FLINK-16128
 Project: Flink
  Issue Type: Sub-task
  Components: chinese-translation, Documentation
Reporter: Jark Wu


The page url is 
https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/connectors/pubsub.html
The markdown file is located in flink/docs/dev/connectors/pubsub.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16127) Translate "Fault Tolerance Guarantees" page of connectors into Chinese

2020-02-17 Thread Jark Wu (Jira)
Jark Wu created FLINK-16127:
---

 Summary: Translate "Fault Tolerance Guarantees" page of connectors 
into Chinese
 Key: FLINK-16127
 URL: https://issues.apache.org/jira/browse/FLINK-16127
 Project: Flink
  Issue Type: Sub-task
  Components: chinese-translation, Documentation
Reporter: Jark Wu


The page url is 
https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/connectors/guarantees.html
The markdown file is located in flink/docs/dev/connectors/guarantees.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (FLINK-16068) table with keyword-escaped columns and computed_column_expression columns

2020-02-17 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu resolved FLINK-16068.
-
Resolution: Fixed

Fixed in
 - master(1.11.0): 00649491e2b05bb1ade46355b3002a5dad75c7eb
 - 1.10.1: b3c035da1f0f7ef88233f4957bcf9c2c1f06310f

> table with keyword-escaped columns and computed_column_expression columns
> -
>
> Key: FLINK-16068
> URL: https://issues.apache.org/jira/browse/FLINK-16068
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: pangliang
>Assignee: Benchao Li
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.1
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> I use sql-client to create a table with a keyword-escaped column and a 
> computed_column_expression column, like this:
> {code:java}
> CREATE TABLE source_kafka (
> log STRING,
> `time` BIGINT,
> pt as proctime()
> ) WITH (
>   'connector.type' = 'kafka',   
>   'connector.version' = 'universal',
>   'connector.topic' = 'k8s-logs',
>   'connector.startup-mode' = 'latest-offset',
>   'connector.properties.zookeeper.connect' = 
> 'zk-1.zk:2181,zk-2.zk:2181,zk-3.zk:2181/kafka',
>   'connector.properties.bootstrap.servers' = 'kafka.default:9092',
>   'connector.properties.group.id' = 'testGroup',
>   'format.type'='json',
>   'format.fail-on-missing-field' = 'true',
>   'update-mode' = 'append'
> );
> {code}
> Then I simply used it:
> {code:java}
> SELECT * from source_kafka limit 10;{code}
> got an exception:
> {code:java}
> java.io.IOException: Fail to run stream sql job
>   at 
> org.apache.zeppelin.flink.sql.AbstractStreamSqlJob.run(AbstractStreamSqlJob.java:164)
>   at 
> org.apache.zeppelin.flink.FlinkStreamSqlInterpreter.callSelect(FlinkStreamSqlInterpreter.java:108)
>   at 
> org.apache.zeppelin.flink.FlinkSqlInterrpeter.callCommand(FlinkSqlInterrpeter.java:203)
>   at 
> org.apache.zeppelin.flink.FlinkSqlInterrpeter.runSqlList(FlinkSqlInterrpeter.java:151)
>   at 
> org.apache.zeppelin.flink.FlinkSqlInterrpeter.interpret(FlinkSqlInterrpeter.java:104)
>   at 
> org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:103)
>   at 
> org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:676)
>   at 
> org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:569)
>   at org.apache.zeppelin.scheduler.Job.run(Job.java:172)
>   at 
> org.apache.zeppelin.scheduler.AbstractScheduler.runJob(AbstractScheduler.java:121)
>   at 
> org.apache.zeppelin.scheduler.ParallelScheduler.lambda$runJobInScheduler$0(ParallelScheduler.java:39)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.flink.table.api.SqlParserException: SQL parse failed. 
> Encountered "time" at line 1, column 12.
> Was expecting one of:
> "ABS" ...
> "ARRAY" ...
> "AVG" ...
> "CARDINALITY" ...
> "CASE" ...
> "CAST" ...
> "CEIL" ...
> "CEILING" ...
> ..
> 
>   at 
> org.apache.flink.table.planner.calcite.CalciteParser.parse(CalciteParser.java:50)
>   at 
> org.apache.flink.table.planner.calcite.SqlExprToRexConverterImpl.convertToRexNodes(SqlExprToRexConverterImpl.java:79)
>   at 
> org.apache.flink.table.planner.plan.schema.CatalogSourceTable.toRel(CatalogSourceTable.scala:111)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.toRel(SqlToRelConverter.java:3328)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertIdentifier(SqlToRelConverter.java:2357)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2051)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2005)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectImpl(SqlToRelConverter.java:646)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertSelect(SqlToRelConverter.java:627)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:3181)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:563)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel(FlinkPlannerImpl.scala:148)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:135)
>   at 
> 

[GitHub] [flink] flinkbot edited a comment on issue #11051: [FLINK-15961][table-planner][table-planner-blink] Introduce Python Physical Correlate RelNodes which are containers for Python TableFunction

2020-02-17 Thread GitBox
flinkbot edited a comment on issue #11051: 
[FLINK-15961][table-planner][table-planner-blink] Introduce Python Physical 
Correlate RelNodes which are containers for Python TableFunction
URL: https://github.com/apache/flink/pull/11051#issuecomment-584123392
 
 
   
   ## CI report:
   
   * 68b071496379b090212eb961b2026578ee2c64e3 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/148190648) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5010)
 
   * 9c17e490eca0cba3ca56c8094042b7e97bb217a4 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148726395) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5121)
 
   * 67f8b75918a21c604569ef75f99663c08a26fcdf Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/149164808) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5220)
 
   * f0fbdc6d21c7c57c51366e53a149dc8b8db118f8 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/149250281) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5230)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11047: [FLINK-15912][table] Add Context to TableSourceFactory and TableSinkFactory

2020-02-17 Thread GitBox
flinkbot edited a comment on issue #11047: [FLINK-15912][table] Add Context to 
TableSourceFactory and TableSinkFactory
URL: https://github.com/apache/flink/pull/11047#issuecomment-584007380
 
 
   
   ## CI report:
   
   * 86c4939042e8af2da1ac1e5900225f4f0310fa04 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148149333) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4991)
 
   * 18beb9a3c1ed78a6cbc1bc5ba9f96b51bbf8eeea Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148181501) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5000)
 
   * a9125de0859c43262d41c4cecee53ceeb807cf27 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148340324) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5041)
 
   * 52bc7dcf579ef6e5e466ec93451cbf56451b1d41 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148547890) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5093)
 
   * a307994428657e1025786975b7cf11544afa3ab8 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148567327) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5104)
 
   * 4749ac089576b36668e412acdbe56f55b4ffbdc6 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148576458) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5106)
 
   * 8c7c673b8bf46297d499db5725c127986533c8ff Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148711047) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5119)
 
   * b1f4018140c5b3b881de6da6d5f69422a4b09831 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/149097149) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5200)
 
   * 342826120f528c06b9b811e4218a33ee3e4ba2d6 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/149392860) 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-12941) Translate "Amazon AWS Kinesis Streams Connector" page into Chinese

2020-02-17 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-12941:

Parent: FLINK-16126
Issue Type: Sub-task  (was: Task)

> Translate "Amazon AWS Kinesis Streams Connector" page into Chinese
> --
>
> Key: FLINK-12941
> URL: https://issues.apache.org/jira/browse/FLINK-12941
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: Jasper Yue
>Priority: Minor
>
> Translate the internal page 
> "https://ci.apache.org/projects/flink/flink-docs-master/dev/connectors/kinesis.html;
>  into Chinese.
>  
> The doc located in "flink/docs/dev/connectors/kinesis.zh.md"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-12942) Translate "Elasticsearch Connector" page into Chinese

2020-02-17 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-12942:

Parent: FLINK-16126
Issue Type: Sub-task  (was: Task)

> Translate "Elasticsearch Connector" page into Chinese
> -
>
> Key: FLINK-12942
> URL: https://issues.apache.org/jira/browse/FLINK-12942
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: Forward Xu
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Translate the internal page 
> "https://ci.apache.org/projects/flink/flink-docs-master/dev/connectors/elasticsearch.html"
>  into Chinese.
>  
> The doc is located in "flink/docs/dev/connectors/elasticsearch.zh.md"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-12941) Translate "Amazon AWS Kinesis Streams Connector" page into Chinese

2020-02-17 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-12941:

Parent: (was: FLINK-11529)
Issue Type: Task  (was: Sub-task)

> Translate "Amazon AWS Kinesis Streams Connector" page into Chinese
> --
>
> Key: FLINK-12941
> URL: https://issues.apache.org/jira/browse/FLINK-12941
> Project: Flink
>  Issue Type: Task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: Jasper Yue
>Priority: Minor
>
> Translate the internal page 
> "https://ci.apache.org/projects/flink/flink-docs-master/dev/connectors/kinesis.html"
>  into Chinese.
>  
> The doc is located in "flink/docs/dev/connectors/kinesis.zh.md"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-12940) Translate "Apache Cassandra Connector" page into Chinese

2020-02-17 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-12940:

Parent: FLINK-16126
Issue Type: Sub-task  (was: Task)

> Translate "Apache Cassandra Connector" page into Chinese
> 
>
> Key: FLINK-12940
> URL: https://issues.apache.org/jira/browse/FLINK-12940
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: Forward Xu
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Translate the internal page 
> "https://ci.apache.org/projects/flink/flink-docs-master/dev/connectors/cassandra.html"
>  into Chinese.
>  
> The doc is located in "flink/docs/dev/connectors/cassandra.zh.md"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-12942) Translate "Elasticsearch Connector" page into Chinese

2020-02-17 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-12942:

Parent: (was: FLINK-11529)
Issue Type: Task  (was: Sub-task)

> Translate "Elasticsearch Connector" page into Chinese
> -
>
> Key: FLINK-12942
> URL: https://issues.apache.org/jira/browse/FLINK-12942
> Project: Flink
>  Issue Type: Task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: Forward Xu
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Translate the internal page 
> "https://ci.apache.org/projects/flink/flink-docs-master/dev/connectors/elasticsearch.html"
>  into Chinese.
>  
> The doc is located in "flink/docs/dev/connectors/elasticsearch.zh.md"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-12940) Translate "Apache Cassandra Connector" page into Chinese

2020-02-17 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-12940:

Parent: (was: FLINK-11529)
Issue Type: Task  (was: Sub-task)

> Translate "Apache Cassandra Connector" page into Chinese
> 
>
> Key: FLINK-12940
> URL: https://issues.apache.org/jira/browse/FLINK-12940
> Project: Flink
>  Issue Type: Task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: Forward Xu
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Translate the internal page 
> "https://ci.apache.org/projects/flink/flink-docs-master/dev/connectors/cassandra.html"
>  into Chinese.
>  
> The doc is located in "flink/docs/dev/connectors/cassandra.zh.md"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16126) Translate all connector related pages into Chinese

2020-02-17 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-16126:

Description: 
Translate all connector-related pages into Chinese, including pages under 
`docs/dev/connectors/` and `docs/ops/filesystems/`. 

This is an umbrella issue to track all related pages.

Connector pages under the Batch API are not in the plan, because they will be 
dropped in the future. 

  was:
Translate all connector-related pages into Chinese, including pages under 
`docs/dev/connectors/` and `docs/ops/filesystems/`. 

Connector pages under the Batch API are not in the plan, because they will be 
dropped in the future. 


> Translate all connector related pages into Chinese
> --
>
> Key: FLINK-16126
> URL: https://issues.apache.org/jira/browse/FLINK-16126
> Project: Flink
>  Issue Type: Task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Priority: Major
> Fix For: 1.11.0
>
>
> Translate all connector-related pages into Chinese, including pages under 
> `docs/dev/connectors/` and `docs/ops/filesystems/`. 
> This is an umbrella issue to track all related pages.
> Connector pages under the Batch API are not in the plan, because they will be 
> dropped in the future. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16126) Translate all connector related pages into Chinese

2020-02-17 Thread Jark Wu (Jira)
Jark Wu created FLINK-16126:
---

 Summary: Translate all connector related pages into Chinese
 Key: FLINK-16126
 URL: https://issues.apache.org/jira/browse/FLINK-16126
 Project: Flink
  Issue Type: Task
  Components: chinese-translation, Documentation
Reporter: Jark Wu
 Fix For: 1.11.0


Translate all connector-related pages into Chinese, including pages under 
`docs/dev/connectors/` and `docs/ops/filesystems/`. 

Connector pages under the Batch API are not in the plan, because they will be 
dropped in the future. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16116) Remove shading from oss filesystems build

2020-02-17 Thread Yu Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated FLINK-16116:
--
Fix Version/s: 1.11.0
   1.10.1
Affects Version/s: 1.10.0

> Remove shading from oss filesystems build
> -
>
> Key: FLINK-16116
> URL: https://issues.apache.org/jira/browse/FLINK-16116
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.10.0
>Reporter: Yang Wang
>Priority: Major
> Fix For: 1.10.1, 1.11.0
>
>
> Since Flink will use plugins to load all the filesystems, class conflicts 
> will not be a problem. So, just like S3, I suggest removing the shading for 
> the oss filesystem.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16115) Aliyun oss filesystem could not work with plugin mechanism

2020-02-17 Thread Yu Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated FLINK-16115:
--
Fix Version/s: 1.11.0
   1.10.1

[~fly_in_gis] Just assigned to you, thanks for tracking and volunteering to fix 
the issue.

> Aliyun oss filesystem could not work with plugin mechanism
> --
>
> Key: FLINK-16115
> URL: https://issues.apache.org/jira/browse/FLINK-16115
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.10.0
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Critical
> Fix For: 1.10.1, 1.11.0
>
>
> Since release-1.9, Flink has suggested that users load all filesystems as 
> plugins, including oss. However, this does not work for the oss filesystem. 
> The root cause is that it does not shade {{org.apache.flink.runtime.fs.hdfs}} 
> and {{org.apache.flink.runtime.util}}, so they will always be loaded by the 
> system classloader and throw the following exceptions.
>  
> {code:java}
> 2020-02-17 17:28:47,247 ERROR 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Could not 
> start cluster entrypoint StandaloneSessionClusterEntrypoint.
> org.apache.flink.runtime.entrypoint.ClusterEntrypointException: Failed to 
> initialize the cluster entrypoint StandaloneSessionClusterEntrypoint.
> at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:187)
> at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:518)
> at 
> org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint.main(StandaloneSessionClusterEntrypoint.java:64)
> Caused by: java.lang.NoSuchMethodError: 
> org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.<init>(Lorg/apache/flink/fs/shaded/hadoop3/org/apache/hadoop/fs/FileSystem;)V
> at 
> org.apache.flink.fs.osshadoop.OSSFileSystemFactory.create(OSSFileSystemFactory.java:85)
> at 
> org.apache.flink.core.fs.PluginFileSystemFactory.create(PluginFileSystemFactory.java:61)
> at 
> org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:441)
> at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:362)
> at org.apache.flink.core.fs.Path.getFileSystem(Path.java:298)
> at 
> org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:100)
> at 
> org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:89)
> at 
> org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:125)
> at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:305)
> at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:263)
> at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:207)
> at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$0(ClusterEntrypoint.java:169)
> at 
> org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
> at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:168)
> ... 2 more
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
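
For readers tracing the bug: the plugin mechanism loads each filesystem in a 
child-first classloader, so classes found in the plugin jar win over the 
parent's copies, while a whitelist of shared packages is still delegated to 
the parent. Anything the plugin was compiled against but neither shades nor 
bundles resolves to the parent's copy, which is how the {{NoSuchMethodError}} 
above arises. A minimal, self-contained sketch of such a loader in plain Java 
(all names are illustrative, not Flink's actual implementation):

{code:java}
import java.net.URL;
import java.net.URLClassLoader;

// Child-first classloader sketch: classes are looked up in the plugin jars
// first; only whitelisted prefixes are delegated straight to the parent.
final class ChildFirstClassLoader extends URLClassLoader {

    private final String[] parentFirstPrefixes;

    ChildFirstClassLoader(URL[] pluginJars, ClassLoader parent, String[] parentFirstPrefixes) {
        super(pluginJars, parent);
        this.parentFirstPrefixes = parentFirstPrefixes;
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        synchronized (getClassLoadingLock(name)) {
            Class<?> loaded = findLoadedClass(name);
            if (loaded != null) {
                return loaded;
            }
            // Shared API packages must come from the parent so types stay compatible.
            for (String prefix : parentFirstPrefixes) {
                if (name.startsWith(prefix)) {
                    return super.loadClass(name, resolve);
                }
            }
            try {
                Class<?> c = findClass(name); // plugin jars first
                if (resolve) {
                    resolveClass(c);
                }
                return c;
            } catch (ClassNotFoundException e) {
                return super.loadClass(name, resolve); // fall back to the parent
            }
        }
    }
}
{code}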


[jira] [Assigned] (FLINK-16115) Aliyun oss filesystem could not work with plugin mechanism

2020-02-17 Thread Yu Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li reassigned FLINK-16115:
-

Assignee: Yang Wang

> Aliyun oss filesystem could not work with plugin mechanism
> --
>
> Key: FLINK-16115
> URL: https://issues.apache.org/jira/browse/FLINK-16115
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.10.0
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Critical
>
> Since release-1.9, Flink has suggested that users load all filesystems as 
> plugins, including oss. However, this does not work for the oss filesystem. 
> The root cause is that it does not shade {{org.apache.flink.runtime.fs.hdfs}} 
> and {{org.apache.flink.runtime.util}}, so they will always be loaded by the 
> system classloader and throw the following exceptions.
>  
> {code:java}
> 2020-02-17 17:28:47,247 ERROR 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Could not 
> start cluster entrypoint StandaloneSessionClusterEntrypoint.
> org.apache.flink.runtime.entrypoint.ClusterEntrypointException: Failed to 
> initialize the cluster entrypoint StandaloneSessionClusterEntrypoint.
> at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:187)
> at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:518)
> at 
> org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint.main(StandaloneSessionClusterEntrypoint.java:64)
> Caused by: java.lang.NoSuchMethodError: 
> org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.<init>(Lorg/apache/flink/fs/shaded/hadoop3/org/apache/hadoop/fs/FileSystem;)V
> at 
> org.apache.flink.fs.osshadoop.OSSFileSystemFactory.create(OSSFileSystemFactory.java:85)
> at 
> org.apache.flink.core.fs.PluginFileSystemFactory.create(PluginFileSystemFactory.java:61)
> at 
> org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:441)
> at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:362)
> at org.apache.flink.core.fs.Path.getFileSystem(Path.java:298)
> at 
> org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:100)
> at 
> org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:89)
> at 
> org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:125)
> at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:305)
> at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:263)
> at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:207)
> at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$0(ClusterEntrypoint.java:169)
> at 
> org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
> at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:168)
> ... 2 more
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (FLINK-15988) Make JsonRowSerializationSchema's constructor private

2020-02-17 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu resolved FLINK-15988.
-
Fix Version/s: 1.11.0
   Resolution: Fixed

Fixed in master(1.11.0): b43701f88fade8e927f3c50bad9544794075decb

> Make JsonRowSerializationSchema's constructor private 
> --
>
> Key: FLINK-15988
> URL: https://issues.apache.org/jira/browse/FLINK-15988
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Benchao Li
>Assignee: Benchao Li
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The public constructor in {{JsonRowSerializationSchema}} has been deprecated 
> since 1.9.0, and leaves a TODO to make it private.
> IMO, it's OK to make it private in the 1.11 release.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
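
The change in question steers users from the deprecated public constructor to 
the builder. A self-contained sketch of the pattern on a stand-in class (the 
real class and its builder methods live in Flink's JSON format module; the 
names below are illustrative):

{code:java}
// Constructor-made-private plus builder, sketched on a stand-in class.
public final class RowSchemaExample {

    private final String typeInfo;

    // The public constructor becomes private; the builder is the only entry point.
    private RowSchemaExample(String typeInfo) {
        this.typeInfo = typeInfo;
    }

    public static Builder builder() {
        return new Builder();
    }

    public static final class Builder {
        private String typeInfo;

        public Builder withTypeInfo(String typeInfo) {
            this.typeInfo = typeInfo;
            return this;
        }

        public RowSchemaExample build() {
            return new RowSchemaExample(typeInfo);
        }
    }
}

// Usage: RowSchemaExample.builder().withTypeInfo("ROW<f0 INT>").build();
{code}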


[GitHub] [flink] wuchong merged pull request #11080: [FLINK-15988][json] Make JsonRowSerializationSchema's constructor private

2020-02-17 Thread GitBox
wuchong merged pull request #11080: [FLINK-15988][json] Make 
JsonRowSerializationSchema's constructor private
URL: https://github.com/apache/flink/pull/11080
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong merged pull request #11101: [FLINK-16068][table-planner-blink] Fix computed column with escaped k…

2020-02-17 Thread GitBox
wuchong merged pull request #11101: [FLINK-16068][table-planner-blink] Fix 
computed column with escaped k…
URL: https://github.com/apache/flink/pull/11101
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] hequn8128 commented on issue #11051: [FLINK-15961][table-planner][table-planner-blink] Introduce Python Physical Correlate RelNodes which are containers for Python TableFunction

2020-02-17 Thread GitBox
hequn8128 commented on issue #11051: 
[FLINK-15961][table-planner][table-planner-blink] Introduce Python Physical 
Correlate RelNodes which are containers for Python TableFunction
URL: https://github.com/apache/flink/pull/11051#issuecomment-587302161
 
 
   @flinkbot run travis


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11047: [FLINK-15912][table] Add Context to TableSourceFactory and TableSinkFactory

2020-02-17 Thread GitBox
flinkbot edited a comment on issue #11047: [FLINK-15912][table] Add Context to 
TableSourceFactory and TableSinkFactory
URL: https://github.com/apache/flink/pull/11047#issuecomment-584007380
 
 
   
   ## CI report:
   
   * 86c4939042e8af2da1ac1e5900225f4f0310fa04 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148149333) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4991)
 
   * 18beb9a3c1ed78a6cbc1bc5ba9f96b51bbf8eeea Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148181501) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5000)
 
   * a9125de0859c43262d41c4cecee53ceeb807cf27 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148340324) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5041)
 
   * 52bc7dcf579ef6e5e466ec93451cbf56451b1d41 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148547890) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5093)
 
   * a307994428657e1025786975b7cf11544afa3ab8 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148567327) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5104)
 
   * 4749ac089576b36668e412acdbe56f55b4ffbdc6 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148576458) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5106)
 
   * 8c7c673b8bf46297d499db5725c127986533c8ff Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148711047) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5119)
 
   * b1f4018140c5b3b881de6da6d5f69422a4b09831 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/149097149) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5200)
 
   * 342826120f528c06b9b811e4218a33ee3e4ba2d6 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-16125) Make zookeeper.connect optional for Kafka connectors

2020-02-17 Thread Jiangjie Qin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038796#comment-17038796
 ] 

Jiangjie Qin commented on FLINK-16125:
--

Given that this is an accidental, unexpected change, I'd suggest backporting it 
to 1.10 as well.

> Make zookeeper.connect optional for Kafka connectors
> 
>
> Key: FLINK-16125
> URL: https://issues.apache.org/jira/browse/FLINK-16125
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.10.0
>Reporter: Jiangjie Qin
>Assignee: Qingsheng Ren
>Priority: Major
>
> FLINK-14649 accidentally changed the connector option {{zookeeper.connect}} 
> from optional to required for all the Kafka connector versions, while it is 
> only required for 0.8. 
> The fix would be to make it optional again. This does mean that people who are 
> using Kafka 0.8 might miss this option and get an error from Kafka code 
> instead of Flink code, but given that Kafka 0.8 probably has a small user 
> base now, and users will still get an error either way, I think it is fine.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
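
What "optional" means here, in a self-contained sketch (the helper and the 
required-for-0.8 flag are illustrative, not the connector's real validator):

{code:java}
import java.util.Map;
import java.util.Optional;

// Sketch: only Kafka 0.8 treats zookeeper.connect as required; all other
// versions get an empty Optional instead of a validation failure.
final class ZookeeperConnectCheck {

    static Optional<String> zookeeperConnect(Map<String, String> props, boolean isKafka08) {
        String value = props.get("zookeeper.connect");
        if (value == null && isKafka08) {
            throw new IllegalArgumentException("zookeeper.connect is required for Kafka 0.8");
        }
        return Optional.ofNullable(value);
    }
}
{code}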


[GitHub] [flink] flinkbot edited a comment on issue #11116: [FLINK-15349] add 'create catalog' DDL to blink planner

2020-02-17 Thread GitBox
flinkbot edited a comment on issue #11116: [FLINK-15349] add 'create catalog' 
DDL to blink planner
URL: https://github.com/apache/flink/pull/11116#issuecomment-587279527
 
 
   
   ## CI report:
   
   * 7d09ea38e74e801907d3f1da660c41f5cf739a29 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/149381924) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5262)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11115: [FLINK-15710] Update BucketingSinkMigrationTest to restore from 1.10 savepoint

2020-02-17 Thread GitBox
flinkbot edited a comment on issue #11115: [FLINK-15710] Update 
BucketingSinkMigrationTest to restore from 1.10 savepoint
URL: https://github.com/apache/flink/pull/11115#issuecomment-587272600
 
 
   
   ## CI report:
   
   * 6dc4e6aa324cff1a51061ac1db4add02e346ef9d Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/149379567) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5261)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] libenchao commented on a change in pull request #11116: [FLINK-15349] add 'create catalog' DDL to blink planner

2020-02-17 Thread GitBox
libenchao commented on a change in pull request #11116: [FLINK-15349] add 
'create catalog' DDL to blink planner
URL: https://github.com/apache/flink/pull/11116#discussion_r380455859
 
 

 ##
 File path: 
flink-table/flink-sql-parser/src/main/codegen/includes/parserImpls.ftl
 ##
 @@ -56,6 +56,31 @@ SqlUseCatalog SqlUseCatalog() :
 }
 
 /**
+* Parses a create catalog statement.
+* CREATE CATALOG catalog_name [WITH (property_name=property_value, ...)];
+*/
+SqlCreate SqlCreateCatalog(Span s, boolean replace) :
+{
+SqlParserPos startPos;
+SqlIdentifier catalogName;
+SqlNodeList propertyList = SqlNodeList.EMPTY;
+}
+{
+<CATALOG> { startPos = getPos(); }
+catalogName = CompoundIdentifier()
+[
+<WITH>
+propertyList = TableProperties()
+]
+{
+return new SqlCreateCatalog(startPos.plus(getPos()),
+catalogName,
+propertyList);
+}
+}
+
+
+/**
 
 Review comment:
   Seems like an unintentional indent.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] libenchao commented on a change in pull request #11116: [FLINK-15349] add 'create catalog' DDL to blink planner

2020-02-17 Thread GitBox
libenchao commented on a change in pull request #11116: [FLINK-15349] add 
'create catalog' DDL to blink planner
URL: https://github.com/apache/flink/pull/11116#discussion_r380455435
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/operations/SqlToOperationConverter.java
 ##
 @@ -358,6 +364,22 @@ private Operation convertUseCatalog(SqlUseCatalog 
useCatalog) {
return new UseCatalogOperation(useCatalog.getCatalogName());
}
 
+   /** Convert CREATE CATALOG statement. */
+   private Operation convertCreateCatalog(SqlCreateCatalog 
sqlCreateCatalog) {
+   String catalogName = sqlCreateCatalog.catalogName();
+
+   // set with properties
+   Map<String, String> properties = new HashMap<>();
+   sqlCreateCatalog.getPropertyList().getList().forEach(p ->
+   properties.put(((SqlTableOption) p).getKeyString(), 
((SqlTableOption) p).getValueString()));
 
 Review comment:
   Maybe we can use `Collectors.toMap`?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
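
For reference, the `Collectors.toMap` variant being suggested would look 
roughly like this (the `SqlTableOption` accessors are the ones visible in the 
diff; the import paths are assumptions):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

import org.apache.calcite.sql.SqlNode;
import org.apache.flink.sql.parser.ddl.SqlTableOption; // assumed package

// Sketch of the suggested rewrite of the forEach/put loop.
static Map<String, String> toProperties(List<SqlNode> propertyList) {
    return propertyList.stream()
            .map(node -> (SqlTableOption) node)
            .collect(Collectors.toMap(
                    SqlTableOption::getKeyString,
                    SqlTableOption::getValueString));
}
```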


[GitHub] [flink] flinkbot edited a comment on issue #11115: [FLINK-15710] Update BucketingSinkMigrationTest to restore from 1.10 savepoint

2020-02-17 Thread GitBox
flinkbot edited a comment on issue #11115: [FLINK-15710] Update 
BucketingSinkMigrationTest to restore from 1.10 savepoint
URL: https://github.com/apache/flink/pull/11115#issuecomment-587272600
 
 
   
   ## CI report:
   
   * 6dc4e6aa324cff1a51061ac1db4add02e346ef9d Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/149379567) 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #11116: [FLINK-15349] add 'create catalog' DDL to blink planner

2020-02-17 Thread GitBox
flinkbot commented on issue #11116: [FLINK-15349] add 'create catalog' DDL to 
blink planner
URL: https://github.com/apache/flink/pull/11116#issuecomment-587279527
 
 
   
   ## CI report:
   
   * 7d09ea38e74e801907d3f1da660c41f5cf739a29 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on a change in pull request #11047: [FLINK-15912][table] Add Context to TableSourceFactory and TableSinkFactory

2020-02-17 Thread GitBox
JingsongLi commented on a change in pull request #11047: [FLINK-15912][table] 
Add Context to TableSourceFactory and TableSinkFactory
URL: https://github.com/apache/flink/pull/11047#discussion_r380450846
 
 

 ##
 File path: 
flink-table/flink-table-api-java-bridge/src/main/java/org/apache/flink/table/factories/StreamTableSourceFactory.java
 ##
 @@ -39,7 +39,20 @@
 * @param properties normalized properties describing a stream table 
source.
 * @return the configured stream table source.
 */
-   StreamTableSource<T> createStreamTableSource(Map<String, String> 
properties);
+   default StreamTableSource<T> createStreamTableSource(Map<String, String> properties) {
+   return null;
+   }
+
+   /**
+* Creates and configures a {@link StreamTableSource} based on the given
+{@link Context}.
+*
+* @param context context of this table source.
+* @return the configured table source.
+*/
+   default StreamTableSource<T> createStreamTableSource(Context context) {
 
 Review comment:
   If a user uses `TableSource`, there is no need to use 
`StreamTableSourceFactory`.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
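
To make the trade-off concrete: under the interface in the diff, a factory 
that only cares about the new variant would override just the `Context` method 
plus the usual discovery methods. A rough sketch (`MySource`, the connector 
key, and the `Context` accessors are placeholders the implementor would 
supply):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.flink.types.Row;

// Sketch: a factory implementing only the new Context-based creation method.
public class MyStreamTableSourceFactory implements StreamTableSourceFactory<Row> {

    @Override
    public StreamTableSource<Row> createStreamTableSource(Context context) {
        // The context carries what the flat property map used to: the resolved
        // table's schema and options (exact accessors are an assumption here).
        return new MySource(context);
    }

    @Override
    public Map<String, String> requiredContext() {
        Map<String, String> context = new HashMap<>();
        context.put("connector.type", "my-source"); // illustrative key/value
        return context;
    }

    @Override
    public List<String> supportedProperties() {
        return Collections.singletonList("*");
    }
}
```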


[GitHub] [flink-statefun] tzulitai commented on issue #25: [FLINK-16106] Add PersistedList state primitive

2020-02-17 Thread GitBox
tzulitai commented on issue #25: [FLINK-16106] Add PersistedList state primitive
URL: https://github.com/apache/flink-statefun/pull/25#issuecomment-587273420
 
 
   @igalshilman as discussed offline, I've changed the primitive name to 
`PersistedAppendingBuffer`, and also emphasized the contracts of all supported 
operations a bit more in the Javadocs.
   The most important part to review, IMO, would be the Javadocs of the 
operations (see the method Javadocs of `PersistedAppendingBuffer`) so that we 
can pin down the contracts that we want to support. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
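
For readers without the PR open: the primitive under discussion is an 
append-only buffer, so the contract worth pinning down is roughly the one 
below (a self-contained stand-in; the real statefun type and its method names 
may differ):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Stand-in for an append-only persisted buffer: append and clear are the only
// mutations, and view() exposes everything appended so far, in order.
final class AppendingBufferSketch<E> {

    private final List<E> elements = new ArrayList<>();

    void append(E element) {
        elements.add(element);
    }

    Iterable<E> view() {
        return Collections.unmodifiableList(elements);
    }

    void clear() {
        elements.clear();
    }
}
```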


[GitHub] [flink] flinkbot commented on issue #11115: [FLINK-15710] Update BucketingSinkMigrationTest to restore from 1.10 savepoint

2020-02-17 Thread GitBox
flinkbot commented on issue #11115: [FLINK-15710] Update 
BucketingSinkMigrationTest to restore from 1.10 savepoint
URL: https://github.com/apache/flink/pull/11115#issuecomment-587272600
 
 
   
   ## CI report:
   
   * 6dc4e6aa324cff1a51061ac1db4add02e346ef9d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on a change in pull request #11047: [FLINK-15912][table] Add Context to TableSourceFactory and TableSinkFactory

2020-02-17 Thread GitBox
JingsongLi commented on a change in pull request #11047: [FLINK-15912][table] 
Add Context to TableSourceFactory and TableSinkFactory
URL: https://github.com/apache/flink/pull/11047#discussion_r380448321
 
 

 ##
 File path: 
flink-table/flink-table-api-java-bridge/src/main/java/org/apache/flink/table/factories/StreamTableSourceFactory.java
 ##
 @@ -48,4 +61,12 @@
 default TableSource<T> createTableSource(Map<String, String> 
properties) {
return createStreamTableSource(properties);
}
+
+   /**
+* Only create a stream table source.
+*/
+   @Override
+   default TableSource<T> createTableSource(Context context) {
+   return createStreamTableSource(context);
 
 Review comment:
   Should we indicate that users need to implement `createStreamTableSource(context)`?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #11116: [FLINK-15349] add 'create catalog' DDL to blink planner

2020-02-17 Thread GitBox
flinkbot commented on issue #11116: [FLINK-15349] add 'create catalog' DDL to 
blink planner
URL: https://github.com/apache/flink/pull/11116#issuecomment-587270611
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 7d09ea38e74e801907d3f1da660c41f5cf739a29 (Tue Feb 18 
04:18:16 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-15349) add "create catalog" DDL

2020-02-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15349:
---
Labels: pull-request-available  (was: )

> add "create catalog" DDL
> 
>
> Key: FLINK-15349
> URL: https://issues.apache.org/jira/browse/FLINK-15349
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP+69+-+Flink+SQL+DDL+Enhancement
> Some customers who run internal streaming platforms requested this feature, 
> as it's currently not possible for a platform to load catalogs dynamically 
> at runtime via the SQL client YAML. Catalog DDL will come into play here.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] bowenli86 opened a new pull request #11116: [FLINK-15349] add 'create catalog' DDL to blink planner

2020-02-17 Thread GitBox
bowenli86 opened a new pull request #11116: [FLINK-15349] add 'create catalog' 
DDL to blink planner
URL: https://github.com/apache/flink/pull/11116
 
 
   ## What is the purpose of the change
   
   add 'create catalog' DDL to blink planner
   
   ## Brief change log
   
   - add ddl to sql parser and table environment
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
   - FlinkSqlParserImplTest
   - CatalogITCase in blink planner
   
   ## Does this pull request potentially affect one of the following parts:
   
   n/a
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes)
 - If yes, how is the feature documented? (docs)
   
   Docs will be in a separate PR.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
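
Once this lands, the DDL should be usable from the Table API's SQL entry 
point. A hedged usage sketch (the catalog type, the property key, and routing 
the statements through `sqlUpdate` in this version are assumptions):

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

// Sketch: registering a catalog via DDL and switching to it.
TableEnvironment tEnv = TableEnvironment.create(
        EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build());

tEnv.sqlUpdate("CREATE CATALOG my_catalog WITH ('type' = 'generic_in_memory')");
tEnv.sqlUpdate("USE CATALOG my_catalog");
```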


[GitHub] [flink] flinkbot edited a comment on issue #9782: [FLINK-14241][test] Add aarch64 support for container e2e test

2020-02-17 Thread GitBox
flinkbot edited a comment on issue #9782: [FLINK-14241][test] Add aarch64 
support for container e2e test
URL: https://github.com/apache/flink/pull/9782#issuecomment-535826739
 
 
   
   ## CI report:
   
   * d48b95539070679639d5e8c4e640b9a710d7 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/129403938) 
   * bc7ff380b3c3deb9751c0a596c8fef46c3b48ef3 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/129413892) 
   * 58fe983f436f82e015d7c3635708d60235b9f078 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/131251050) 
   * c568fa423b04cccfd439fa5aa0a9bd9d032806f7 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/131302469) 
   * aa2b0e844d60c92dfcacb6e226034a9d1457298a Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/131417190) 
   * 05b11cacd5473e9a0248d4eb7d02761d5e3f427e Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/138967122) 
   * 938ebf9eb90ebbd2ec75dd8d709d3e2990c68319 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/139079022) 
   * 8c72f98dee4218e65841d5f8850dc8063b16439a Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/139083440) 
   * 0739a623232a0c45e761c2db3f7fd4eba8d84bce Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/141526489) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3703)
 
   * 515f012f8565b4a86959e77a6d19b94ce78d3cbc Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142068529) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3846)
 
   * 408305ba95416d7da6fb0f7143927d3f4123633b Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/143517119) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4181)
 
   * d8fa507e5175c73e1213f3c6ffe37a87dcd2ac72 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/145299936) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4515)
 
   * e60d5318b1ebef839fb7b9fcf66e6c7233f9dc22 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/149374761) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5260)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #11047: [FLINK-15912][table] Add Context to TableSourceFactory and TableSinkFactory

2020-02-17 Thread GitBox
wuchong commented on a change in pull request #11047: [FLINK-15912][table] Add 
Context to TableSourceFactory and TableSinkFactory
URL: https://github.com/apache/flink/pull/11047#discussion_r380447021
 
 

 ##
 File path: 
flink-table/flink-table-api-java-bridge/src/main/java/org/apache/flink/table/factories/StreamTableSourceFactory.java
 ##
 @@ -39,7 +39,20 @@
 * @param properties normalized properties describing a stream table 
source.
 * @return the configured stream table source.
 */
-   StreamTableSource<T> createStreamTableSource(Map<String, String> 
properties);
+   default StreamTableSource<T> createStreamTableSource(Map<String, String> properties) {
+   return null;
+   }
+
+   /**
+* Creates and configures a {@link StreamTableSource} based on the given
+{@link Context}.
+*
+* @param context context of this table source.
+* @return the configured table source.
+*/
+   default StreamTableSource<T> createStreamTableSource(Context context) {
 
 Review comment:
   But they can still use the `TableSource<T> createTableSource(Context context)` 
interface.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-15349) add "create catalog" DDL

2020-02-17 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-15349:
-
Summary: add "create catalog" DDL  (was: add "create catalog" DDL support)

> add "create catalog" DDL
> 
>
> Key: FLINK-15349
> URL: https://issues.apache.org/jira/browse/FLINK-15349
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Critical
> Fix For: 1.11.0
>
>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP+69+-+Flink+SQL+DDL+Enhancement
> Some customers who run internal streaming platforms requested this feature, 
> as it's currently not possible for a platform to load catalogs dynamically 
> at runtime via the SQL client YAML. Catalog DDL will come into play here.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-16125) Make zookeeper.connect optional for Kafka connectors

2020-02-17 Thread Jiangjie Qin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin reassigned FLINK-16125:


Assignee: Qingsheng Ren

> Make zookeeper.connect optional for Kafka connectors
> 
>
> Key: FLINK-16125
> URL: https://issues.apache.org/jira/browse/FLINK-16125
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.10.0
>Reporter: Jiangjie Qin
>Assignee: Qingsheng Ren
>Priority: Major
>
> FLINK-14649 accidentally changed the connector option {{zookeeper.connect}} 
> from optional to required for all the Kafka connector versions, while it is 
> only required for 0.8. 
> The fix would be to make it optional again. This does mean that people who are 
> using Kafka 0.8 might miss this option and get an error from Kafka code 
> instead of Flink code, but given that Kafka 0.8 probably has a small user 
> base now, and users will still get an error either way, I think it is fine.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-16125) Make zookeeper.connect optional for Kafka connectors

2020-02-17 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038776#comment-17038776
 ] 

Jark Wu edited comment on FLINK-16125 at 2/18/20 4:10 AM:
--

Thanks [~becket_qin]. If {{zookeeper.connect}} is only required for 0.8, and we 
only support 0.10+ as of 1.11, maybe we can make it optional? Or do you think 
we should fix it in 1.10 as well? 


was (Author: jark):
Thanks [~becket_qin]. If {{zookeeper.connect}} is only required for 0.8, and we 
only support 0.10+ as of 1.11, maybe we can make it optional? Or do you think 
we should fix it in 1.10 too? 

> Make zookeeper.connect optional for Kafka connectors
> 
>
> Key: FLINK-16125
> URL: https://issues.apache.org/jira/browse/FLINK-16125
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.10.0
>Reporter: Jiangjie Qin
>Priority: Major
>
> FLINK-14649 accidentally changed the connector option {{zookeeper.connect}} 
> from optional to required for all the Kafka connector versions, while it is 
> only required for 0.8. 
> The fix would be to make it optional again. This does mean that people who are 
> using Kafka 0.8 might miss this option and get an error from Kafka code 
> instead of Flink code, but given that Kafka 0.8 probably has a small user 
> base now, and users will still get an error either way, I think it is fine.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16125) Make zookeeper.connect optional for Kafka connectors

2020-02-17 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038776#comment-17038776
 ] 

Jark Wu commented on FLINK-16125:
-

Thanks [~becket_qin]. If {{zookeeper.connect}} is only required for 0.8, and we 
only support 0.10+ as of 1.11, maybe we can make it optional? Or do you think 
we should fix it in 1.10 too? 

> Make zookeeper.connect optional for Kafka connectors
> 
>
> Key: FLINK-16125
> URL: https://issues.apache.org/jira/browse/FLINK-16125
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.10.0
>Reporter: Jiangjie Qin
>Priority: Major
>
> FLINK-14649 accidentally changed the connector option {{zookeeper.connect}} 
> from optional to required for all the Kafka connector versions, while it is 
> only required for 0.8. 
> The fix would be to make it optional again. This does mean that people who are 
> using Kafka 0.8 might miss this option and get an error from Kafka code 
> instead of Flink code, but given that Kafka 0.8 probably has a small user 
> base now, and users will still get an error either way, I think it is fine.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi commented on a change in pull request #11047: [FLINK-15912][table] Add Context to TableSourceFactory and TableSinkFactory

2020-02-17 Thread GitBox
JingsongLi commented on a change in pull request #11047: [FLINK-15912][table] 
Add Context to TableSourceFactory and TableSinkFactory
URL: https://github.com/apache/flink/pull/11047#discussion_r380445524
 
 

 ##
 File path: 
flink-table/flink-table-api-java-bridge/src/main/java/org/apache/flink/table/factories/StreamTableSourceFactory.java
 ##
 @@ -39,7 +39,20 @@
 * @param properties normalized properties describing a stream table 
source.
 * @return the configured stream table source.
 */
-   StreamTableSource<T> createStreamTableSource(Map<String, String> 
properties);
+   default StreamTableSource<T> createStreamTableSource(Map<String, String> properties) {
+   return null;
+   }
+
+   /**
+* Creates and configures a {@link StreamTableSource} based on the given
+{@link Context}.
+*
+* @param context context of this table source.
+* @return the configured table source.
+*/
+   default StreamTableSource<T> createStreamTableSource(Context context) {
 
 Review comment:
   It looks like somebody who implements `StreamTableSourceFactory` would need this method.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #11115: [FLINK-15710] Update BucketingSinkMigrationTest to restore from 1.10 savepoint

2020-02-17 Thread GitBox
flinkbot commented on issue #11115: [FLINK-15710] Update 
BucketingSinkMigrationTest to restore from 1.10 savepoint
URL: https://github.com/apache/flink/pull/11115#issuecomment-587266953
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 6dc4e6aa324cff1a51061ac1db4add02e346ef9d (Tue Feb 18 
03:58:44 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-15710).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-15710) Update BucketingSinkMigrationTest to restore from 1.10 savepoint

2020-02-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15710:
---
Labels: pull-request-available  (was: )

> Update BucketingSinkMigrationTest to restore from 1.10 savepoint
> 
>
> Key: FLINK-15710
> URL: https://issues.apache.org/jira/browse/FLINK-15710
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.11.0
>Reporter: vinoyang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Update {{BucketingSinkMigrationTest}} to restore from 1.10 savepoint



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] yanghua opened a new pull request #11115: [FLINK-15710] Update BucketingSinkMigrationTest to restore from 1.10 savepoint

2020-02-17 Thread GitBox
yanghua opened a new pull request #11115: [FLINK-15710] Update 
BucketingSinkMigrationTest to restore from 1.10 savepoint
URL: https://github.com/apache/flink/pull/11115
 
 
   
   
   ## What is the purpose of the change
   
   *This pull request updates BucketingSinkMigrationTest to restore from 1.10 
savepoint*
   
   
   ## Brief change log
   
 - *Update BucketingSinkMigrationTest to restore from 1.10 savepoint*
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ **not documented**)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-16005) Propagate yarn.application.classpath from client to TaskManager Classpath

2020-02-17 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17038770#comment-17038770
 ] 

Yang Wang commented on FLINK-16005:
---

[~ZhenqiuHuang] This inspires me: maybe we could add a generic config option 
prefix {{yarn.config.xxx}} to support overriding some Yarn configurations on the 
Flink client and JobManager. Otherwise, in the future, whenever we want to 
override another Yarn configuration, we will have to add a new Flink config option.

For example, in this case, you could set 
{{yarn.config.yarn.application.classpath: your-specified-classpath}}.

> Propagate yarn.application.classpath from client to TaskManager Classpath
> -
>
> Key: FLINK-16005
> URL: https://issues.apache.org/jira/browse/FLINK-16005
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Reporter: Zhenqiu Huang
>Priority: Critical
>
> When Flink users want to override the Hadoop YARN container classpath, they 
> should just specify yarn.application.classpath in yarn-site.xml on the CLI 
> side. But currently, the classpath setting is only applied to the Flink 
> application master; the classpath of the TM is still determined by the 
> setting on the YARN host.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
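
The generic-prefix idea boils down to one pass over the configuration. A 
self-contained sketch (plain maps stand in for the real Flink and Hadoop 
Configuration classes, and the prefix is the one proposed above):

{code:java}
import java.util.Map;

// Sketch: copy every `yarn.config.*` entry from a Flink-style configuration
// onto a Hadoop/YARN-style one, stripping the prefix.
final class YarnConfigOverrides {

    private static final String PREFIX = "yarn.config.";

    static void apply(Map<String, String> flinkConf, Map<String, String> yarnConf) {
        for (Map.Entry<String, String> entry : flinkConf.entrySet()) {
            if (entry.getKey().startsWith(PREFIX)) {
                // e.g. yarn.config.yarn.application.classpath -> yarn.application.classpath
                yarnConf.put(entry.getKey().substring(PREFIX.length()), entry.getValue());
            }
        }
    }
}
{code}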


[GitHub] [flink] flinkbot edited a comment on issue #9782: [FLINK-14241][test] Add aarch64 support for container e2e test

2020-02-17 Thread GitBox
flinkbot edited a comment on issue #9782: [FLINK-14241][test] Add aarch64 
support for container e2e test
URL: https://github.com/apache/flink/pull/9782#issuecomment-535826739
 
 
   
   ## CI report:
   
   * d48b95539070679639d5e8c4e640b9a710d7 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/129403938) 
   * bc7ff380b3c3deb9751c0a596c8fef46c3b48ef3 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/129413892) 
   * 58fe983f436f82e015d7c3635708d60235b9f078 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/131251050) 
   * c568fa423b04cccfd439fa5aa0a9bd9d032806f7 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/131302469) 
   * aa2b0e844d60c92dfcacb6e226034a9d1457298a Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/131417190) 
   * 05b11cacd5473e9a0248d4eb7d02761d5e3f427e Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/138967122) 
   * 938ebf9eb90ebbd2ec75dd8d709d3e2990c68319 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/139079022) 
   * 8c72f98dee4218e65841d5f8850dc8063b16439a Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/139083440) 
   * 0739a623232a0c45e761c2db3f7fd4eba8d84bce Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/141526489) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3703)
 
   * 515f012f8565b4a86959e77a6d19b94ce78d3cbc Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142068529) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3846)
 
   * 408305ba95416d7da6fb0f7143927d3f4123633b Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/143517119) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4181)
 
   * d8fa507e5175c73e1213f3c6ffe37a87dcd2ac72 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/145299936) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4515)
 
   * e60d5318b1ebef839fb7b9fcf66e6c7233f9dc22 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/149374761) 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wangyang0918 commented on a change in pull request #11110: [FLINK-16111][k8s] Fix Kubernetes deployment not respecting 'taskmanager.cpu.cores'.

2020-02-17 Thread GitBox
wangyang0918 commented on a change in pull request #0: [FLINK-16111][k8s] 
Fix Kubernetes deployment not respecting 'taskmanager.cpu.cores'.
URL: https://github.com/apache/flink/pull/0#discussion_r380440613
 
 

 ##
 File path: 
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/KubernetesResourceManager.java
 ##
 @@ -338,6 +339,7 @@ protected FlinkKubeClient createFlinkKubeClient() {
 
@Override
protected double getCpuCores(Configuration configuration) {
-   return 
flinkConfig.getDouble(KubernetesConfigOptions.TASK_MANAGER_CPU, 
numSlotsPerTaskManager);
+   double fallback = 
configuration.getDouble(KubernetesConfigOptions.TASK_MANAGER_CPU);
+   return 
TaskExecutorProcessUtils.getCpuCoresWithFallback(configuration, 
fallback).getValue().doubleValue();
 
 Review comment:
   Since K8s/Yarn/Mesos share the same logic to get cpu cores, maybe we could 
provide a unified method. Then we would not need to test each implementation 
separately.
   ```
   public static CPUResource getCpuCoresWithFallback(final Configuration config, ConfigOption<Double> fallback)
   ```
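   
   A minimal sketch of such a unified helper (the signature is the proposal 
above, not a confirmed Flink API; as an assumption, it prefers the explicit 
`taskmanager.cpu.cores`, then the deployment-specific fallback option, then 
one core per slot):
   ```java
   public static CPUResource getCpuCoresWithFallback(final Configuration config, final ConfigOption<Double> fallback) {
   	final double cpuCores;
   	if (config.contains(TaskManagerOptions.CPU_CORES)) {
   		cpuCores = config.getDouble(TaskManagerOptions.CPU_CORES);
   	} else if (fallback != null && config.contains(fallback)) {
   		cpuCores = config.getDouble(fallback);
   	} else {
   		// Last resort: assume one core per task slot.
   		cpuCores = config.getInteger(TaskManagerOptions.NUM_TASK_SLOTS);
   	}
   	return new CPUResource(cpuCores);
   }
   ```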


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-16006) Add host blacklist support for Flink YarnResourceManager

2020-02-17 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17038759#comment-17038759
 ] 

Yang Wang commented on FLINK-16006:
---

For Kubernetes, we could use the node selector to control where to assign the 
pod[1]. However, one difference is that the K8s Apiserver cannot help us 
persist this information. We need to store it in Flink internally.

Even though we could add labels to specific nodes that we do not want a Flink 
cluster scheduled on, I do not suggest going in this direction, since it would 
pollute the node labels and they may be left behind when the Flink cluster is 
destroyed.

 

[1]. [https://kubernetes.io/docs/concepts/configuration/assign-pod-node/]
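
For reference, a minimal pod-spec sketch of the affinity variant (host names 
are placeholders): matching on the built-in {{kubernetes.io/hostname}} label 
keeps the pod off blacklisted hosts without adding any labels to the nodes.

{code:yaml}
# Sketch: schedule this pod on any node except the blacklisted hosts.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: NotIn
          values: ["blacklisted-host-1", "blacklisted-host-2"]
{code}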

> Add host blacklist support for Flink YarnResourceManager
> 
>
> Key: FLINK-16006
> URL: https://issues.apache.org/jira/browse/FLINK-16006
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Affects Versions: 1.10.0
>Reporter: Zhenqiu Huang
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #9782: [FLINK-14241][test] Add aarch64 support for container e2e test

2020-02-17 Thread GitBox
flinkbot edited a comment on issue #9782: [FLINK-14241][test] Add aarch64 
support for container e2e test
URL: https://github.com/apache/flink/pull/9782#issuecomment-535826739
 
 
   
   ## CI report:
   
   * d48b95539070679639d5e8c4e640b9a710d7 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/129403938) 
   * bc7ff380b3c3deb9751c0a596c8fef46c3b48ef3 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/129413892) 
   * 58fe983f436f82e015d7c3635708d60235b9f078 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/131251050) 
   * c568fa423b04cccfd439fa5aa0a9bd9d032806f7 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/131302469) 
   * aa2b0e844d60c92dfcacb6e226034a9d1457298a Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/131417190) 
   * 05b11cacd5473e9a0248d4eb7d02761d5e3f427e Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/138967122) 
   * 938ebf9eb90ebbd2ec75dd8d709d3e2990c68319 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/139079022) 
   * 8c72f98dee4218e65841d5f8850dc8063b16439a Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/139083440) 
   * 0739a623232a0c45e761c2db3f7fd4eba8d84bce Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/141526489) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3703)
 
   * 515f012f8565b4a86959e77a6d19b94ce78d3cbc Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142068529) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3846)
 
   * 408305ba95416d7da6fb0f7143927d3f4123633b Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/143517119) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4181)
 
   * d8fa507e5175c73e1213f3c6ffe37a87dcd2ac72 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/145299936) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4515)
 
   * e60d5318b1ebef839fb7b9fcf66e6c7233f9dc22 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-16058) Could not start TaskManager in flink 1.10.0

2020-02-17 Thread BlaBlabla (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17038752#comment-17038752
 ] 

BlaBlabla commented on FLINK-16058:
---

[~libenchao] thanks for noticing, I will send to that email group in the 
future.  [~fly_in_gis] thanks also

> Could not start TaskManager  in flink 1.10.0
> 
>
> Key: FLINK-16058
> URL: https://issues.apache.org/jira/browse/FLINK-16058
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.10.0
>Reporter: BlaBlabla
>Priority: Major
>
> Hello,
>  
> When I submit an app on yarn in Flink 1.10.0, there is an error that the 
> commons-cli package jar could not be found:
> {code:java}
> 2020-02-14 18:07:28,045 ERROR org.apache.flink.yarn.YarnResourceManager - 
> Could not start TaskManager in container 
> container_e28_1578502086570_2319694_01_02.
> java.lang.NoSuchMethodError: 
> org.apache.commons.cli.Option.builder(Ljava/lang/String;)Lorg/apache/commons/cli/Option$Builder;
> at 
> org.apache.flink.runtime.entrypoint.parser.CommandLineOptions.<clinit>(CommandLineOptions.java:28)
> at 
> org.apache.flink.runtime.clusterframework.BootstrapTools.lambda$getDynamicPropertiesAsString$0(BootstrapTools.java:647)
> at 
> java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:267)
> at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553)
> at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
> at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
> at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:545)
> at 
> java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260)
> at java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:438)
> at 
> org.apache.flink.runtime.clusterframework.BootstrapTools.getDynamicPropertiesAsString(BootstrapTools.java:653)
> at 
> org.apache.flink.yarn.YarnResourceManager.createTaskExecutorLaunchContext(YarnResourceManager.java:578)
> at 
> org.apache.flink.yarn.YarnResourceManager.startTaskExecutorInContainer(YarnResourceManager.java:384)
> at java.lang.Iterable.forEach(Iterable.java:75)
> at 
> org.apache.flink.yarn.YarnResourceManager.lambda$onContainersAllocated$1(YarnResourceManager.java:366)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:397)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:190)
> at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
> at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
> at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
> at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
> at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
> at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
> at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
> at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
> at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
> at akka.actor.ActorCell.invoke(ActorCell.scala:561)
> at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
> at akka.dispatch.Mailbox.run(Mailbox.scala:225)
> at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
> at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at 
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at 
> akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> 2020-02-14 18:07:28,046 INFO org.apache.flink.yarn.YarnResourceManager - 
> Requesting new TaskExecutor container with resources . 
> Number pending requests 1.
> 2020-02-14 18:07:28,047 INFO org.apache.flink.yarn.YarnResourceManager - 
> TaskExecutor container_e28_1578502086570_2319694_01_03 will be started on 
> ip-10-128-158-97.idata-server.shopee.io with TaskExecutorProcessSpec 
> {cpuCores=4.0, frameworkHeapSize=128.000mb (134217728 bytes), 
> frameworkOffHeapSize=128.000mb (134217728 bytes), taskHeapSize=2.403gb 
> (2580335775 bytes), taskOffHeapSize=0 bytes, networkMemSize=543.360mb 
> (569754262 bytes), managedMemorySize=2.123gb (2279017051 bytes), 
> jvmMetaspaceSize=96.000mb (100663296 bytes), jvmOverheadSize=614.400mb 
> (644245104 bytes)}.
> 2020-02-14 

[jira] [Closed] (FLINK-16058) Could not start TaskManager in flink 1.10.0

2020-02-17 Thread BlaBlabla (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BlaBlabla closed FLINK-16058.
-
Resolution: Won't Fix

> Could not start TaskManager  in flink 1.10.0
> 
>
> Key: FLINK-16058
> URL: https://issues.apache.org/jira/browse/FLINK-16058
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.10.0
>Reporter: BlaBlabla
>Priority: Major
>
> Hello,
>  
> When I submit an app on yarn in Flink 1.10.0, there is an error that the 
> commons-cli package jar could not be found:
> {code:java}
> 2020-02-14 18:07:28,045 ERROR org.apache.flink.yarn.YarnResourceManager - 
> Could not start TaskManager in container 
> container_e28_1578502086570_2319694_01_02.
> java.lang.NoSuchMethodError: 
> org.apache.commons.cli.Option.builder(Ljava/lang/String;)Lorg/apache/commons/cli/Option$Builder;
> at 
> org.apache.flink.runtime.entrypoint.parser.CommandLineOptions.<clinit>(CommandLineOptions.java:28)
> at 
> org.apache.flink.runtime.clusterframework.BootstrapTools.lambda$getDynamicPropertiesAsString$0(BootstrapTools.java:647)
> at 
> java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:267)
> at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553)
> at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
> at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
> at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:545)
> at 
> java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260)
> at java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:438)
> at 
> org.apache.flink.runtime.clusterframework.BootstrapTools.getDynamicPropertiesAsString(BootstrapTools.java:653)
> at 
> org.apache.flink.yarn.YarnResourceManager.createTaskExecutorLaunchContext(YarnResourceManager.java:578)
> at 
> org.apache.flink.yarn.YarnResourceManager.startTaskExecutorInContainer(YarnResourceManager.java:384)
> at java.lang.Iterable.forEach(Iterable.java:75)
> at 
> org.apache.flink.yarn.YarnResourceManager.lambda$onContainersAllocated$1(YarnResourceManager.java:366)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:397)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:190)
> at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
> at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
> at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
> at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
> at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
> at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
> at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
> at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
> at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
> at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
> at akka.actor.ActorCell.invoke(ActorCell.scala:561)
> at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
> at akka.dispatch.Mailbox.run(Mailbox.scala:225)
> at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
> at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> at 
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at 
> akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> 2020-02-14 18:07:28,046 INFO org.apache.flink.yarn.YarnResourceManager - 
> Requesting new TaskExecutor container with resources . 
> Number pending requests 1.
> 2020-02-14 18:07:28,047 INFO org.apache.flink.yarn.YarnResourceManager - 
> TaskExecutor container_e28_1578502086570_2319694_01_03 will be started on 
> ip-10-128-158-97.idata-server.shopee.io with TaskExecutorProcessSpec 
> {cpuCores=4.0, frameworkHeapSize=128.000mb (134217728 bytes), 
> frameworkOffHeapSize=128.000mb (134217728 bytes), taskHeapSize=2.403gb 
> (2580335775 bytes), taskOffHeapSize=0 bytes, networkMemSize=543.360mb 
> (569754262 bytes), managedMemorySize=2.123gb (2279017051 bytes), 
> jvmMetaspaceSize=96.000mb (100663296 bytes), jvmOverheadSize=614.400mb 
> (644245104 bytes)}.
> 2020-02-14 18:07:28,047 ERROR org.apache.flink.yarn.YarnResourceManager - 
> Could not start TaskManager in container 
> 

[jira] [Commented] (FLINK-16110) LogicalTypeParser can't parse "TIMESTAMP(3) *ROWTIME*" and "TIMESTAMP(3) *PROCTIME*"

2020-02-17 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17038749#comment-17038749
 ] 

Jingsong Lee commented on FLINK-16110:
--

Looks like a misuse of {{asSummaryString}} and {{asSerializableString}}. 
[~jark] [~twalthr] I got your meaning.

But if users generate a {{TableSchema}} with a {{TimestampType}} of ROWTIME 
kind, then after the {{putTableSchema}} and {{getTableSchema}} of 
{{DescriptorProperties}}, the ROWTIME information will be lost. This may occur 
in HiveCatalog and the SQL gateway, since they need to ser/deser the TableSchema.

So my understanding is that we should not have any non-regular-kind 
TimestampType in TableSchema, and we should always use watermarks? So the 
future plan is to add a check in TableSchema to ensure there is no 
non-regular-kind TimestampType?
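
For illustration, a minimal sketch of the round trip that loses the kind (the 
class names come from the stack trace below; the summary string is the one 
quoted in this issue):

{code:java}
TimestampType rowtime = new TimestampType(true, TimestampKind.ROWTIME, 3);
String summary = rowtime.asSummaryString(); // "TIMESTAMP(3) *ROWTIME*"

// Fails: the summary form is human-readable only, so the ROWTIME kind
// cannot survive a string round trip through the parser.
LogicalTypeParser.parse(summary); // ValidationException: Unexpected token: *ROWTIME*
{code}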

> LogicalTypeParser can't parse "TIMESTAMP(3) *ROWTIME*" and "TIMESTAMP(3) 
> *PROCTIME*"
> 
>
> Key: FLINK-16110
> URL: https://issues.apache.org/jira/browse/FLINK-16110
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.10.0
>Reporter: godfrey he
>Priority: Major
>
>  {{TIMESTAMP(3) *ROWTIME*}} is the string representation of 
> {{TimestampType(true, TimestampKind.ROWTIME, 3)}} , however 
> {{LogicalTypeParser}} can't convert it to  {{TimestampType(true, 
> TimestampKind.ROWTIME, 3)}}. 
> TIMESTAMP(3) *PROCTIME* is the same case.
> the exception looks like:
> {code}
> org.apache.flink.table.api.ValidationException: Could not parse type at 
> position 12: Unexpected token: *ROWTIME*
>  Input type string: TIMESTAMP(3) *ROWTIME*
>   at 
> org.apache.flink.table.types.logical.utils.LogicalTypeParser$TokenParser.parsingError(LogicalTypeParser.java:371)
>   at 
> org.apache.flink.table.types.logical.utils.LogicalTypeParser$TokenParser.parsingError(LogicalTypeParser.java:380)
>   at 
> org.apache.flink.table.types.logical.utils.LogicalTypeParser$TokenParser.parseTokens(LogicalTypeParser.java:357)
>   at 
> org.apache.flink.table.types.logical.utils.LogicalTypeParser$TokenParser.access$000(LogicalTypeParser.java:333)
>   at 
> org.apache.flink.table.types.logical.utils.LogicalTypeParser.parse(LogicalTypeParser.java:106)
>   at 
> org.apache.flink.table.types.logical.utils.LogicalTypeParser.parse(LogicalTypeParser.java:116)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14649) Flatten all the connector properties keys to make it easy to configure in DDL

2020-02-17 Thread Jiangjie Qin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17038746#comment-17038746
 ] 

Jiangjie Qin commented on FLINK-14649:
--

It looks like the patch changed {{zookeeper.connect}} from optional to 
required. For Kafka connectors, {{zookeeper.connect}} is only required for 
0.8, so we should not make it required for all the versions. I created 
FLINK-16125 to fix this.

> Flatten all the connector properties keys to make it easy to configure in DDL
> -
>
> Key: FLINK-14649
> URL: https://issues.apache.org/jira/browse/FLINK-14649
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Jark Wu
>Assignee: Leonard Xu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Some connector properties are lists. For example, Kafka connector-specific 
> properties have to be set in the following way:
> {code}
>  'connector.properties.0.key' = 'zookeeper.connect',
>   'connector.properties.0.value' = 'localhost:2181',
>   'connector.properties.1.key' = 'bootstrap.servers',
>   'connector.properties.1.value' = 'localhost:9092',
>   'connector.properties.2.key' = 'group.id',
>   'connector.properties.2.value' = 'testGroup',
> {code}
> It is complex and not intuitive to define in this way. In order to cooperate 
> with DDL better, we propose to flatten all the property keys. 
> It has some disadvantages to define in this way. 
> - Users need to keep track of the indices
> - The key space is not constant. Validation of keys would require prefix 
> magic and wildcards, like in TableFactories: `connector.properties.#.key`.
> - It is complex and not intuitive to define and document.
> See FLIP-86 for the proposed new properties. 
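
For comparison, a sketch of the flattened form along the lines of FLIP-86 
(same example values as above):

{code}
'connector.properties.zookeeper.connect' = 'localhost:2181',
'connector.properties.bootstrap.servers' = 'localhost:9092',
'connector.properties.group.id' = 'testGroup',
{code}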



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16125) Make zookeeper.connect optional for Kafka connectors

2020-02-17 Thread Jiangjie Qin (Jira)
Jiangjie Qin created FLINK-16125:


 Summary: Make zookeeper.connect optional for Kafka connectors
 Key: FLINK-16125
 URL: https://issues.apache.org/jira/browse/FLINK-16125
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Kafka
Affects Versions: 1.10.0
Reporter: Jiangjie Qin


FLINK-14649 accidentally changed the connector option {{zookeeper.connect}} 
from optional to required for all the Kafka connector versions, while it is 
only required for 0.8. 

The fix would be to make it optional again. This does mean that people who are 
using Kafka 0.8 might miss this option and get an error from Kafka code instead 
of Flink code, but given that Kafka 0.8 probably has a small user base now and 
users will still get an error, I think it is fine.
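
For illustration, a DDL sketch of what should work again once the option is 
optional (table and topic names are placeholders):

{code:sql}
CREATE TABLE kafka_source (
  id BIGINT
) WITH (
  'connector.type' = 'kafka',
  'connector.version' = 'universal',
  'connector.topic' = 'test_topic',
  -- no 'connector.properties.zookeeper.connect' needed for versions newer than 0.8
  'connector.properties.bootstrap.servers' = 'localhost:9092'
);
{code}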



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] buptljy commented on issue #11104: [FLINK-16051] Subtask ID in Overview-Subtasks should start from 1

2020-02-17 Thread GitBox
buptljy commented on issue #11104: [FLINK-16051] Subtask ID in 
Overview-Subtasks should start from 1
URL: https://github.com/apache/flink/pull/11104#issuecomment-587245879
 
 
   > I remember a similar discussion in the past (can't find it right now), 
with the conclusion being that the only proper solution would be to be 
consistent across the entire project.
   > However, at the time we did not see a good way to implement this, and I 
still don't see it now.
   > 
   > As a result we stuck with the existing inconsistency. Without a clear 
goal of what the final solution should be there is no point in taking 
intermediate steps, as in the worst case they go in the wrong direction.
   > 
   > Sure, starting the index with 1 is "nicer", but implies that in every log 
statement we have to remember to increment the subtask index.
   > At the same time, changing the subtask index for metrics/logs may create 
headaches for or even break existing setups.
   
   Makes sense to me. We can keep the JIRA issue for further discussion. 
Closing this now...


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] buptljy closed pull request #11104: [FLINK-16051] Subtask ID in Overview-Subtasks should start from 1

2020-02-17 Thread GitBox
buptljy closed pull request #11104: [FLINK-16051] Subtask ID in 
Overview-Subtasks should start from 1
URL: https://github.com/apache/flink/pull/11104
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wangxiyuan commented on issue #9782: [FLINK-14241][test] Add aarch64 support for container e2e test

2020-02-17 Thread GitBox
wangxiyuan commented on issue #9782: [FLINK-14241][test] Add aarch64 support 
for container e2e test
URL: https://github.com/apache/flink/pull/9782#issuecomment-587245600
 
 
   Thanks for your review. Just fixed the merge conflict. @zentol can you take a 
look? Thank you.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on a change in pull request #11093: [FLINK-16055][hive] Avoid catalog functions when listing Hive built-i…

2020-02-17 Thread GitBox
JingsongLi commented on a change in pull request #11093: [FLINK-16055][hive] 
Avoid catalog functions when listing Hive built-i…
URL: https://github.com/apache/flink/pull/11093#discussion_r380425268
 
 

 ##
 File path: 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/client/HiveShimV120.java
 ##
 @@ -148,6 +151,9 @@ public CatalogColumnStatisticsDataDate 
toFlinkDateColStats(ColumnStatisticsData
 
@Override
	public Optional<FunctionInfo> getBuiltInFunctionInfo(String name) {
+   if (isCatalogFunctionName(name)) {
+   return Optional.empty();
+   }
	Optional<FunctionInfo> functionInfo = getFunctionInfo(name);
 
if (functionInfo.isPresent() && 
isBuiltInFunctionInfo(functionInfo.get())) {
 
 Review comment:
   I have same question, why not filter by `isBuiltInFunctionInfo`?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-16117) Avoid register source in TableTestBase#addTableSource

2020-02-17 Thread Zhenghua Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17038729#comment-17038729
 ] 

Zhenghua Gao commented on FLINK-16117:
--

Most of these tests are plan tests and there is no input data.

My initial thought is to use tableEnv.connect to replace 
tableEnv.registerTableSource in TableTestBase#addTableSource.
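
A minimal sketch of the replacement, assuming the 1.10 descriptor API 
(descriptor, path, and field names are illustrative only):

{code:java}
// Instead of tableEnv.registerTableSource(...):
tableEnv.connect(new FileSystem().path("/tmp/input.csv"))
	.withFormat(new OldCsv().field("a", DataTypes.STRING()))
	.withSchema(new Schema().field("a", DataTypes.STRING()))
	.createTemporaryTable("MyTable");
{code}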

 
 

> Avoid register source in TableTestBase#addTableSource
> -
>
> Key: FLINK-16117
> URL: https://issues.apache.org/jira/browse/FLINK-16117
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Zhenghua Gao
>Assignee: Zhenghua Gao
>Priority: Major
>
> This affects thousands of unit tests:
> 1) explainSourceAsString of CatalogSourceTable changes
> 2) JoinTest#testUDFInJoinCondition: SQL keywords must be escaped
> 3) GroupWindowTest#testTimestampEventTimeTumblingGroupWindowWithProperties: 
> Reference to a rowtime or proctime window required
> 4) SetOperatorsTest#testInWithProject: legacy type vs new type
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #11114: FLINK-16105 Translate "User-defined Sources & Sinks" page of "Table API & SQL" into Chinese

2020-02-17 Thread GitBox
flinkbot edited a comment on issue #4: FLINK-16105 Translate "User-defined 
Sources & Sinks" page of "Table API & SQL" into Chinese
URL: https://github.com/apache/flink/pull/4#issuecomment-587215228
 
 
   
   ## CI report:
   
   * f5e4fa926b1399eadb5194f553747156211fa528 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/149365275) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5259)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] lirui-apache commented on issue #11093: [FLINK-16055][hive] Avoid catalog functions when listing Hive built-i…

2020-02-17 Thread GitBox
lirui-apache commented on issue #11093: [FLINK-16055][hive] Avoid catalog 
functions when listing Hive built-i…
URL: https://github.com/apache/flink/pull/11093#issuecomment-587228616
 
 
   > Hi @lirui-apache thanks for the PR, but I'm not sure about it. If 
concerned about the argument of the public API, shall we just check it and 
throw an exception if the name contains "." in Flink, as the input is not a 
conforming name?
   
   @bowenli86 I think the API contract (w/o this PR) is to get the 
`FunctionInfo` for a given name if it's a built-in function, and 
`Optional.empty` otherwise. This PR just returns `Optional.empty` for names 
containing "." because they cannot be built-in functions, which is in line with 
the contract.
   
   The reason to add this short-circuit check is to avoid calling 
`FunctionRegistry.getFunctionInfo` for catalog function names, which can cause 
problems for tests. More details in the JIRA: 
https://issues.apache.org/jira/browse/FLINK-16055
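   
   For illustration, a minimal sketch of such a short-circuit check (the helper 
name comes from the diff; the dot heuristic is inferred from this discussion):
   ```java
   // Sketch only: a qualified name like "db.func" contains a dot and therefore
   // cannot be a Hive built-in function, so the registry lookup can be skipped.
   private static boolean isCatalogFunctionName(String name) {
   	return name.contains(".");
   }
   ```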


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11114: FLINK-16105 Translate "User-defined Sources & Sinks" page of "Table API & SQL" into Chinese

2020-02-17 Thread GitBox
flinkbot edited a comment on issue #4: FLINK-16105 Translate "User-defined 
Sources & Sinks" page of "Table API & SQL" into Chinese
URL: https://github.com/apache/flink/pull/4#issuecomment-587215228
 
 
   
   ## CI report:
   
   * f5e4fa926b1399eadb5194f553747156211fa528 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/149365275) 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-16115) Aliyun oss filesystem could not work with plugin mechanism

2020-02-17 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17038710#comment-17038710
 ] 

Yang Wang commented on FLINK-16115:
---

To fix this issue, we need to add the following relocations, and I have 
confirmed that they work. In the long term, we should also remove the shading 
for the oss filesystem. I have created another ticket 
[FLINK-16116|https://issues.apache.org/jira/browse/FLINK-16116] to track that.

[~chesnay] could you please assign this ticket to me? I could work on this.

 
{code:xml}
<relocation>
   <pattern>org.apache.flink.runtime.fs.hdfs</pattern>
   <shadedPattern>org.apache.flink.fs.osshadoop.common</shadedPattern>
</relocation>
<relocation>
   <pattern>org.apache.flink.runtime.util</pattern>
   <shadedPattern>org.apache.flink.fs.osshadoop.common</shadedPattern>
</relocation>
{code}
 

 

> Aliyun oss filesystem could not work with plugin mechanism
> --
>
> Key: FLINK-16115
> URL: https://issues.apache.org/jira/browse/FLINK-16115
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.10.0
>Reporter: Yang Wang
>Priority: Critical
>
> Since release-1.9, Flink suggests that users load all filesystems as plugins, 
> including oss. However, this does not work for the oss filesystem. The root 
> cause is that it does not shade {{org.apache.flink.runtime.fs.hdfs}} and 
> {{org.apache.flink.runtime.util}}, so these classes will always be loaded by 
> the system classloader and throw the following exceptions.
>  
> {code:java}
> 2020-02-17 17:28:47,247 ERROR 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Could not 
> start cluster entrypoint StandaloneSessionClusterEntrypoint.
> org.apache.flink.runtime.entrypoint.ClusterEntrypointException: Failed to 
> initialize the cluster entrypoint StandaloneSessionClusterEntrypoint.
> at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:187)
> at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:518)
> at 
> org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint.main(StandaloneSessionClusterEntrypoint.java:64)
> Caused by: java.lang.NoSuchMethodError: 
> org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.<init>(Lorg/apache/flink/fs/shaded/hadoop3/org/apache/hadoop/fs/FileSystem;)V
> at 
> org.apache.flink.fs.osshadoop.OSSFileSystemFactory.create(OSSFileSystemFactory.java:85)
> at 
> org.apache.flink.core.fs.PluginFileSystemFactory.create(PluginFileSystemFactory.java:61)
> at 
> org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:441)
> at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:362)
> at org.apache.flink.core.fs.Path.getFileSystem(Path.java:298)
> at 
> org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:100)
> at 
> org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:89)
> at 
> org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:125)
> at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:305)
> at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:263)
> at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:207)
> at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$0(ClusterEntrypoint.java:169)
> at 
> org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
> at 
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:168)
> ... 2 more
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wsry commented on issue #11088: [FLINK-16012][runtime] Reduce the default number of buffers per channel from 2 to 1

2020-02-17 Thread GitBox
wsry commented on issue #11088: [FLINK-16012][runtime] Reduce the default 
number of buffers per channel from 2 to 1
URL: https://github.com/apache/flink/pull/11088#issuecomment-587227214
 
 
   Theoretically, reducing the number of buffers may stall the data processing 
pipeline, which can affect performance. For verification, I have tested the 
change using the Flink micro benchmarks and a simple benchmark job. 
Unfortunately, regressions are seen in both tests.
   
   For the micro benchmarks, the following are some results with regressions 
(because of unstable results, I ran each test three times):
   
   Using 2 buffers:
   ```
   Benchmark (channelsFlushTimeout)  (writers)   
Mode  Cnt  Score  Error   Units
   
   networkThroughput1000,100ms  1   
thrpt   30  15972.952 ±  752.985  ops/ms
   networkThroughput1000,100ms  4   
thrpt   30  27650.498 ±  713.728  ops/ms
   
   networkThroughput1000,100ms  1   
thrpt   30  15566.705 ± 2007.335  ops/ms
   networkThroughput1000,100ms  4   
thrpt   30  27769.195 ± 1632.614  ops/ms
   
   networkThroughput1000,100ms  1   
thrpt   30  15598.175 ± 1671.515  ops/ms
   networkThroughput1000,100ms  4   
thrpt   30  27499.901 ± 1035.415  ops/ms
   ```
   
   Using 1 buffer:
   ```
   Benchmark (channelsFlushTimeout)  (writers)   
Mode  Cnt  Score  Error   Units
   
   networkThroughput1000,100ms  1   
thrpt   30  13116.610 ±  325.587  ops/ms
   networkThroughput1000,100ms  4   
thrpt   30  22837.502 ± 1024.360  ops/ms
   
   networkThroughput1000,100ms  1   
thrpt   30  11924.883 ± 1038.508  ops/ms
   networkThroughput1000,100ms  4   
thrpt   30  22823.586 ±  892.918  ops/ms
   
   networkThroughput1000,100ms  1   
thrpt   30  12960.345 ± 1596.465  ops/ms
   networkThroughput1000,100ms  4   
thrpt   30  23028.803 ±  933.609  ops/ms
   ```
   From the above results, we can see about a 20% performance regression. For 
the benchmark job, there are also regressions (about 10% - 20%) in some cases 
where the number of input channels is small, for example 2 input channels, 
which means the number of buffers that can be used is limited.
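   
   For reference, the settings involved, with their current defaults 
(flink-conf.yaml):
   ```yaml
   # Exclusive buffers per channel; this PR proposes lowering the default to 1.
   taskmanager.network.memory.buffers-per-channel: 2
   # Floating buffers shared per input gate are unchanged by this PR.
   taskmanager.network.memory.floating-buffers-per-gate: 8
   ```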


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-16124) Add a AWS Kinesis Stateful Functions Ingress

2020-02-17 Thread Tzu-Li (Gordon) Tai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai updated FLINK-16124:

Description: 
AWS Kinesis is also a popularly used source for Apache Flink applications, 
given its capability to reset the consumer position to a specific offset, which 
works well with Flink's fault-tolerance model.
This also applies to Stateful Functions, and having a shipped ingress for 
Kinesis will also ease the use of Stateful Functions for AWS users.

  was:AWS Kinesis is also a popularly used source for Apache Flink 
applications, given its capability to reset the consumer position to a 
specific offset, which works well with Flink's fault-tolerance model. This also 
applies to Stateful Functions, and having a shipped ingress for Kinesis will 
also ease the use of Stateful Functions for AWS users.


> Add a AWS Kinesis Stateful Functions Ingress
> 
>
> Key: FLINK-16124
> URL: https://issues.apache.org/jira/browse/FLINK-16124
> Project: Flink
>  Issue Type: New Feature
>  Components: Stateful Functions
>Affects Versions: statefun-1.1
>Reporter: Tzu-Li (Gordon) Tai
>Priority: Major
>
> AWS Kinesis is also a popularly used source for Apache Flink applications, 
> given its capability to reset the consumer position to a specific offset, 
> which works well with Flink's fault-tolerance model.
> This also applies to Stateful Functions, and having a shipped ingress for 
> Kinesis will also ease the use of Stateful Functions for AWS users.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-16124) Add a AWS Kinesis Stateful Functions Ingress

2020-02-17 Thread Tzu-Li (Gordon) Tai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai reassigned FLINK-16124:
---

Assignee: Tzu-Li (Gordon) Tai

> Add a AWS Kinesis Stateful Functions Ingress
> 
>
> Key: FLINK-16124
> URL: https://issues.apache.org/jira/browse/FLINK-16124
> Project: Flink
>  Issue Type: New Feature
>  Components: Stateful Functions
>Affects Versions: statefun-1.1
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Major
>
> AWS Kinesis is also a popularly used source for Apache Flink applications, 
> given its capability to reset the consumer position to a specific offset, 
> which works well with Flink's fault-tolerance model.
> This also applies to Stateful Functions, and having a shipped ingress for 
> Kinesis will also ease the use of Stateful Functions for AWS users.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16124) Add a AWS Kinesis Stateful Functions Ingress

2020-02-17 Thread Tzu-Li (Gordon) Tai (Jira)
Tzu-Li (Gordon) Tai created FLINK-16124:
---

 Summary: Add a AWS Kinesis Stateful Functions Ingress
 Key: FLINK-16124
 URL: https://issues.apache.org/jira/browse/FLINK-16124
 Project: Flink
  Issue Type: New Feature
  Components: Stateful Functions
Affects Versions: statefun-1.1
Reporter: Tzu-Li (Gordon) Tai


AWS Kinesis is also a popularly used source for Apache Flink applications, 
given its capability to reset the consumer position to a specific offset, which 
works well with Flink's fault-tolerance model. This also applies to Stateful 
Functions, and having a shipped ingress for Kinesis will also ease the use of 
Stateful Functions for AWS users.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

