[GitHub] [flink] flinkbot edited a comment on pull request #12750: [FLINK-18413][streaming] CollectResultIterator should throw exception if user does not enable checkpointing in streaming mode

2020-06-22 Thread GitBox


flinkbot edited a comment on pull request #12750:
URL: https://github.com/apache/flink/pull/12750#issuecomment-647920503


   
   ## CI report:
   
   * 98bb13417ab3edb063aa38194f529403278a5ee3 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3941)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12054: [FLINK-17579] Allow user to set the prefix of TaskManager's ResourceID in standalone mode

2020-06-22 Thread GitBox


flinkbot edited a comment on pull request #12054:
URL: https://github.com/apache/flink/pull/12054#issuecomment-626143079


   
   ## CI report:
   
   * 311c21707305392d6bde18038bb1d8413867419b UNKNOWN
   * e85a3c626720f035ff0379e849e36ea6e8db7249 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3938)
 
   * e6338bad0847a8842e66a928e48f98083b63cff9 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Updated] (FLINK-17800) RocksDB optimizeForPointLookup results in missing time windows

2020-06-22 Thread Yu Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated FLINK-17800:
--
Priority: Blocker  (was: Critical)

Escalating the priority to Blocker: the previously observed coredump issue is 
confirmed to be a UT design problem (with a PR already in review), while the 
issue reported here would cause silent data loss. That makes it critical for 
production usage, so it had better be fixed in 1.11.0.

> RocksDB optimizeForPointLookup results in missing time windows
> --
>
> Key: FLINK-17800
> URL: https://issues.apache.org/jira/browse/FLINK-17800
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.10.0, 1.10.1
>Reporter: Yordan Pavlov
>Assignee: Yun Tang
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.11.0, 1.10.2, 1.12.0
>
> Attachments: MissingWindows.scala, MyMissingWindows.scala, 
> MyMissingWindows.scala
>
>
> +My Setup:+
> We have been using the _RocksDB_ option _optimizeForPointLookup_ and running 
> version 1.7 for years. Upon upgrading to Flink 1.10 we started observing 
> strange behavior: missing time windows in a streaming Flink job. For testing 
> purposes I experimented with previous Flink versions (1.8, 1.9, 1.9.3) and 
> none of them showed the problem.
>  
> A sample of the code demonstrating the problem:
> {code:java}
>  val datastream = env
>    .addSource(KafkaSource.keyedElements(config.kafkaElements,
>      List(config.kafkaBootstrapServer)))
>  val result = datastream
>    .keyBy(_ => 1)
>    .timeWindow(Time.milliseconds(1))
>    .reduce((a, b) => b) // some window function is needed here; .print() alone does not compile on a WindowedStream
>    .print()
> {code}
>  
> The source consists of 3 streams (either 3 Kafka partitions or 3 Kafka 
> topics); the elements in each stream increase separately. The elements carry 
> increasing event-time timestamps, starting from 1 and increasing by 1. The 
> first partition might consist of timestamps 1, 2, 10, 15..., the second of 
> 4, 5, 6, 11..., the third of 3, 7, 8, 9...
>  
> +What I observe:+
> The time windows open as I expect for the first 127 timestamps. Then there is 
> a huge gap with no opened windows; if the source has many elements, the next 
> open window has a timestamp in the thousands. A gap of hundreds of elements is 
> created, with what appear to be 'lost' elements. Those elements are not 
> reported as late (when tested with the _sideOutputLateData_ operator). We have 
> been setting the option inside the config like so:
> ??etherbi.rocksDB.columnOptions.optimizeForPointLookup=268435456??
> We have been using it for performance reasons, as we have a huge RocksDB state 
> backend.
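The accounting the reporter expects can be pictured with a toy Python simulation (illustrative only, not Flink code, and not the RocksDB-level cause): with correct window state, every element either lands in a fired window, stays pending, or is flagged late. Elements vanishing silently, as described above, points to state lookups missing existing windows.

```python
from collections import defaultdict

def tumbling_windows(events, size_ms=1):
    """Toy event-time tumbling-window bookkeeping (not Flink code).

    The watermark is the max timestamp seen so far; a window fires once the
    watermark passes its end. Elements whose window already closed are
    flagged late instead of disappearing silently."""
    pending = defaultdict(list)  # window start -> buffered elements
    fired, late = {}, []
    watermark = -1
    for ts in events:
        start = (ts // size_ms) * size_ms
        if start + size_ms <= watermark:
            late.append(ts)  # window already closed: report, don't drop
        else:
            pending[start].append(ts)
        watermark = max(watermark, ts)
        for s in [s for s in pending if s + size_ms <= watermark]:
            fired[s] = pending.pop(s)  # window end passed: fire it
    return fired, late, pending

# Three interleaved increasing streams, as in the report above.
fired, late, pending = tumbling_windows([1, 4, 3, 2, 5, 7, 6, 10, 8, 9, 11, 15])
# Invariant: every element is fired, late, or pending; none is lost.
accounted = sum(fired.values(), []) + late + sum(pending.values(), [])
assert sorted(accounted) == [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 15]
```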



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18414) Kafka Json connector in Table API support more option

2020-06-22 Thread DuBin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142639#comment-17142639
 ] 

DuBin commented on FLINK-18414:
---

[~jark] exactly, thanks for help!

> Kafka Json connector in Table API support more option
> -
>
> Key: FLINK-18414
> URL: https://issues.apache.org/jira/browse/FLINK-18414
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Formats (JSON, Avro, Parquet, ORC, 
> SequenceFile), Table SQL / Ecosystem
>Affects Versions: 1.10.1
>Reporter: DuBin
>Priority: Major
>
> Currently, Flink uses 'org.apache.flink.formats.json.JsonRowDeserializationSchema' 
> to deserialize records into Row when we define a Kafka JSON table source.
> But the parser is hard-coded in the class:
> private final ObjectMapper objectMapper = new ObjectMapper();
> Imagine that the JSON data source contains data like this:
> {"a":NaN,"b":1.2}
> or some other dirty data; it will throw an exception in the deserialize 
> function every time, because Kafka does not perform schema validation on the 
> JSON format.
>  
> So can we add more options in 
> 'org.apache.flink.formats.json.JsonRowFormatFactory', in 
> 'org.apache.flink.formats.json.JsonRowFormatFactory#createDeserializationSchema'?
>  E.g. more options for the objectMapper, and a dirty-data handler (such as 
> returning an empty row, defined by the user).
>  
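As a language-neutral sketch of the requested behavior (Python here, not Flink's actual JsonRowDeserializationSchema), a strict parse combined with a user-defined dirty-data handler that returns an "empty row" could look like:

```python
import json

def strict_loads(text):
    """Reject non-standard JSON constants (NaN, Infinity), mirroring a strict
    parser such as Jackson's default ObjectMapper configuration."""
    def fail(name):
        raise ValueError(f"non-standard JSON constant: {name}")
    return json.loads(text, parse_constant=fail)

def deserialize(text):
    """Dirty-data handler as requested: swallow parse errors and return None
    (an 'empty row') instead of failing the whole job on every bad record."""
    try:
        return strict_loads(text)
    except ValueError:  # JSONDecodeError is a subclass of ValueError
        return None

assert deserialize('{"a": 1.0, "b": 1.2}') == {"a": 1.0, "b": 1.2}
assert deserialize('{"a": NaN, "b": 1.2}') is None   # NaN rejected, row skipped
assert deserialize('not json at all') is None        # dirty data skipped
```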





[jira] [Closed] (FLINK-18414) Kafka Json connector in Table API support more option

2020-06-22 Thread DuBin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DuBin closed FLINK-18414.
-
Fix Version/s: 1.11.1
   Resolution: Fixed

> Kafka Json connector in Table API support more option
> -
>
> Key: FLINK-18414
> URL: https://issues.apache.org/jira/browse/FLINK-18414
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Formats (JSON, Avro, Parquet, ORC, 
> SequenceFile), Table SQL / Ecosystem
>Affects Versions: 1.10.1
>Reporter: DuBin
>Priority: Major
> Fix For: 1.11.1
>
>
> Currently, Flink uses 'org.apache.flink.formats.json.JsonRowDeserializationSchema' 
> to deserialize records into Row when we define a Kafka JSON table source.
> But the parser is hard-coded in the class:
> private final ObjectMapper objectMapper = new ObjectMapper();
> Imagine that the JSON data source contains data like this:
> {"a":NaN,"b":1.2}
> or some other dirty data; it will throw an exception in the deserialize 
> function every time, because Kafka does not perform schema validation on the 
> JSON format.
>  
> So can we add more options in 
> 'org.apache.flink.formats.json.JsonRowFormatFactory', in 
> 'org.apache.flink.formats.json.JsonRowFormatFactory#createDeserializationSchema'?
>  E.g. more options for the objectMapper, and a dirty-data handler (such as 
> returning an empty row, defined by the user).
>  





[jira] [Commented] (FLINK-18414) Kafka Json connector in Table API support more option

2020-06-22 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142638#comment-17142638
 ] 

Jark Wu commented on FLINK-18414:
-

We added a new option {{json.ignore-parse-errors}} in 1.11; is that what you 
are looking for? 
https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connectors/formats/json.html#json-ignore-parse-errors
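For reference, with the new 1.11 connector options this flag is set in the WITH clause of the table DDL; the table name, topic, and schema below are illustrative:

```sql
CREATE TABLE user_events (
  a DOUBLE,
  b DOUBLE
) WITH (
  'connector' = 'kafka',
  'topic' = 'events',                   -- illustrative topic name
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'json',
  'json.ignore-parse-errors' = 'true'   -- skip records that fail to parse
);
```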

> Kafka Json connector in Table API support more option
> -
>
> Key: FLINK-18414
> URL: https://issues.apache.org/jira/browse/FLINK-18414
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Formats (JSON, Avro, Parquet, ORC, 
> SequenceFile), Table SQL / Ecosystem
>Affects Versions: 1.10.1
>Reporter: DuBin
>Priority: Major
>
> Currently, Flink uses 'org.apache.flink.formats.json.JsonRowDeserializationSchema' 
> to deserialize records into Row when we define a Kafka JSON table source.
> But the parser is hard-coded in the class:
> private final ObjectMapper objectMapper = new ObjectMapper();
> Imagine that the JSON data source contains data like this:
> {"a":NaN,"b":1.2}
> or some other dirty data; it will throw an exception in the deserialize 
> function every time, because Kafka does not perform schema validation on the 
> JSON format.
>  
> So can we add more options in 
> 'org.apache.flink.formats.json.JsonRowFormatFactory', in 
> 'org.apache.flink.formats.json.JsonRowFormatFactory#createDeserializationSchema'?
>  E.g. more options for the objectMapper, and a dirty-data handler (such as 
> returning an empty row, defined by the user).
>  





[GitHub] [flink] flinkbot commented on pull request #12750: [FLINK-18413][streaming] CollectResultIterator should throw exception if user does not enable checkpointing in streaming mode

2020-06-22 Thread GitBox


flinkbot commented on pull request #12750:
URL: https://github.com/apache/flink/pull/12750#issuecomment-647920503


   
   ## CI report:
   
   * 98bb13417ab3edb063aa38194f529403278a5ee3 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12737: [FLINK-17639][docs] Document which FileSystems are supported by the Streami…

2020-06-22 Thread GitBox


flinkbot edited a comment on pull request #12737:
URL: https://github.com/apache/flink/pull/12737#issuecomment-647407770


   
   ## CI report:
   
   * cccbc2294423fce32765002c73acbb06b5ed40a4 UNKNOWN
   * ec2483d7dd2f9b5e20cdde4673c77eeaf377491b Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3902)
 
   * f009270ba46fafe795a77626ed0acbb50389d4c2 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3940)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Created] (FLINK-18414) Kafka Json connector in Table API support more option

2020-06-22 Thread DuBin (Jira)
DuBin created FLINK-18414:
-

 Summary: Kafka Json connector in Table API support more option
 Key: FLINK-18414
 URL: https://issues.apache.org/jira/browse/FLINK-18414
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Kafka, Formats (JSON, Avro, Parquet, ORC, 
SequenceFile), Table SQL / Ecosystem
Affects Versions: 1.10.1
Reporter: DuBin


Currently, Flink uses 
'org.apache.flink.formats.json.JsonRowDeserializationSchema' to deserialize 
records into Row when we define a Kafka JSON table source.

But the parser is hard-coded in the class:

private final ObjectMapper objectMapper = new ObjectMapper();

Imagine that the JSON data source contains data like this:

{"a":NaN,"b":1.2}

or some other dirty data; it will throw an exception in the deserialize 
function every time, because Kafka does not perform schema validation on the 
JSON format.

 

So can we add more options in 
'org.apache.flink.formats.json.JsonRowFormatFactory', in 
'org.apache.flink.formats.json.JsonRowFormatFactory#createDeserializationSchema'?
 E.g. more options for the objectMapper, and a dirty-data handler (such as 
returning an empty row, defined by the user).

 





[GitHub] [flink] leonardBang commented on pull request #12719: [FLINK-18352] Make the default ClusterClientServiceLoader and ExecutorServiceLoader thread-safe.

2020-06-22 Thread GitBox


leonardBang commented on pull request #12719:
URL: https://github.com/apache/flink/pull/12719#issuecomment-647918747


   ignore my comment, @wuchong has hot-fixed this one







[GitHub] [flink-web] klion26 commented on a change in pull request #345: [FLINK-17491] Translate Training page on project website

2020-06-22 Thread GitBox


klion26 commented on a change in pull request #345:
URL: https://github.com/apache/flink-web/pull/345#discussion_r443967697



##
File path: training.zh.md
##
@@ -58,49 +57,49 @@ This training covers the fundamentals of Flink, including:
 
 
 
- Streaming 
Analytics
+ 流式分析
 
 
 
-Event Time Processing
+事件时间处理
 Watermarks
-Windows
+窗口
 
 
 
 
 
 
 
- 
Event-driven Applications
+ 事件驱动的应用
 
 
 
-Process Functions
-Timers
-Side Outputs
+处理函数
+定时器
+旁路输出
 
 
 
 
 
 
 
- Fault 
Tolerance
+ 容错
 
 
 
-Checkpoints and Savepoints
-Exactly-once vs. At-least-once
-Exactly-once End-to-end
+Checkpoints 和 Savepoints
+精确一次与至少一次
+端到端的精确一次
 
 
 
 
 
 
 
-Apache Flink Training Course   
+Apache Flink 培训课程   

Review comment:
   The English version actually does have a link (this seems to be configured 
somewhere in the project; I haven't found where yet). The final link is 
`https://ci.apache.org/projects/flink/flink-docs-master/learn-flink/index.html`









[GitHub] [flink] aljoscha commented on pull request #12749: [FLINK-18411][tests] Fix CollectionExecutorTest failed to compiled in release-1.11

2020-06-22 Thread GitBox


aljoscha commented on pull request #12749:
URL: https://github.com/apache/flink/pull/12749#issuecomment-647909183


   LGTM once azure is green! Thanks for fixing it!







[GitHub] [flink] flinkbot edited a comment on pull request #12728: [FLINK-18399] [table-api-java] fix TableResult#print can not print the result of unbounded stream query

2020-06-22 Thread GitBox


flinkbot edited a comment on pull request #12728:
URL: https://github.com/apache/flink/pull/12728#issuecomment-647015309


   
   ## CI report:
   
   * ed25e698b05791c8bf8d4204ff32689cea42fc40 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3926)
 
   * 70dd97ae9379c3de39d32abdbd7bda496f214d53 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3939)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12054: [FLINK-17579] Allow user to set the prefix of TaskManager's ResourceID in standalone mode

2020-06-22 Thread GitBox


flinkbot edited a comment on pull request #12054:
URL: https://github.com/apache/flink/pull/12054#issuecomment-626143079


   
   ## CI report:
   
   * 311c21707305392d6bde18038bb1d8413867419b UNKNOWN
   * e0c34c807adf768d904b5850c9ce4c8b4be4b70a Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3887)
 
   * e85a3c626720f035ff0379e849e36ea6e8db7249 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3938)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #12750: [FLINK-18413][streaming] CollectResultIterator should throw exception if user does not enable checkpointing in streaming mode

2020-06-22 Thread GitBox


flinkbot commented on pull request #12750:
URL: https://github.com/apache/flink/pull/12750#issuecomment-647907034


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 98bb13417ab3edb063aa38194f529403278a5ee3 (Tue Jun 23 
04:55:41 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-18413).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[GitHub] [flink] leonardBang edited a comment on pull request #12719: [FLINK-18352] Make the default ClusterClientServiceLoader and ExecutorServiceLoader thread-safe.

2020-06-22 Thread GitBox


leonardBang edited a comment on pull request #12719:
URL: https://github.com/apache/flink/pull/12719#issuecomment-647906823


   Hello, @kl0u 
   I got a compile error; it seems this PR went directly into release-1.11, so 
the line [1] should be updated too.
   I'd like to open a hotfix if you can help confirm this.
   [1] 
https://github.com/apache/flink/blob/release-1.11/flink-java/src/test/java/org/apache/flink/api/java/utils/CollectionExecutorTest.java#L54







[GitHub] [flink] leonardBang commented on pull request #12719: [FLINK-18352] Make the default ClusterClientServiceLoader and ExecutorServiceLoader thread-safe.

2020-06-22 Thread GitBox


leonardBang commented on pull request #12719:
URL: https://github.com/apache/flink/pull/12719#issuecomment-647906823


   Hello, @kl0u 
   I got a compile error; it seems this PR went directly into release-1.11, so 
the line below should be updated too.
   
https://github.com/apache/flink/blob/release-1.11/flink-java/src/test/java/org/apache/flink/api/java/utils/CollectionExecutorTest.java#L54
   I'd like to open a hotfix if you can help confirm this.







[GitHub] [flink] TsReaper opened a new pull request #12750: [FLINK-18413][streaming] CollectResultIterator should throw exception if user does not enable checkpointing in streaming mode

2020-06-22 Thread GitBox


TsReaper opened a new pull request #12750:
URL: https://github.com/apache/flink/pull/12750


   ## What is the purpose of the change
   
   Currently CollectResultIterator will only output results after a checkpoint 
in streaming mode. It is confusing for users who do not enable checkpointing 
that the iterator gets stuck forever, without any notice.
   
   As 1.11 is about to be released, we should at least throw an exception 
notifying the user to enable checkpointing. We'll support iterators with 
at-least-once semantics, or exactly-once semantics without fault tolerance, in 
the future.
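The proposed fail-fast guard can be sketched in Python (class and parameter names are illustrative, not Flink's actual classes):

```python
class CollectIterator:
    """Toy model of the proposed guard: refuse to iterate when checkpointing
    is disabled, instead of blocking forever waiting for a checkpoint.
    Names and behavior are illustrative, not Flink's actual API."""

    def __init__(self, results, checkpoint_interval_ms):
        if checkpoint_interval_ms <= 0:  # checkpointing not enabled
            raise RuntimeError(
                "Results are only available after a checkpoint; "
                "please enable checkpointing for your streaming job.")
        self._it = iter(results)

    def __iter__(self):
        return self

    def __next__(self):
        return next(self._it)

# With checkpointing enabled the iterator behaves normally.
assert list(CollectIterator([1, 2, 3], checkpoint_interval_ms=1000)) == [1, 2, 3]

# Without it, construction fails fast instead of hanging forever.
try:
    CollectIterator([1, 2, 3], checkpoint_interval_ms=0)
    raise AssertionError("expected RuntimeError")
except RuntimeError:
    pass
```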
   
   ## Brief change log
   
- CollectResultIterator now throws an exception if the user does not enable 
checkpointing in streaming mode
   
   ## Verifying this change
   
   This change is already covered by existing tests, such as 
`TableEnvironmentITCase`.
   This change can also be verified by newly added tests in 
`TableEnvironmentTest`.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable







[jira] [Updated] (FLINK-18413) CollectResultIterator should throw exception if user does not enable checkpointing in streaming mode

2020-06-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-18413:
---
Labels: pull-request-available  (was: )

> CollectResultIterator should throw exception if user does not enable 
> checkpointing in streaming mode
> 
>
> Key: FLINK-18413
> URL: https://issues.apache.org/jira/browse/FLINK-18413
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Affects Versions: 1.11.0
>Reporter: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Currently {{CollectResultIterator}} will only output results after a 
> checkpoint in streaming mode. It is confusing for users who do not enable 
> checkpointing that the iterator gets stuck forever, without any notice.
> As 1.11 is about to be released, we should at least throw an exception 
> notifying the user to enable checkpointing. We'll support iterators with 
> at-least-once semantics, or exactly-once semantics without fault tolerance, 
> in the future.





[jira] [Updated] (FLINK-18413) CollectResultIterator should throw exception if user does not enable checkpointing in streaming mode

2020-06-22 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng updated FLINK-18413:

Description: 
Currently {{CollectResultIterator}} will only output results after a checkpoint 
in streaming mode. It is confusing for users who do not enable checkpointing 
that the iterator gets stuck forever, without any notice.

As 1.11 is about to be released, we should at least throw an exception notifying 
the user to enable checkpointing. We'll support iterators with at-least-once 
semantics, or exactly-once semantics without fault tolerance, in the future.

  was:
Currently the iterator returned by {{TableResult.collect}} will only output 
results after a checkpoint in streaming mode. It is confusing for users who do 
not enable checkpointing that the iterator gets stuck forever, without any 
notice.

As 1.11 is about to be released, we should at least throw an exception notifying 
the user to enable checkpointing. We'll support iterators with at-least-once 
semantics, or exactly-once semantics without fault tolerance, in the future.


> CollectResultIterator should throw exception if user does not enable 
> checkpointing in streaming mode
> 
>
> Key: FLINK-18413
> URL: https://issues.apache.org/jira/browse/FLINK-18413
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Affects Versions: 1.11.0
>Reporter: Caizhi Weng
>Priority: Major
> Fix For: 1.11.0
>
>
> Currently {{CollectResultIterator}} will only output results after a 
> checkpoint in streaming mode. It is confusing for users who do not enable 
> checkpointing that the iterator gets stuck forever, without any notice.
> As 1.11 is about to be released, we should at least throw an exception 
> notifying the user to enable checkpointing. We'll support iterators with 
> at-least-once semantics, or exactly-once semantics without fault tolerance, 
> in the future.





[jira] [Updated] (FLINK-18413) CollectResultIterator should throw exception if user does not enable checkpointing in streaming mode

2020-06-22 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng updated FLINK-18413:

Component/s: (was: Table SQL / API)
 API / DataStream

> CollectResultIterator should throw exception if user does not enable 
> checkpointing in streaming mode
> 
>
> Key: FLINK-18413
> URL: https://issues.apache.org/jira/browse/FLINK-18413
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Affects Versions: 1.11.0
>Reporter: Caizhi Weng
>Priority: Major
> Fix For: 1.11.0
>
>
> Currently the iterator returned by {{TableResult.collect}} will only output 
> results after a checkpoint in streaming mode. It is confusing for users who 
> do not enable checkpointing that the iterator gets stuck forever, without 
> any notice.
> As 1.11 is about to be released, we should at least throw an exception 
> notifying the user to enable checkpointing. We'll support iterators with 
> at-least-once semantics, or exactly-once semantics without fault tolerance, 
> in the future.





[jira] [Updated] (FLINK-18413) CollectResultIterator should throw exception if user does not enable checkpointing in streaming mode

2020-06-22 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng updated FLINK-18413:

Summary: CollectResultIterator should throw exception if user does not 
enable checkpointing in streaming mode  (was: TableResult.collect should throw 
exception if user does not enable checkpointing in streaming mode)

> CollectResultIterator should throw exception if user does not enable 
> checkpointing in streaming mode
> 
>
> Key: FLINK-18413
> URL: https://issues.apache.org/jira/browse/FLINK-18413
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.11.0
>Reporter: Caizhi Weng
>Priority: Major
> Fix For: 1.11.0
>
>
> Currently the iterator returned by {{TableResult.collect}} will only output 
> results after a checkpoint in streaming mode. It is confusing for users who 
> do not enable checkpointing that the iterator gets stuck forever, without 
> any notice.
> As 1.11 is about to be released, we should at least throw an exception 
> notifying the user to enable checkpointing. We'll support iterators with 
> at-least-once semantics, or exactly-once semantics without fault tolerance, 
> in the future.





[jira] [Created] (FLINK-18413) TableResult.collect should throw exception if user does not enable checkpointing in streaming mode

2020-06-22 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-18413:
---

 Summary: TableResult.collect should throw exception if user does 
not enable checkpointing in streaming mode
 Key: FLINK-18413
 URL: https://issues.apache.org/jira/browse/FLINK-18413
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API
Affects Versions: 1.11.0
Reporter: Caizhi Weng
 Fix For: 1.11.0


Currently the iterator returned by {{TableResult.collect}} will only output 
results after a checkpoint in streaming mode. It is confusing for users who do 
not enable checkpointing that the iterator gets stuck forever, without any 
notice.

As 1.11 is about to be released, we should at least throw an exception notifying 
the user to enable checkpointing. We'll support iterators with at-least-once 
semantics, or exactly-once semantics without fault tolerance, in the future.





[GitHub] [flink] flinkbot edited a comment on pull request #12054: [FLINK-17579] Allow user to set the prefix of TaskManager's ResourceID in standalone mode

2020-06-22 Thread GitBox


flinkbot edited a comment on pull request #12054:
URL: https://github.com/apache/flink/pull/12054#issuecomment-626143079


   
   ## CI report:
   
   * 311c21707305392d6bde18038bb1d8413867419b UNKNOWN
   * e0c34c807adf768d904b5850c9ce4c8b4be4b70a Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3887)
 
   * e85a3c626720f035ff0379e849e36ea6e8db7249 UNKNOWN
   
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #12728: [FLINK-18399] [table-api-java] fix TableResult#print can not print the result of unbounded stream query

2020-06-22 Thread GitBox


flinkbot edited a comment on pull request #12728:
URL: https://github.com/apache/flink/pull/12728#issuecomment-647015309


   
   ## CI report:
   
   * ed25e698b05791c8bf8d4204ff32689cea42fc40 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3926)
 
   * 70dd97ae9379c3de39d32abdbd7bda496f214d53 UNKNOWN
   
   
   







[GitHub] [flink] wuchong commented on a change in pull request #12666: [FLINK-18313][doc] Update Hive dialect doc about VIEW

2020-06-22 Thread GitBox


wuchong commented on a change in pull request #12666:
URL: https://github.com/apache/flink/pull/12666#discussion_r443954497



##
File path: docs/dev/table/hive/hive_dialect.md
##
@@ -272,6 +272,8 @@ CREATE VIEW [IF NOT EXISTS] view_name [(column_name, ...) ]
 
  Alter
 
+**NOTE**: Altering view only works with table API, not supported via SQL 
client.

Review comment:
   ```suggestion
   **NOTE**: Altering view only works in Table API, but not supported via SQL 
client.
   ```
   









[GitHub] [flink] flinkbot edited a comment on pull request #9648: [FLINK-13872] [docs-zh] Translate Operations Playground to Chinese

2020-06-22 Thread GitBox


flinkbot edited a comment on pull request #9648:
URL: https://github.com/apache/flink/pull/9648#issuecomment-529346361


   
   ## CI report:
   
   * 5e136cad1dd232cf77e44ad7063bdd629ac1bef4 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3896)
 
   * 11e3046934f53154f23d229d7418493570afda97 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3937)
 
   
   
   







[jira] [Commented] (FLINK-16048) Support read/write confluent schema registry avro data from Kafka

2020-06-22 Thread Anshul Bansal (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142610#comment-17142610
 ] 

Anshul Bansal commented on FLINK-16048:
---

[~danny0405], any update on this? When is it going to be merged to master?

> Support read/write confluent schema registry avro data  from Kafka
> --
>
> Key: FLINK-16048
> URL: https://issues.apache.org/jira/browse/FLINK-16048
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / Ecosystem
>Affects Versions: 1.11.0
>Reporter: Leonard Xu
>Priority: Major
>  Labels: usability
> Fix For: 1.12.0
>
>
> I found that the SQL Kafka connector cannot consume avro data that was 
> serialized by `KafkaAvroSerializer` and can only consume Row data with an avro 
> schema, because we use `AvroRowDeserializationSchema/AvroRowSerializationSchema` 
> to serialize/deserialize data in `AvroRowFormatFactory`. 
> I think we should support this because `KafkaAvroSerializer` is very common 
> in Kafka, and someone ran into the same question on Stack Overflow [1].
> [[1]https://stackoverflow.com/questions/56452571/caused-by-org-apache-avro-avroruntimeexception-malformed-data-length-is-negat/56478259|https://stackoverflow.com/questions/56452571/caused-by-org-apache-avro-avroruntimeexception-malformed-data-length-is-negat/56478259]
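For context, the incompatibility comes from the Confluent wire format: `KafkaAvroSerializer` prefixes every record with a 5-byte header (one magic byte, then a 4-byte big-endian schema registry id), which a plain Avro deserializer misreads — hence errors like the "length is negative" one in [1]. A sketch of splitting off that header (a real connector would also look the schema id up in the registry):

```python
import struct

MAGIC_BYTE = 0  # first byte of every Confluent-framed record


def parse_confluent_record(record: bytes):
    """Split a KafkaAvroSerializer record into (schema_id, avro_payload)."""
    if len(record) < 5 or record[0] != MAGIC_BYTE:
        raise ValueError("Not a Confluent schema-registry encoded record")
    # Bytes 1..4 are the schema registry id, big-endian unsigned int.
    schema_id = struct.unpack(">I", record[1:5])[0]
    # The remainder is the ordinary Avro binary payload.
    return schema_id, record[5:]
```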





[jira] [Commented] (FLINK-18364) A streaming sql cause "org.apache.flink.table.api.ValidationException: Type TIMESTAMP(6) of table field 'rowtime' does not match with the physical type TIMESTAMP(3) of

2020-06-22 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142609#comment-17142609
 ] 

Jark Wu commented on FLINK-18364:
-

cc [~lirui]

> A streaming sql cause "org.apache.flink.table.api.ValidationException: Type 
> TIMESTAMP(6) of table field 'rowtime' does not match with the physical type 
> TIMESTAMP(3) of the 'rowtime' field of the TableSink consumed type"
> ---
>
> Key: FLINK-18364
> URL: https://issues.apache.org/jira/browse/FLINK-18364
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0
> Environment: The input data is:
> 2015-02-15 10:15:00.0|1|paint|10
> 2015-02-15 10:24:15.0|2|paper|5
> 2015-02-15 10:24:45.0|3|brush|12
> 2015-02-15 10:58:00.0|4|paint|3
> 2015-02-15 11:10:00.0|5|paint|3
>Reporter: xiaojin.wy
>Priority: Major
> Fix For: 1.11.0
>
>
> *The whole error is:*
>  Caused by: org.apache.flink.table.api.ValidationException: Type TIMESTAMP(6) 
> of table field 'rowtime' does not match with the physical type TIMESTAMP(3) 
> of the 'rowtime' field of the TableSink consumed type. at 
> org.apache.flink.table.utils.TypeMappingUtils.lambda$checkPhysicalLogicalTypeCompatible$5(TypeMappingUtils.java:178)
>  at 
> org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:300)
>  at 
> org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:267)
>  at 
> org.apache.flink.table.types.logical.utils.LogicalTypeDefaultVisitor.visit(LogicalTypeDefaultVisitor.java:132)
>  at 
> org.apache.flink.table.types.logical.TimestampType.accept(TimestampType.java:152)
>  at 
> org.apache.flink.table.utils.TypeMappingUtils.checkIfCompatible(TypeMappingUtils.java:267)
>  at 
> org.apache.flink.table.utils.TypeMappingUtils.checkPhysicalLogicalTypeCompatible(TypeMappingUtils.java:174)
>  at 
> org.apache.flink.table.planner.sinks.TableSinkUtils$$anonfun$validateLogicalPhysicalTypesCompatible$1.apply$mcVI$sp(TableSinkUtils.scala:368)
>  at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160) at 
> org.apache.flink.table.planner.sinks.TableSinkUtils$.validateLogicalPhysicalTypesCompatible(TableSinkUtils.scala:361)
>  at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:209)
>  at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:204)
>  at scala.Option.map(Option.scala:146) at 
> org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:204)
>  at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:163)
>  at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:163)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:163)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1249)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.translateAndClearBuffer(TableEnvironmentImpl.java:1241)
>  at 
> org.apache.flink.table.api.bridge.java.internal.StreamTableEnvironmentImpl.getPipeline(StreamTableEnvironmentImpl.java:317)
>  at 
> com.ververica.flink.table.gateway.context.ExecutionContext.createPipeline(ExecutionContext.java:223)
>  at 
> com.ververica.flink.table.gateway.operation.SelectOperation.lambda$null$0(SelectOperation.java:225)
>  at 
> com.ververica.flink.table.gateway.deployment.DeploymentUtil.wrapHadoopUserNameIfNeeded(DeploymentUtil.java:48)
>  at 
> com.ververica.flink.table.gateway.operation.SelectOperation.lambda$executeQueryInternal$1(SelectOperation.java:220)
>  at 
> com.ververica.flink.table.gateway.context.ExecutionContext.wrapClassLoaderWithException(ExecutionContext.java:197)
>  at 
> com.ververica.flink.table.gateway.operation.SelectOperation.executeQueryInternal(SelectOperation.java:219)
>  ... 48 more
> I run the sql by sql-gateway.
>  When I run it in a batch environment, the sql run well and can 

[jira] [Commented] (FLINK-18371) NPE of "org.apache.flink.table.data.util.DataFormatConverters$BigDecimalConverter.toExternalImpl(DataFormatConverters.java:680)"

2020-06-22 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142607#comment-17142607
 ] 

Jark Wu commented on FLINK-18371:
-

cc [~Leonard Xu]

> NPE of 
> "org.apache.flink.table.data.util.DataFormatConverters$BigDecimalConverter.toExternalImpl(DataFormatConverters.java:680)"
> 
>
> Key: FLINK-18371
> URL: https://issues.apache.org/jira/browse/FLINK-18371
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0
> Environment: I use the sql-gateway to run this sql.
> *The sql is:*
> CREATE TABLE `src` (
>   key bigint,
>   v varchar
> ) WITH (
>   'connector'='filesystem',
>   'csv.field-delimiter'='|',
>   
> 'path'='/defender_test_data/daily_regression_stream_hive_1.10/test_cast/sources/src.csv',
>   'csv.null-literal'='',
>   'format'='csv'
> )
> select
> cast(key as decimal(10,2)) as c1,
> cast(key as char(10)) as c2,
> cast(key as varchar(10)) as c3
> from src
> order by c1, c2, c3
> limit 1
> *The input data is:*
> 238|val_238
> 86|val_86
> 311|val_311
> 27|val_27
> 165|val_165
> 409|val_409
> 255|val_255
> 278|val_278
> 98|val_98
> 484|val_484
> 265|val_265
> 193|val_193
> 401|val_401
> 150|val_150
> 273|val_273
> 224|val_224
> 369|val_369
> 66|val_66
> 128|val_128
> 213|val_213
> 146|val_146
> 406|val_406
> 429|val_429
> 374|val_374
> 152|val_152
> 469|val_469
> 145|val_145
> 495|val_495
> 37|val_37
> 327|val_327
> 281|val_281
> 277|val_277
> 209|val_209
> 15|val_15
> 82|val_82
> 403|val_403
> 166|val_166
> 417|val_417
> 430|val_430
> 252|val_252
> 292|val_292
> 219|val_219
> 287|val_287
> 153|val_153
> 193|val_193
> 338|val_338
> 446|val_446
> 459|val_459
> 394|val_394
> 237|val_237
> 482|val_482
> 174|val_174
> 413|val_413
> 494|val_494
> 207|val_207
> 199|val_199
> 466|val_466
> 208|val_208
> 174|val_174
> 399|val_399
> 396|val_396
> 247|val_247
> 417|val_417
> 489|val_489
> 162|val_162
> 377|val_377
> 397|val_397
> 309|val_309
> 365|val_365
> 266|val_266
> 439|val_439
> 342|val_342
> 367|val_367
> 325|val_325
> 167|val_167
> 195|val_195
> 475|val_475
> 17|val_17
> 113|val_113
> 155|val_155
> 203|val_203
> 339|val_339
> 0|val_0
> 455|val_455
> 128|val_128
> 311|val_311
> 316|val_316
> 57|val_57
> 302|val_302
> 205|val_205
> 149|val_149
> 438|val_438
> 345|val_345
> 129|val_129
> 170|val_170
> 20|val_20
> 489|val_489
> 157|val_157
> 378|val_378
> 221|val_221
> 92|val_92
> 111|val_111
> 47|val_47
> 72|val_72
> 4|val_4
> 280|val_280
> 35|val_35
> 427|val_427
> 277|val_277
> 208|val_208
> 356|val_356
> 399|val_399
> 169|val_169
> 382|val_382
> 498|val_498
> 125|val_125
> 386|val_386
> 437|val_437
> 469|val_469
> 192|val_192
> 286|val_286
> 187|val_187
> 176|val_176
> 54|val_54
> 459|val_459
> 51|val_51
> 138|val_138
> 103|val_103
> 239|val_239
> 213|val_213
> 216|val_216
> 430|val_430
> 278|val_278
> 176|val_176
> 289|val_289
> 221|val_221
> 65|val_65
> 318|val_318
> 332|val_332
> 311|val_311
> 275|val_275
> 137|val_137
> 241|val_241
> 83|val_83
> 333|val_333
> 180|val_180
> 284|val_284
> 12|val_12
> 230|val_230
> 181|val_181
> 67|val_67
> 260|val_260
> 404|val_404
> 384|val_384
> 489|val_489
> 353|val_353
> 373|val_373
> 272|val_272
> 138|val_138
> 217|val_217
> 84|val_84
> 348|val_348
> 466|val_466
> 58|val_58
> 8|val_8
> 411|val_411
> 230|val_230
> 208|val_208
> 348|val_348
> 24|val_24
> 463|val_463
> 431|val_431
> 179|val_179
> 172|val_172
> 42|val_42
> 129|val_129
> 158|val_158
> 119|val_119
> 496|val_496
> 0|val_0
> 322|val_322
> 197|val_197
> 468|val_468
> 393|val_393
> 454|val_454
> 100|val_100
> 298|val_298
> 199|val_199
> 191|val_191
> 418|val_418
> 96|val_96
> 26|val_26
> 165|val_165
> 327|val_327
> 230|val_230
> 205|val_205
> 120|val_120
> 131|val_131
> 51|val_51
> 404|val_404
> 43|val_43
> 436|val_436
> 156|val_156
> 469|val_469
> 468|val_468
> 308|val_308
> 95|val_95
> 196|val_196
> 288|val_288
> 481|val_481
> 457|val_457
> 98|val_98
> 282|val_282
> 197|val_197
> 187|val_187
> 318|val_318
> 318|val_318
> 409|val_409
> 470|val_470
> 137|val_137
> 369|val_369
> 316|val_316
> 169|val_169
> 413|val_413
> 85|val_85
> 77|val_77
> 0|val_0
> 490|val_490
> 87|val_87
> 364|val_364
> 179|val_179
> 118|val_118
> 134|val_134
> 395|val_395
> 282|val_282
> 138|val_138
> 238|val_238
> 419|val_419
> 15|val_15
> 118|val_118
> 72|val_72
> 90|val_90
> 307|val_307
> 19|val_19
> 435|val_435
> 10|val_10
> 277|val_277
> 273|val_273
> 306|val_306
> 224|val_224
> 309|val_309
> 389|val_389
> 327|val_327
> 242|val_242
> 369|val_369
> 392|val_392
> 272|val_272
> 331|val_331
> 401|val_401
> 242|val_242
> 452|val_452
> 177|val_177
> 226|val_226
> 5|val_5
> 

[GitHub] [flink] lirui-apache commented on pull request #12682: [FLINK-18320][hive] Fix NOTICE and license files for flink-sql-connec…

2020-06-22 Thread GitBox


lirui-apache commented on pull request #12682:
URL: https://github.com/apache/flink/pull/12682#issuecomment-647895172


   Jingsong is on annual leave. @zentol could you please help merge this PR?







[jira] [Commented] (FLINK-18400) A streaming sql has "java.lang.NegativeArraySizeException"

2020-06-22 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142606#comment-17142606
 ] 

Jark Wu commented on FLINK-18400:
-

cc [~TsReaper]

> A streaming sql has "java.lang.NegativeArraySizeException"
> --
>
> Key: FLINK-18400
> URL: https://issues.apache.org/jira/browse/FLINK-18400
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0
> Environment: I use sql-gateway to run the sql.
> *The sql is:*
> SELECT x, round (x ) AS int8_value FROM (VALUES CAST (-2.5 AS DECIMAL(6,1)), 
> CAST(-1.5 AS DECIMAL(6,1)), CAST(-0.5 AS DECIMAL(6,1)), CAST(0.0 AS 
> DECIMAL(6,1)), CAST(0.5 AS DECIMAL(6,1)), CAST(1.5 AS DECIMAL(6,1)), CAST(2.5 
> AS DECIMAL(6,1))) t(x);
> The environment is streaming.
>Reporter: xiaojin.wy
>Priority: Major
> Fix For: 1.11.0
>
>
> 2020-06-22 08:07:31
> org.apache.flink.runtime.JobException: Recovery is suppressed by 
> NoRestartBackoffTimeStrategy
>   at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:116)
>   at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:78)
>   at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:192)
>   at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:185)
>   at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:179)
>   at 
> org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:503)
>   at 
> org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:386)
>   at sun.reflect.GeneratedMethodAccessor27.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:284)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:199)
>   at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
>   at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
>   at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
>   at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
>   at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
>   at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
>   at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
>   at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
>   at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
>   at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
>   at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
>   at akka.actor.ActorCell.invoke(ActorCell.scala:561)
>   at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
>   at akka.dispatch.Mailbox.run(Mailbox.scala:225)
>   at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
>   at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: java.lang.NegativeArraySizeException
>   at 
> org.apache.flink.table.data.binary.BinarySegmentUtils.readDecimalData(BinarySegmentUtils.java:1031)
>   at 
> org.apache.flink.table.data.binary.BinaryRowData.getDecimal(BinaryRowData.java:341)
>   at 
> org.apache.flink.table.data.util.DataFormatConverters$BigDecimalConverter.toExternalImpl(DataFormatConverters.java:685)
>   at 
> org.apache.flink.table.data.util.DataFormatConverters$BigDecimalConverter.toExternalImpl(DataFormatConverters.java:661)
>   at 
> org.apache.flink.table.data.util.DataFormatConverters$DataFormatConverter.toExternal(DataFormatConverters.java:401)
>   at 
> org.apache.flink.table.data.util.DataFormatConverters$RowConverter.toExternalImpl(DataFormatConverters.java:1425)
>   at 
> org.apache.flink.table.data.util.DataFormatConverters$RowConverter.toExternalImpl(DataFormatConverters.java:1404)
>   at 
> 
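Setting the crash aside, the expected output of this query follows from SQL ROUND's half-away-from-zero semantics (assuming Flink matches the usual `BigDecimal` HALF_UP behavior). A quick Python check of those expectations:

```python
from decimal import Decimal, ROUND_HALF_UP


def sql_round(x: Decimal) -> Decimal:
    # ROUND(x) to 0 decimal places, ties rounded away from zero.
    return x.quantize(Decimal("1"), rounding=ROUND_HALF_UP)


# The DECIMAL(6,1) literals from the query and the values ROUND should yield.
inputs = [Decimal(s) for s in ("-2.5", "-1.5", "-0.5", "0.0", "0.5", "1.5", "2.5")]
expected = [Decimal(s) for s in ("-3", "-2", "-1", "0", "1", "2", "3")]
```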

[jira] [Commented] (FLINK-18365) The same sql in a batch env and a streaming env has different value.

2020-06-22 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142604#comment-17142604
 ] 

Jark Wu commented on FLINK-18365:
-

[~zjwang], I postponed this issue to 1.11.1. The root cause is the same as in 
FLINK-15395: streaming mode doesn't emit results for global aggregates. 

> The same sql in a batch env and a streaming env has different value.
> 
>
> Key: FLINK-18365
> URL: https://issues.apache.org/jira/browse/FLINK-18365
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0
>Reporter: xiaojin.wy
>Priority: Major
> Fix For: 1.11.1
>
>
> I use the sql-gateway to run this sql.
> *The input table is:*
>  CREATE TABLE `scott_dept` (
>   deptno INT,
>   dname VARCHAR,
>   loc VARCHAR
> ) WITH (
>   'format.field-delimiter'='|',
>   'connector.type'='filesystem',
>   'format.derive-schema'='true',
>   
> 'connector.path'='/defender_test_data/daily_regression_stream_blink_sql_1.10/test_scalar/sources/scott_dept.csv',
>   'format.type'='csv'
> )
> *The input data is:*
> 10|ACCOUNTING|NEW YORK
> 20|RESEARCH|DALLAS
> 30|SALES|CHICAGO
> 40|OPERATIONS|BOSTON
> *The sql is :*
> select deptno, (select count(*) from scott_emp where 1 = 0) as x from 
> scott_dept
> *The error:*
> In a batch environment, the result value is:10|0\n20|0\n30|0\n40|0
> In a streaming environment, the result value 
> is:10|None\n20|None\n30|None\n40|None
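The batch result is the correct one: a global COUNT(*) over an empty relation yields 0, not None. The discrepancy can be sketched as an event-driven aggregate that only emits when input arrives (illustrative only, not Flink's runtime):

```python
def batch_count(rows):
    # Batch semantics: a global COUNT(*) always produces exactly one value,
    # which is 0 for empty input.
    return len(rows)


def streaming_count_emissions(rows):
    # Naive event-driven global aggregate: it only emits an updated count
    # when a record arrives, so empty input never emits anything at all.
    emissions = []
    count = 0
    for _ in rows:
        count += 1
        emissions.append(count)
    return emissions
```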





[GitHub] [flink] flinkbot edited a comment on pull request #12748: [FLINK-18324][docs-zh] Translate updated data type into Chinese

2020-06-22 Thread GitBox


flinkbot edited a comment on pull request #12748:
URL: https://github.com/apache/flink/pull/12748#issuecomment-647889022


   
   ## CI report:
   
   * 2a146962687424a20b253ed3fcc42700e416375d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3935)
 
   
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #12737: [FLINK-17639][docs] Document which FileSystems are supported by the Streami…

2020-06-22 Thread GitBox


flinkbot edited a comment on pull request #12737:
URL: https://github.com/apache/flink/pull/12737#issuecomment-647407770


   
   ## CI report:
   
   * cccbc2294423fce32765002c73acbb06b5ed40a4 UNKNOWN
   * ec2483d7dd2f9b5e20cdde4673c77eeaf377491b Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3902)
 
   * f009270ba46fafe795a77626ed0acbb50389d4c2 UNKNOWN
   
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #12749: [FLINK-18411][tests] Fix CollectionExecutorTest failed to compiled in release-1.11

2020-06-22 Thread GitBox


flinkbot edited a comment on pull request #12749:
URL: https://github.com/apache/flink/pull/12749#issuecomment-647889069


   
   ## CI report:
   
   * cd3ca6ba8e6cd4d00ce2dfb1c48eeb4e3f81392f Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3936)
 
   
   
   







[jira] [Updated] (FLINK-18365) The same sql in a batch env and a streaming env has different value.

2020-06-22 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-18365:

Fix Version/s: (was: 1.11.0)
   1.11.1

> The same sql in a batch env and a streaming env has different value.
> 
>
> Key: FLINK-18365
> URL: https://issues.apache.org/jira/browse/FLINK-18365
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0
>Reporter: xiaojin.wy
>Priority: Major
> Fix For: 1.11.1
>
>
> I use the sql-gateway to run this sql.
> *The input table is:*
>  CREATE TABLE `scott_dept` (
>   deptno INT,
>   dname VARCHAR,
>   loc VARCHAR
> ) WITH (
>   'format.field-delimiter'='|',
>   'connector.type'='filesystem',
>   'format.derive-schema'='true',
>   
> 'connector.path'='/defender_test_data/daily_regression_stream_blink_sql_1.10/test_scalar/sources/scott_dept.csv',
>   'format.type'='csv'
> )
> *The input data is:*
> 10|ACCOUNTING|NEW YORK
> 20|RESEARCH|DALLAS
> 30|SALES|CHICAGO
> 40|OPERATIONS|BOSTON
> *The sql is :*
> select deptno, (select count(*) from scott_emp where 1 = 0) as x from 
> scott_dept
> *The error:*
> In a batch environment, the result value is:10|0\n20|0\n30|0\n40|0
> In a streaming environment, the result value 
> is:10|None\n20|None\n30|None\n40|None





[GitHub] [flink] godfreyhe commented on pull request #12577: [FLINK-17599][docs] Update documents due to FLIP-84

2020-06-22 Thread GitBox


godfreyhe commented on pull request #12577:
URL: https://github.com/apache/flink/pull/12577#issuecomment-647893295


   cc @twalthr 







[GitHub] [flink] flinkbot edited a comment on pull request #9648: [FLINK-13872] [docs-zh] Translate Operations Playground to Chinese

2020-06-22 Thread GitBox


flinkbot edited a comment on pull request #9648:
URL: https://github.com/apache/flink/pull/9648#issuecomment-529346361


   
   ## CI report:
   
   * 5e136cad1dd232cf77e44ad7063bdd629ac1bef4 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3896)
 
   * 11e3046934f53154f23d229d7418493570afda97 UNKNOWN
   
   
   







[GitHub] [flink] godfreyhe commented on pull request #12728: [FLINK-18399] [table-api-java] fix TableResult#print can not print the result of unbounded stream query

2020-06-22 Thread GitBox


godfreyhe commented on pull request #12728:
URL: https://github.com/apache/flink/pull/12728#issuecomment-647892983


   cc @twalthr @wuchong







[jira] [Commented] (FLINK-18376) java.lang.ArrayIndexOutOfBoundsException in RetractableTopNFunction

2020-06-22 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142602#comment-17142602
 ] 

Jark Wu commented on FLINK-18376:
-

I think FLINK-17625 didn't fix this issue.

> java.lang.ArrayIndexOutOfBoundsException in RetractableTopNFunction
> ---
>
> Key: FLINK-18376
> URL: https://issues.apache.org/jira/browse/FLINK-18376
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: LakeShen
>Priority: Major
> Fix For: 1.11.1
>
>
> java.lang.ArrayIndexOutOfBoundsException: -1
>   at java.util.ArrayList.elementData(ArrayList.java:422)
>   at java.util.ArrayList.get(ArrayList.java:435)
>   at 
> org.apache.flink.table.runtime.operators.rank.RetractableTopNFunction.retractRecordWithoutRowNumber(RetractableTopNFunction.java:392)
>   at 
> org.apache.flink.table.runtime.operators.rank.RetractableTopNFunction.processElement(RetractableTopNFunction.java:160)
>   at 
> org.apache.flink.table.runtime.operators.rank.RetractableTopNFunction.processElement(RetractableTopNFunction.java:54)
>   at 
> org.apache.flink.streaming.api.operators.KeyedProcessOperator.processElement(KeyedProcessOperator.java:85)
>   at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:173)
>   at 
> org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.processElement(StreamTaskNetworkInput.java:151)
>   at 
> org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.emitNext(StreamTaskNetworkInput.java:128)
>   at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:69)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:311)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:187)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:487)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:470)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:707)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:532)
>   at java.lang.Thread.run(Thread.java:748)
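The trace bottoms out in `ArrayList.get(-1)`: a retraction for a record that is not in the buffered list produces index -1, which is then used unchecked. A small sketch of the failure mode and a defensive variant (illustrative; the actual fix belongs in `RetractableTopNFunction`):

```python
def index_of(buffer, record):
    # Java-style indexOf: returns -1 when the record is absent.
    return buffer.index(record) if record in buffer else -1


def retract_unchecked(buffer, record):
    # Mirrors the buggy pattern: the -1 from indexOf is used without a check.
    idx = index_of(buffer, record)
    # With idx == -1 Python silently deletes the wrong (last) element;
    # Java's ArrayList.get(-1) throws ArrayIndexOutOfBoundsException instead.
    del buffer[idx]


def retract_checked(buffer, record):
    # Defensive variant: ignore a retraction for a record we never buffered.
    idx = index_of(buffer, record)
    if idx < 0:
        return False
    del buffer[idx]
    return True
```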





[GitHub] [flink] guoweiM commented on a change in pull request #12737: [FLINK-17639][docs] Document which FileSystems are supported by the Streami…

2020-06-22 Thread GitBox


guoweiM commented on a change in pull request #12737:
URL: https://github.com/apache/flink/pull/12737#discussion_r443945110



##
File path: docs/dev/connectors/streamfile_sink.zh.md
##
@@ -705,6 +705,9 @@ Hadoop 2.7 之前的版本不支持这个方法,因此 Flink 会报异常。
 重要提示 3: Flink 以及 `StreamingFileSink` 
不会覆盖已经提交的数据。因此如果尝试从一个包含 in-progress 文件的旧 checkpoint/savepoint 恢复,
 且这些 in-progress 文件会被接下来的成功 checkpoint 提交,Flink 会因为无法找到 in-progress 
文件而抛异常,从而恢复失败。
 
+重要提示 4: Flink 以及 `StreamingFileSink` 
不会覆盖已经提交的数据。因此如果尝试从一个包含 in-progress 文件的旧 checkpoint/savepoint 恢复,

Review comment:
   You are right. :P 
   I think it might be the md preview plugin that caused the problem.
   I previewed the document with the md plugin, but the preview hung the whole 
IntelliJ. I just killed IntelliJ, committed the result, and pushed the branch. 
:(( Maybe some changes were lost because of that. 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (FLINK-18399) TableResult#print can not print the result of unbounded stream query

2020-06-22 Thread Zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijiang reassigned FLINK-18399:


Assignee: Zhijiang

> TableResult#print can not print the result of unbounded stream query
> 
>
> Key: FLINK-18399
> URL: https://issues.apache.org/jira/browse/FLINK-18399
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0
>Reporter: godfrey he
>Assignee: Zhijiang
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> In the current implementation of PrintUtils, all results are first collected 
> into local memory to compute the column widths. This works fine for batch 
> queries and bounded stream queries, but an unbounded stream query never ends, 
> so its result is never printed. To solve this, we can use a fixed-width 
> strategy and print each row immediately as it arrives.
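A minimal, hypothetical sketch of the fixed-width strategy described above (the class and method names are illustrative, not Flink's actual PrintUtils API): each cell is padded or truncated to a fixed column width, so a row can be formatted and printed the moment it arrives instead of being buffered.

```java
// Hypothetical sketch only: "FixedWidthPrinter", "formatRow", and "fit" are
// illustrative names, not Flink's PrintUtils API.
public class FixedWidthPrinter {
    private final int columnWidth;

    public FixedWidthPrinter(int columnWidth) {
        this.columnWidth = columnWidth;
    }

    /** Formats one row immediately; no buffering of other rows is needed. */
    public String formatRow(String[] cells) {
        StringBuilder sb = new StringBuilder("|");
        for (String cell : cells) {
            sb.append(' ').append(fit(cell)).append(" |");
        }
        return sb.toString();
    }

    /** Pads or truncates a cell so every row has the same column widths. */
    private String fit(String cell) {
        if (cell.length() > columnWidth) {
            return cell.substring(0, columnWidth - 3) + "...";
        }
        return String.format("%-" + columnWidth + "s", cell);
    }

    public static void main(String[] args) {
        FixedWidthPrinter printer = new FixedWidthPrinter(10);
        System.out.println(printer.formatRow(new String[] {"deptno", "averylongdname"}));
    }
}
```

The trade-off is that long values get truncated, but the output stream stays incremental, which is exactly what an unbounded query needs.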





[jira] [Assigned] (FLINK-18399) TableResult#print can not print the result of unbounded stream query

2020-06-22 Thread Zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijiang reassigned FLINK-18399:


Assignee: godfrey he  (was: Zhijiang)

> TableResult#print can not print the result of unbounded stream query
> 
>
> Key: FLINK-18399
> URL: https://issues.apache.org/jira/browse/FLINK-18399
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0
>Reporter: godfrey he
>Assignee: godfrey he
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> In the current implementation of PrintUtils, all results are first collected 
> into local memory to compute the column widths. This works fine for batch 
> queries and bounded stream queries, but an unbounded stream query never ends, 
> so its result is never printed. To solve this, we can use a fixed-width 
> strategy and print each row immediately as it arrives.





[jira] [Commented] (FLINK-18351) ModuleManager creates a lot of duplicate/similar log messages

2020-06-22 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142601#comment-17142601
 ] 

Jark Wu commented on FLINK-18351:
-

+1 to downgrade the log level. 

> ModuleManager creates a lot of duplicate/similar log messages
> -
>
> Key: FLINK-18351
> URL: https://issues.apache.org/jira/browse/FLINK-18351
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Assignee: Shengkai Fang
>Priority: Major
>
> This is a follow up to FLINK-17977: 
> {code}
> 2020-06-03 15:02:11,982 INFO  org.apache.flink.table.module.ModuleManager 
>  [] - Got FunctionDefinition 'as' from 'core' module.
> 2020-06-03 15:02:11,988 INFO  org.apache.flink.table.module.ModuleManager 
>  [] - Got FunctionDefinition 'sum' from 'core' module.
> 2020-06-03 15:02:12,139 INFO  org.apache.flink.table.module.ModuleManager 
>  [] - Got FunctionDefinition 'as' from 'core' module.
> 2020-06-03 15:02:12,159 INFO  org.apache.flink.table.module.ModuleManager 
>  [] - Got FunctionDefinition 'equals' from 'core' module.
> {code}





[jira] [Commented] (FLINK-18376) java.lang.ArrayIndexOutOfBoundsException in RetractableTopNFunction

2020-06-22 Thread LakeShen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142600#comment-17142600
 ] 

LakeShen commented on FLINK-18376:
--

FLINK-17625 fixed the ArrayIndexOutOfBoundsException in 
AppendOnlyTopNFunction, but this one occurs in RetractableTopNFunction.

I don't know whether it has been fixed in Flink 1.11.

> java.lang.ArrayIndexOutOfBoundsException in RetractableTopNFunction
> ---
>
> Key: FLINK-18376
> URL: https://issues.apache.org/jira/browse/FLINK-18376
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: LakeShen
>Priority: Major
> Fix For: 1.11.1
>
>
> java.lang.ArrayIndexOutOfBoundsException: -1
>   at java.util.ArrayList.elementData(ArrayList.java:422)
>   at java.util.ArrayList.get(ArrayList.java:435)
>   at 
> org.apache.flink.table.runtime.operators.rank.RetractableTopNFunction.retractRecordWithoutRowNumber(RetractableTopNFunction.java:392)
>   at 
> org.apache.flink.table.runtime.operators.rank.RetractableTopNFunction.processElement(RetractableTopNFunction.java:160)
>   at 
> org.apache.flink.table.runtime.operators.rank.RetractableTopNFunction.processElement(RetractableTopNFunction.java:54)
>   at 
> org.apache.flink.streaming.api.operators.KeyedProcessOperator.processElement(KeyedProcessOperator.java:85)
>   at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:173)
>   at 
> org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.processElement(StreamTaskNetworkInput.java:151)
>   at 
> org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.emitNext(StreamTaskNetworkInput.java:128)
>   at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:69)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:311)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:187)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:487)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:470)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:707)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:532)
>   at java.lang.Thread.run(Thread.java:748)
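As a rough illustration of the failure mode in the stack trace above (an assumed root cause, not the actual RetractableTopNFunction code): retracting a record that is no longer in the ranked list yields `indexOf == -1`, and `ArrayList.get(-1)` throws an out-of-bounds exception; a bounds check sidesteps the crash.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative reproduction of the suspected failure mode (an assumption, not
// Flink code): looking up a record that is absent from the ranked list returns
// index -1, and accessing list.get(-1) throws the exception seen above.
public class RetractGuardDemo {

    /** Guarded retraction: skip silently when the record cannot be found. */
    static boolean retractSafe(List<String> ranked, String record) {
        int index = ranked.indexOf(record); // -1 when the record is absent
        if (index < 0) {
            return false; // nothing to retract; avoid ranked.get(-1)
        }
        ranked.remove(index);
        return true;
    }

    public static void main(String[] args) {
        List<String> ranked = new ArrayList<>(Arrays.asList("a", "b"));
        System.out.println(retractSafe(ranked, "missing")); // prints false
        System.out.println(retractSafe(ranked, "a"));       // prints true
    }
}
```

Whether silently skipping or logging such a miss is appropriate depends on whether a missing record indicates corrupted state, which the actual fix would need to decide.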





[jira] [Commented] (FLINK-18365) The same sql in a batch env and a streaming env has different value.

2020-06-22 Thread Zhijiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142599#comment-17142599
 ] 

Zhijiang commented on FLINK-18365:
--

[~jark] Do you think this is a blocker issue for the release? The current 
priority is `Major`, so it will not be tracked on the kanban board.

> The same sql in a batch env and a streaming env has different value.
> 
>
> Key: FLINK-18365
> URL: https://issues.apache.org/jira/browse/FLINK-18365
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0
>Reporter: xiaojin.wy
>Priority: Major
> Fix For: 1.11.0
>
>
> I use the sql-gateway to run this sql.
> *The input table is:*
>  CREATE TABLE `scott_dept` (
>   deptno INT,
>   dname VARCHAR,
>   loc VARCHAR
> ) WITH (
>   'format.field-delimiter'='|',
>   'connector.type'='filesystem',
>   'format.derive-schema'='true',
>   
> 'connector.path'='/defender_test_data/daily_regression_stream_blink_sql_1.10/test_scalar/sources/scott_dept.csv',
>   'format.type'='csv'
> )
> *The input data is:*
> 10|ACCOUNTING|NEW YORK
> 20|RESEARCH|DALLAS
> 30|SALES|CHICAGO
> 40|OPERATIONS|BOSTON
> *The sql is :*
> select deptno, (select count(*) from scott_emp where 1 = 0) as x from 
> scott_dept
> *The error:*
> In a batch environment, the result value is:10|0\n20|0\n30|0\n40|0
> In a streaming environment, the result value 
> is:10|None\n20|None\n30|None\n40|None





[GitHub] [flink] flinkbot commented on pull request #12748: [FLINK-18324][docs-zh] Translate updated data type into Chinese

2020-06-22 Thread GitBox


flinkbot commented on pull request #12748:
URL: https://github.com/apache/flink/pull/12748#issuecomment-647889022


   
   ## CI report:
   
   * 2a146962687424a20b253ed3fcc42700e416375d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #12749: [FLINK-18411][tests] Fix CollectionExecutorTest failed to compiled in release-1.11

2020-06-22 Thread GitBox


flinkbot commented on pull request #12749:
URL: https://github.com/apache/flink/pull/12749#issuecomment-647889069


   
   ## CI report:
   
   * cd3ca6ba8e6cd4d00ce2dfb1c48eeb4e3f81392f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Updated] (FLINK-18385) Translate "DataGen SQL Connector" page into Chinese

2020-06-22 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-18385:

Fix Version/s: 1.11.0

> Translate "DataGen SQL Connector" page into Chinese
> ---
>
> Key: FLINK-18385
> URL: https://issues.apache.org/jira/browse/FLINK-18385
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation, Table SQL / Ecosystem
>Reporter: Jark Wu
>Assignee: venn wu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/connectors/datagen.html
> The markdown file is located in flink/docs/dev/table/connectors/datagen.zh.md





[jira] [Closed] (FLINK-18385) Translate "DataGen SQL Connector" page into Chinese

2020-06-22 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-18385.
---
Resolution: Fixed

- master (1.12.0): 0c20f25fd5a435143fd607dfbf3b18671d574fc2
- 1.11.0: cba18bde4a17832e01c39584c050c81b794987b6

> Translate "DataGen SQL Connector" page into Chinese
> ---
>
> Key: FLINK-18385
> URL: https://issues.apache.org/jira/browse/FLINK-18385
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation, Table SQL / Ecosystem
>Reporter: Jark Wu
>Assignee: venn wu
>Priority: Major
>  Labels: pull-request-available
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/connectors/datagen.html
> The markdown file is located in flink/docs/dev/table/connectors/datagen.zh.md





[jira] [Commented] (FLINK-18376) java.lang.ArrayIndexOutOfBoundsException in RetractableTopNFunction

2020-06-22 Thread LakeShen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142597#comment-17142597
 ] 

LakeShen commented on FLINK-18376:
--

Hi [~libenchao], the Flink version is 1.10. I will check later whether it 
reproduces on master.

> java.lang.ArrayIndexOutOfBoundsException in RetractableTopNFunction
> ---
>
> Key: FLINK-18376
> URL: https://issues.apache.org/jira/browse/FLINK-18376
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: LakeShen
>Priority: Major
> Fix For: 1.11.1
>
>
> java.lang.ArrayIndexOutOfBoundsException: -1
>   at java.util.ArrayList.elementData(ArrayList.java:422)
>   at java.util.ArrayList.get(ArrayList.java:435)
>   at 
> org.apache.flink.table.runtime.operators.rank.RetractableTopNFunction.retractRecordWithoutRowNumber(RetractableTopNFunction.java:392)
>   at 
> org.apache.flink.table.runtime.operators.rank.RetractableTopNFunction.processElement(RetractableTopNFunction.java:160)
>   at 
> org.apache.flink.table.runtime.operators.rank.RetractableTopNFunction.processElement(RetractableTopNFunction.java:54)
>   at 
> org.apache.flink.streaming.api.operators.KeyedProcessOperator.processElement(KeyedProcessOperator.java:85)
>   at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:173)
>   at 
> org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.processElement(StreamTaskNetworkInput.java:151)
>   at 
> org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.emitNext(StreamTaskNetworkInput.java:128)
>   at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:69)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:311)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:187)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:487)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:470)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:707)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:532)
>   at java.lang.Thread.run(Thread.java:748)





[GitHub] [flink] wuchong closed pull request #12730: [FLINK-18385][docs-zh] Translate "DataGen SQL Connector" page into Chinese

2020-06-22 Thread GitBox


wuchong closed pull request #12730:
URL: https://github.com/apache/flink/pull/12730


   







[jira] [Commented] (FLINK-18364) A streaming sql cause "org.apache.flink.table.api.ValidationException: Type TIMESTAMP(6) of table field 'rowtime' does not match with the physical type TIMESTAMP(3) of

2020-06-22 Thread Zhijiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142595#comment-17142595
 ] 

Zhijiang commented on FLINK-18364:
--

[~lzljs3620320] is on vacation these days. [~ykt836] [~jark], if this really is 
a bug, could you help evaluate whether it is a blocker for the release? The 
current priority is `Major`.

> A streaming sql cause "org.apache.flink.table.api.ValidationException: Type 
> TIMESTAMP(6) of table field 'rowtime' does not match with the physical type 
> TIMESTAMP(3) of the 'rowtime' field of the TableSink consumed type"
> ---
>
> Key: FLINK-18364
> URL: https://issues.apache.org/jira/browse/FLINK-18364
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0
> Environment: The input data is:
> 2015-02-15 10:15:00.0|1|paint|10
> 2015-02-15 10:24:15.0|2|paper|5
> 2015-02-15 10:24:45.0|3|brush|12
> 2015-02-15 10:58:00.0|4|paint|3
> 2015-02-15 11:10:00.0|5|paint|3
>Reporter: xiaojin.wy
>Priority: Major
> Fix For: 1.11.0
>
>
> *The whole error is:*
>  Caused by: org.apache.flink.table.api.ValidationException: Type TIMESTAMP(6) 
> of table field 'rowtime' does not match with the physical type TIMESTAMP(3) 
> of the 'rowtime' field of the TableSink consumed type. at 
> org.apache.flink.table.utils.TypeMappingUtils.lambda$checkPhysicalLogicalTypeCompatible$5(TypeMappingUtils.java:178)
>  at 
> org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:300)
>  at 
> org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:267)
>  at 
> org.apache.flink.table.types.logical.utils.LogicalTypeDefaultVisitor.visit(LogicalTypeDefaultVisitor.java:132)
>  at 
> org.apache.flink.table.types.logical.TimestampType.accept(TimestampType.java:152)
>  at 
> org.apache.flink.table.utils.TypeMappingUtils.checkIfCompatible(TypeMappingUtils.java:267)
>  at 
> org.apache.flink.table.utils.TypeMappingUtils.checkPhysicalLogicalTypeCompatible(TypeMappingUtils.java:174)
>  at 
> org.apache.flink.table.planner.sinks.TableSinkUtils$$anonfun$validateLogicalPhysicalTypesCompatible$1.apply$mcVI$sp(TableSinkUtils.scala:368)
>  at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160) at 
> org.apache.flink.table.planner.sinks.TableSinkUtils$.validateLogicalPhysicalTypesCompatible(TableSinkUtils.scala:361)
>  at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:209)
>  at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:204)
>  at scala.Option.map(Option.scala:146) at 
> org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:204)
>  at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:163)
>  at 
> org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:163)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:891) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1334) at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:163)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1249)
>  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.translateAndClearBuffer(TableEnvironmentImpl.java:1241)
>  at 
> org.apache.flink.table.api.bridge.java.internal.StreamTableEnvironmentImpl.getPipeline(StreamTableEnvironmentImpl.java:317)
>  at 
> com.ververica.flink.table.gateway.context.ExecutionContext.createPipeline(ExecutionContext.java:223)
>  at 
> com.ververica.flink.table.gateway.operation.SelectOperation.lambda$null$0(SelectOperation.java:225)
>  at 
> com.ververica.flink.table.gateway.deployment.DeploymentUtil.wrapHadoopUserNameIfNeeded(DeploymentUtil.java:48)
>  at 
> com.ververica.flink.table.gateway.operation.SelectOperation.lambda$executeQueryInternal$1(SelectOperation.java:220)
>  at 
> com.ververica.flink.table.gateway.context.ExecutionContext.wrapClassLoaderWithException(ExecutionContext.java:197)
>  at 
> 

[jira] [Comment Edited] (FLINK-18351) ModuleManager creates a lot of duplicate/similar log messages

2020-06-22 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142594#comment-17142594
 ] 

Shengkai Fang edited comment on FLINK-18351 at 6/23/20, 3:36 AM:
-

The repeated logs appear because ModuleManager prints this message every time 
it loads a function definition from the loaded modules. Users likely pay more 
attention to functions whose definitions cannot be found than to 'normal' 
lookups, so we can simply turn down the log level. 


was (Author: fsk119):
The repeated logs appear because ModuleManager prints this message every time 
it loads a function definition from the loaded modules. Users likely pay more 
attention to functions whose definitions cannot be found than to 'normal' 
lookups, so we can simply turn down the log level from INFO to DEBUG. 

> ModuleManager creates a lot of duplicate/similar log messages
> -
>
> Key: FLINK-18351
> URL: https://issues.apache.org/jira/browse/FLINK-18351
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Assignee: Shengkai Fang
>Priority: Major
>
> This is a follow up to FLINK-17977: 
> {code}
> 2020-06-03 15:02:11,982 INFO  org.apache.flink.table.module.ModuleManager 
>  [] - Got FunctionDefinition 'as' from 'core' module.
> 2020-06-03 15:02:11,988 INFO  org.apache.flink.table.module.ModuleManager 
>  [] - Got FunctionDefinition 'sum' from 'core' module.
> 2020-06-03 15:02:12,139 INFO  org.apache.flink.table.module.ModuleManager 
>  [] - Got FunctionDefinition 'as' from 'core' module.
> 2020-06-03 15:02:12,159 INFO  org.apache.flink.table.module.ModuleManager 
>  [] - Got FunctionDefinition 'equals' from 'core' module.
> {code}





[jira] [Comment Edited] (FLINK-18278) Translate new documenation homepage

2020-06-22 Thread zhangzhanhua (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142593#comment-17142593
 ] 

zhangzhanhua edited comment on FLINK-18278 at 6/23/20, 3:36 AM:


[~libenchao]  [~sjwiesman] 
In the doc:

 [Stateful Functions]({% if site.is_stable %} {{ 
{color:red}site.statefundocs_stable_baseurl{color} }} {% else %} {{ 
{color:red}site.statefundocs_baseurl{color} }} {% endif %})

Are there corresponding Chinese url variables for this?


was (Author: zhangzhanhua):
[~libenchao]  [~sjwiesman] 
In the doc:

 [Stateful Functions]({% if site.is_stable %} {{ 
{color:red}site.statefundocs_stable_baseurl{color} }} {% else %} {{ 
{color:red}site.statefundocs_baseurl{color} }} {% endif %})

Is there a corresponding Chinese url variable for this?

> Translate new documenation homepage
> ---
>
> Key: FLINK-18278
> URL: https://issues.apache.org/jira/browse/FLINK-18278
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation
>Reporter: Seth Wiesman
>Assignee: zhangzhanhua
>Priority: Major
>
> Sync changes with FLINK-17981



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18351) ModuleManager creates a lot of duplicate/similar log messages

2020-06-22 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142594#comment-17142594
 ] 

Shengkai Fang commented on FLINK-18351:
---

The repeated logs appear because ModuleManager prints this message every time 
it loads a function definition from the loaded modules. Users likely pay more 
attention to functions whose definitions cannot be found than to 'normal' 
lookups, so we can simply turn down the log level from INFO to DEBUG. 
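A hedged sketch of the proposed change, using `java.util.logging` rather than Flink's actual logging setup, with `lookupFunction` as a hypothetical stand-in for ModuleManager's lookup: routine hits go to the debug-equivalent level, while unresolved functions stay visible.

```java
import java.util.Set;
import java.util.logging.Level;
import java.util.logging.Logger;

// Hedged sketch: java.util.logging instead of Flink's SLF4J setup, and
// "lookupFunction" is a hypothetical stand-in for ModuleManager's lookup.
public class ModuleLookupLogging {
    private static final Logger LOG = Logger.getLogger("ModuleManager");

    static boolean lookupFunction(String name, Set<String> coreModule) {
        if (coreModule.contains(name)) {
            // Routine hit: FINE (debug) level, so it no longer floods INFO logs.
            LOG.log(Level.FINE, "Got FunctionDefinition ''{0}'' from ''core'' module.", name);
            return true;
        }
        // A miss is what users actually need to see: keep it at a visible level.
        LOG.log(Level.WARNING, "Cannot find FunctionDefinition ''{0}'' in any loaded module.", name);
        return false;
    }

    public static void main(String[] args) {
        Set<String> core = Set.of("as", "sum", "equals");
        System.out.println(lookupFunction("sum", core));     // prints true
        System.out.println(lookupFunction("unknown", core)); // prints false
    }
}
```

This keeps the per-lookup message available for debugging while removing the per-operator noise reported in the issue.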

> ModuleManager creates a lot of duplicate/similar log messages
> -
>
> Key: FLINK-18351
> URL: https://issues.apache.org/jira/browse/FLINK-18351
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Assignee: Shengkai Fang
>Priority: Major
>
> This is a follow up to FLINK-17977: 
> {code}
> 2020-06-03 15:02:11,982 INFO  org.apache.flink.table.module.ModuleManager 
>  [] - Got FunctionDefinition 'as' from 'core' module.
> 2020-06-03 15:02:11,988 INFO  org.apache.flink.table.module.ModuleManager 
>  [] - Got FunctionDefinition 'sum' from 'core' module.
> 2020-06-03 15:02:12,139 INFO  org.apache.flink.table.module.ModuleManager 
>  [] - Got FunctionDefinition 'as' from 'core' module.
> 2020-06-03 15:02:12,159 INFO  org.apache.flink.table.module.ModuleManager 
>  [] - Got FunctionDefinition 'equals' from 'core' module.
> {code}





[jira] [Commented] (FLINK-18278) Translate new documenation homepage

2020-06-22 Thread zhangzhanhua (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142593#comment-17142593
 ] 

zhangzhanhua commented on FLINK-18278:
--

[~libenchao]  [~sjwiesman] 
In the doc:

 [Stateful Functions]({% if site.is_stable %} {{ 
{color:red}site.statefundocs_stable_baseurl{color} }} {% else %} {{ 
{color:red}site.statefundocs_baseurl{color} }} {% endif %})

Is there a corresponding Chinese url variable for this?

> Translate new documenation homepage
> ---
>
> Key: FLINK-18278
> URL: https://issues.apache.org/jira/browse/FLINK-18278
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation
>Reporter: Seth Wiesman
>Assignee: zhangzhanhua
>Priority: Major
>
> Sync changes with FLINK-17981





[jira] [Updated] (FLINK-15397) Streaming and batch has different value in the case of count function

2020-06-22 Thread xiaojin.wy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaojin.wy updated FLINK-15397:
---
Fix Version/s: 1.11.0
Affects Version/s: 1.11.0

> Streaming and batch has different value in the case of count function
> -
>
> Key: FLINK-15397
> URL: https://issues.apache.org/jira/browse/FLINK-15397
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0, 1.11.0
>Reporter: xiaojin.wy
>Priority: Major
> Fix For: 1.11.0
>
>
> *The sql is:*
> CREATE TABLE `testdata` (
>   a INT,
>   b INT
> ) WITH (
>   
> 'connector.path'='/defender_test_data/daily_regression_batch_spark_1.10/test_group_agg/sources/testdata.csv',
>   'format.empty-column-as-null'='true',
>   'format.field-delimiter'='|',
>   'connector.type'='filesystem',
>   'format.derive-schema'='true',
>   'format.type'='csv'
> );
> SELECT COUNT(1) FROM testdata WHERE false;
> If the execution type is batch, the result is 0, but if it is streaming, no 
> value is produced.
> *The configuration is:*
> execution:
>   planner: blink
>   type: streaming
> *The input data is:*
> {code:java}
> 1|1
> 1|2
> 2|1
> 2|2
> 3|1
> 3|2
> |1
> 3|
> |
> {code}





[jira] [Commented] (FLINK-18371) NPE of "org.apache.flink.table.data.util.DataFormatConverters$BigDecimalConverter.toExternalImpl(DataFormatConverters.java:680)"

2020-06-22 Thread Zhijiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142588#comment-17142588
 ] 

Zhijiang commented on FLINK-18371:
--

[~caoshaokan] [~libenchao], if this really is a bug, could you evaluate whether 
it is a blocker for the release? The current priority is `Major`.
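As a rough, hypothetical illustration of the reported NPE pattern (the method below is not the real `DataFormatConverters` API): a converter that unconditionally dereferences its input fails on SQL NULL values, such as rows matched by `csv.null-literal`, whereas a null-safe variant propagates the null.

```java
import java.math.BigDecimal;

// Hypothetical illustration only: "toExternalSafe" is an invented name, not
// the actual DataFormatConverters API. A converter that calls a method on its
// input without a null check throws NullPointerException on SQL NULL values;
// the null-safe variant below propagates null instead.
public class DecimalConverterDemo {

    /** Null-safe conversion: propagate SQL NULL instead of crashing. */
    static BigDecimal toExternalSafe(Object internal) {
        return internal == null ? null : new BigDecimal(internal.toString());
    }

    public static void main(String[] args) {
        System.out.println(toExternalSafe("238")); // prints 238
        System.out.println(toExternalSafe(null));  // prints null
    }
}
```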

> NPE of 
> "org.apache.flink.table.data.util.DataFormatConverters$BigDecimalConverter.toExternalImpl(DataFormatConverters.java:680)"
> 
>
> Key: FLINK-18371
> URL: https://issues.apache.org/jira/browse/FLINK-18371
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0
> Environment: I use the sql-gateway to run this sql.
> *The sql is:*
> CREATE TABLE `src` (
>   key bigint,
>   v varchar
> ) WITH (
>   'connector'='filesystem',
>   'csv.field-delimiter'='|',
>   
> 'path'='/defender_test_data/daily_regression_stream_hive_1.10/test_cast/sources/src.csv',
>   'csv.null-literal'='',
>   'format'='csv'
> )
> select
> cast(key as decimal(10,2)) as c1,
> cast(key as char(10)) as c2,
> cast(key as varchar(10)) as c3
> from src
> order by c1, c2, c3
> limit 1
> *The input data is:*
> 238|val_238
> 86|val_86
> 311|val_311
> 27|val_27
> 165|val_165
> 409|val_409
> 255|val_255
> 278|val_278
> 98|val_98
> 484|val_484
> 265|val_265
> 193|val_193
> 401|val_401
> 150|val_150
> 273|val_273
> 224|val_224
> 369|val_369
> 66|val_66
> 128|val_128
> 213|val_213
> 146|val_146
> 406|val_406
> 429|val_429
> 374|val_374
> 152|val_152
> 469|val_469
> 145|val_145
> 495|val_495
> 37|val_37
> 327|val_327
> 281|val_281
> 277|val_277
> 209|val_209
> 15|val_15
> 82|val_82
> 403|val_403
> 166|val_166
> 417|val_417
> 430|val_430
> 252|val_252
> 292|val_292
> 219|val_219
> 287|val_287
> 153|val_153
> 193|val_193
> 338|val_338
> 446|val_446
> 459|val_459
> 394|val_394
> 237|val_237
> 482|val_482
> 174|val_174
> 413|val_413
> 494|val_494
> 207|val_207
> 199|val_199
> 466|val_466
> 208|val_208
> 174|val_174
> 399|val_399
> 396|val_396
> 247|val_247
> 417|val_417
> 489|val_489
> 162|val_162
> 377|val_377
> 397|val_397
> 309|val_309
> 365|val_365
> 266|val_266
> 439|val_439
> 342|val_342
> 367|val_367
> 325|val_325
> 167|val_167
> 195|val_195
> 475|val_475
> 17|val_17
> 113|val_113
> 155|val_155
> 203|val_203
> 339|val_339
> 0|val_0
> 455|val_455
> 128|val_128
> 311|val_311
> 316|val_316
> 57|val_57
> 302|val_302
> 205|val_205
> 149|val_149
> 438|val_438
> 345|val_345
> 129|val_129
> 170|val_170
> 20|val_20
> 489|val_489
> 157|val_157
> 378|val_378
> 221|val_221
> 92|val_92
> 111|val_111
> 47|val_47
> 72|val_72
> 4|val_4
> 280|val_280
> 35|val_35
> 427|val_427
> 277|val_277
> 208|val_208
> 356|val_356
> 399|val_399
> 169|val_169
> 382|val_382
> 498|val_498
> 125|val_125
> 386|val_386
> 437|val_437
> 469|val_469
> 192|val_192
> 286|val_286
> 187|val_187
> 176|val_176
> 54|val_54
> 459|val_459
> 51|val_51
> 138|val_138
> 103|val_103
> 239|val_239
> 213|val_213
> 216|val_216
> 430|val_430
> 278|val_278
> 176|val_176
> 289|val_289
> 221|val_221
> 65|val_65
> 318|val_318
> 332|val_332
> 311|val_311
> 275|val_275
> 137|val_137
> 241|val_241
> 83|val_83
> 333|val_333
> 180|val_180
> 284|val_284
> 12|val_12
> 230|val_230
> 181|val_181
> 67|val_67
> 260|val_260
> 404|val_404
> 384|val_384
> 489|val_489
> 353|val_353
> 373|val_373
> 272|val_272
> 138|val_138
> 217|val_217
> 84|val_84
> 348|val_348
> 466|val_466
> 58|val_58
> 8|val_8
> 411|val_411
> 230|val_230
> 208|val_208
> 348|val_348
> 24|val_24
> 463|val_463
> 431|val_431
> 179|val_179
> 172|val_172
> 42|val_42
> 129|val_129
> 158|val_158
> 119|val_119
> 496|val_496
> 0|val_0
> 322|val_322
> 197|val_197
> 468|val_468
> 393|val_393
> 454|val_454
> 100|val_100
> 298|val_298
> 199|val_199
> 191|val_191
> 418|val_418
> 96|val_96
> 26|val_26
> 165|val_165
> 327|val_327
> 230|val_230
> 205|val_205
> 120|val_120
> 131|val_131
> 51|val_51
> 404|val_404
> 43|val_43
> 436|val_436
> 156|val_156
> 469|val_469
> 468|val_468
> 308|val_308
> 95|val_95
> 196|val_196
> 288|val_288
> 481|val_481
> 457|val_457
> 98|val_98
> 282|val_282
> 197|val_197
> 187|val_187
> 318|val_318
> 318|val_318
> 409|val_409
> 470|val_470
> 137|val_137
> 369|val_369
> 316|val_316
> 169|val_169
> 413|val_413
> 85|val_85
> 77|val_77
> 0|val_0
> 490|val_490
> 87|val_87
> 364|val_364
> 179|val_179
> 118|val_118
> 134|val_134
> 395|val_395
> 282|val_282
> 138|val_138
> 238|val_238
> 419|val_419
> 15|val_15
> 118|val_118
> 72|val_72
> 90|val_90
> 307|val_307
> 19|val_19
> 435|val_435
> 10|val_10
> 277|val_277
> 273|val_273
> 306|val_306
> 224|val_224
> 309|val_309
> 389|val_389
> 327|val_327
> 242|val_242
> 

[jira] [Commented] (FLINK-18400) A streaming sql has "java.lang.NegativeArraySizeException"

2020-06-22 Thread Zhijiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142584#comment-17142584
 ] 

Zhijiang commented on FLINK-18400:
--

[~jark] [~ykt836], could you check whether this falls within your scope and 
whether it should be a blocker for the release?

> A streaming sql has "java.lang.NegativeArraySizeException"
> --
>
> Key: FLINK-18400
> URL: https://issues.apache.org/jira/browse/FLINK-18400
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0
> Environment: I use sql-gateway to run the sql.
> *The sql is:*
> SELECT x, round (x ) AS int8_value FROM (VALUES CAST (-2.5 AS DECIMAL(6,1)), 
> CAST(-1.5 AS DECIMAL(6,1)), CAST(-0.5 AS DECIMAL(6,1)), CAST(0.0 AS 
> DECIMAL(6,1)), CAST(0.5 AS DECIMAL(6,1)), CAST(1.5 AS DECIMAL(6,1)), CAST(2.5 
> AS DECIMAL(6,1))) t(x);
> The environment is streaming.
>Reporter: xiaojin.wy
>Priority: Major
> Fix For: 1.11.0
>
>
> 2020-06-22 08:07:31
> org.apache.flink.runtime.JobException: Recovery is suppressed by 
> NoRestartBackoffTimeStrategy
>   at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:116)
>   at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:78)
>   at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:192)
>   at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:185)
>   at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:179)
>   at 
> org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:503)
>   at 
> org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:386)
>   at sun.reflect.GeneratedMethodAccessor27.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:284)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:199)
>   at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
>   at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
>   at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
>   at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
>   at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
>   at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
>   at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
>   at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
>   at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
>   at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
>   at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
>   at akka.actor.ActorCell.invoke(ActorCell.scala:561)
>   at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
>   at akka.dispatch.Mailbox.run(Mailbox.scala:225)
>   at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
>   at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: java.lang.NegativeArraySizeException
>   at 
> org.apache.flink.table.data.binary.BinarySegmentUtils.readDecimalData(BinarySegmentUtils.java:1031)
>   at 
> org.apache.flink.table.data.binary.BinaryRowData.getDecimal(BinaryRowData.java:341)
>   at 
> org.apache.flink.table.data.util.DataFormatConverters$BigDecimalConverter.toExternalImpl(DataFormatConverters.java:685)
>   at 
> org.apache.flink.table.data.util.DataFormatConverters$BigDecimalConverter.toExternalImpl(DataFormatConverters.java:661)
>   at 
> org.apache.flink.table.data.util.DataFormatConverters$DataFormatConverter.toExternal(DataFormatConverters.java:401)
>   at 
> org.apache.flink.table.data.util.DataFormatConverters$RowConverter.toExternalImpl(DataFormatConverters.java:1425)
>   at 
> 
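As a cross-check on the query in the issue above, the expected output can be computed with plain `java.math.BigDecimal`. This is only a sketch, under the assumption that SQL `ROUND` to scale 0 uses round-half-away-from-zero semantics, which `RoundingMode.HALF_UP` implements; the class and method names are illustrative, not Flink API:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class RoundCheck {
    // Round a DECIMAL(6,1) value to scale 0, half away from zero (SQL ROUND).
    static BigDecimal round(String x) {
        return new BigDecimal(x).setScale(0, RoundingMode.HALF_UP);
    }

    public static void main(String[] args) {
        // The seven literals from the failing query's VALUES clause.
        String[] inputs = {"-2.5", "-1.5", "-0.5", "0.0", "0.5", "1.5", "2.5"};
        for (String x : inputs) {
            // e.g. -2.5 -> -3 and 2.5 -> 3 (ties round away from zero)
            System.out.println(x + " -> " + round(x));
        }
    }
}
```

With these semantics the query should simply return seven rows, so the `NegativeArraySizeException` points at decimal (de)serialization in the runtime rather than at the rounding itself.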

[GitHub] [flink] flinkbot commented on pull request #12749: [FLINK-18411][tests] Fix CollectionExecutorTest failed to compiled in release-1.11

2020-06-22 Thread GitBox


flinkbot commented on pull request #12749:
URL: https://github.com/apache/flink/pull/12749#issuecomment-647881473


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit cd3ca6ba8e6cd4d00ce2dfb1c48eeb4e3f81392f (Tue Jun 23 
03:13:14 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18411) CollectionExecutorTest failed to compiled in release-1.11

2020-06-22 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17142582#comment-17142582
 ] 

Jark Wu commented on FLINK-18411:
-

I will help to fix this. 

> CollectionExecutorTest failed to compiled in release-1.11
> -
>
> Key: FLINK-18411
> URL: https://issues.apache.org/jira/browse/FLINK-18411
> Project: Flink
>  Issue Type: Task
>  Components: Tests
>Affects Versions: 1.11.0
> Environment: The command is "/home/admin/apache-maven-3.2.5/bin/mvn 
> clean install -B -U -DskipTests -Drat.skip=true -Dcheckstyle.skip=true"
>Reporter: xiaojin.wy
>Assignee: Jark Wu
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.11.0
>
> Attachments: image-2020-06-23-10-00-45-519.png
>
>
>  !image-2020-06-23-10-00-45-519.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18411) CollectionExecutorTest failed to compiled in release-1.11

2020-06-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-18411:
---
Labels: pull-request-available  (was: )

> CollectionExecutorTest failed to compiled in release-1.11
> -
>
> Key: FLINK-18411
> URL: https://issues.apache.org/jira/browse/FLINK-18411
> Project: Flink
>  Issue Type: Task
>  Components: Tests
>Affects Versions: 1.11.0
> Environment: The command is "/home/admin/apache-maven-3.2.5/bin/mvn 
> clean install -B -U -DskipTests -Drat.skip=true -Dcheckstyle.skip=true"
>Reporter: xiaojin.wy
>Assignee: Jark Wu
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.11.0
>
> Attachments: image-2020-06-23-10-00-45-519.png
>
>
>  !image-2020-06-23-10-00-45-519.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong commented on pull request #12749: [FLINK-18411][tests] Fix CollectionExecutorTest failed to compiled in release-1.11

2020-06-22 Thread GitBox


wuchong commented on pull request #12749:
URL: https://github.com/apache/flink/pull/12749#issuecomment-647881233


   Could you help to review this @kl0u @aljoscha ?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong opened a new pull request #12749: [FLINK-18411][tests] Fix CollectionExecutorTest failed to compiled in release-1.11

2020-06-22 Thread GitBox


wuchong opened a new pull request #12749:
URL: https://github.com/apache/flink/pull/12749


   
   
   
   
   ## What is the purpose of the change
   
   I think this is caused by "[FLINK-18352] Make 
DefaultClusterClientServiceLoader/DefaultExecutorServiceLoader thread-safe", 
commit id: eeeff7a5. 
   
   ## Brief change log
   
   - Use `new DefaultExecutorServiceLoader()` instead of 
`DefaultExecutorServiceLoader.INSTANCE` which has been removed in eeeff7a5.
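The reason a shared singleton loader was a problem in the first place: `java.util.ServiceLoader` instances are documented as not safe for use by multiple concurrent threads, since they cache a lazy provider iterator internally. A minimal sketch of the pattern behind the fix — create a fresh loader per use instead of sharing one — using `java.sql.Driver` only as a stand-in service type (the Flink classes are not needed to see the idea):

```java
import java.sql.Driver;
import java.util.ServiceLoader;

public class LoaderSketch {
    // Anti-pattern: a single ServiceLoader shared by all threads. Its
    // internal lazy iterator is not thread-safe, so concurrent iteration
    // can interfere.
    static final ServiceLoader<Driver> SHARED = ServiceLoader.load(Driver.class);

    // The fix, in spirit: hand every caller its own loader, so each one
    // iterates over its own lazily-built provider cache.
    static ServiceLoader<Driver> freshLoader() {
        return ServiceLoader.load(Driver.class);
    }

    public static void main(String[] args) {
        // ServiceLoader.load(...) creates a new, independent instance each time.
        System.out.println(freshLoader() != freshLoader());
    }
}
```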
   
   ## Verifying this change
   
   N/A.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18411) CollectionExecutorTest failed to compiled in release-1.11

2020-06-22 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17142581#comment-17142581
 ] 

Jark Wu commented on FLINK-18411:
-

I think this is caused by [FLINK-18352], which didn't update 
{{CollectionExecutorTest}}. This file doesn't exist in the master branch. 

> CollectionExecutorTest failed to compiled in release-1.11
> -
>
> Key: FLINK-18411
> URL: https://issues.apache.org/jira/browse/FLINK-18411
> Project: Flink
>  Issue Type: Task
>  Components: Tests
>Affects Versions: 1.11.0
> Environment: The command is "/home/admin/apache-maven-3.2.5/bin/mvn 
> clean install -B -U -DskipTests -Drat.skip=true -Dcheckstyle.skip=true"
>Reporter: xiaojin.wy
>Assignee: Jark Wu
>Priority: Blocker
> Fix For: 1.11.0
>
> Attachments: image-2020-06-23-10-00-45-519.png
>
>
>  !image-2020-06-23-10-00-45-519.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18411) CollectionExecutorTest failed to compiled in release-1.11

2020-06-22 Thread Zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijiang updated FLINK-18411:
-
Issue Type: Task  (was: Bug)

> CollectionExecutorTest failed to compiled in release-1.11
> -
>
> Key: FLINK-18411
> URL: https://issues.apache.org/jira/browse/FLINK-18411
> Project: Flink
>  Issue Type: Task
>  Components: Tests
>Affects Versions: 1.11.0
> Environment: The command is "/home/admin/apache-maven-3.2.5/bin/mvn 
> clean install -B -U -DskipTests -Drat.skip=true -Dcheckstyle.skip=true"
>Reporter: xiaojin.wy
>Assignee: Jark Wu
>Priority: Blocker
> Fix For: 1.11.0
>
> Attachments: image-2020-06-23-10-00-45-519.png
>
>
>  !image-2020-06-23-10-00-45-519.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18411) CollectionExecutorTest failed to compiled in release-1.11

2020-06-22 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-18411:

Summary: CollectionExecutorTest failed to compiled in release-1.11  (was: 
release-1.11 compile failed)

> CollectionExecutorTest failed to compiled in release-1.11
> -
>
> Key: FLINK-18411
> URL: https://issues.apache.org/jira/browse/FLINK-18411
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.11.0
> Environment: The command is "/home/admin/apache-maven-3.2.5/bin/mvn 
> clean install -B -U -DskipTests -Drat.skip=true -Dcheckstyle.skip=true"
>Reporter: xiaojin.wy
>Priority: Major
> Fix For: 1.11.0
>
> Attachments: image-2020-06-23-10-00-45-519.png
>
>
>  !image-2020-06-23-10-00-45-519.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-18411) CollectionExecutorTest failed to compiled in release-1.11

2020-06-22 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-18411:
---

Assignee: Jark Wu

> CollectionExecutorTest failed to compiled in release-1.11
> -
>
> Key: FLINK-18411
> URL: https://issues.apache.org/jira/browse/FLINK-18411
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.11.0
> Environment: The command is "/home/admin/apache-maven-3.2.5/bin/mvn 
> clean install -B -U -DskipTests -Drat.skip=true -Dcheckstyle.skip=true"
>Reporter: xiaojin.wy
>Assignee: Jark Wu
>Priority: Blocker
> Fix For: 1.11.0
>
> Attachments: image-2020-06-23-10-00-45-519.png
>
>
>  !image-2020-06-23-10-00-45-519.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18411) CollectionExecutorTest failed to compiled in release-1.11

2020-06-22 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-18411:

Priority: Blocker  (was: Major)

> CollectionExecutorTest failed to compiled in release-1.11
> -
>
> Key: FLINK-18411
> URL: https://issues.apache.org/jira/browse/FLINK-18411
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.11.0
> Environment: The command is "/home/admin/apache-maven-3.2.5/bin/mvn 
> clean install -B -U -DskipTests -Drat.skip=true -Dcheckstyle.skip=true"
>Reporter: xiaojin.wy
>Priority: Blocker
> Fix For: 1.11.0
>
> Attachments: image-2020-06-23-10-00-45-519.png
>
>
>  !image-2020-06-23-10-00-45-519.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #12726: [FLINK-14938][Connectors / ElasticSearch] Fix ConcurrentModificationException when Flink elasticsearch failure handler re-add indexre

2020-06-22 Thread GitBox


flinkbot edited a comment on pull request #12726:
URL: https://github.com/apache/flink/pull/12726#issuecomment-646966672


   
   ## CI report:
   
   * b62b537805ff8e3115bfaee1a5eb29c1b01cb447 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3933)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-17845) Can't remove a table connector property with ALTER TABLE

2020-06-22 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17142575#comment-17142575
 ] 

Jark Wu commented on FLINK-17845:
-

> I thought users can add arbitrary properties for a table, no?

This is determined by the connector implementation. AFAIK, all the built-in 
connectors (except {{FileSystem}} connector) doesn't allow unsupported 
properties. 

> Can't remove a table connector property with ALTER TABLE
> 
>
> Key: FLINK-17845
> URL: https://issues.apache.org/jira/browse/FLINK-17845
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: Fabian Hueske
>Priority: Major
>
> It is not possible to remove an existing table property from a table.
> Looking at the [source 
> code|https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/sqlexec/SqlToOperationConverter.java#L295]
>  this seems to be the intended semantics, but it seems counter-intuitive to 
> me.
> If I create a table with the following statement:
> {code}
> CREATE TABLE `testTable` (
>   id INT
> )
> WITH (
> 'connector.type' = 'kafka',
> 'connector.version' = 'universal',
> 'connector.topicX' = 'test',  -- Woops, I made a typo here
> [...]
> )
> {code}
> The statement will be successfully executed. However, the table cannot be 
> used due to the typo.
> Fixing the typo with the following DDL is not possible:
> {code}
> ALTER TABLE `testTable` SET (
> 'connector.type' = 'kafka',
> 'connector.version' = 'universal',
> 'connector.topic' = 'test',  -- Fixing the typo
> )
> {code}
> because the key {{connector.topicX}} is not removed.
> Right now it seems that the only way to fix a table with an invalid key is to 
> DROP and CREATE it. I think that this use case should be supported by ALTER 
> TABLE.
> I would even argue that the expected behavior is that previous properties are 
> removed and replaced by the new properties.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
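The counter-intuitive behavior discussed in FLINK-17845 is that `ALTER TABLE ... SET` merges the given key/value pairs into the existing properties rather than replacing them, so a mistyped key like `connector.topicX` survives the "fix". The two semantics can be sketched with plain maps; the helper names here are hypothetical, not Flink API:

```java
import java.util.HashMap;
import java.util.Map;

public class AlterTableSemantics {
    // Current behavior: SET merges updates into the existing map, so keys
    // absent from the update (including typos) are kept.
    static Map<String, String> merge(Map<String, String> existing,
                                     Map<String, String> updates) {
        Map<String, String> result = new HashMap<>(existing);
        result.putAll(updates);
        return result;
    }

    // Behavior argued for in the issue: SET replaces the property map wholesale.
    static Map<String, String> replace(Map<String, String> existing,
                                       Map<String, String> updates) {
        return new HashMap<>(updates);
    }

    public static void main(String[] args) {
        Map<String, String> existing = new HashMap<>();
        existing.put("connector.topicX", "test"); // the typo from the example
        Map<String, String> updates = new HashMap<>();
        updates.put("connector.topic", "test");   // the attempted fix

        // Merge keeps the typo; wholesale replacement drops it.
        System.out.println(merge(existing, updates).containsKey("connector.topicX"));
        System.out.println(replace(existing, updates).containsKey("connector.topicX"));
    }
}
```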


[jira] [Comment Edited] (FLINK-17845) Can't remove a table connector property with ALTER TABLE

2020-06-22 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17142575#comment-17142575
 ] 

Jark Wu edited comment on FLINK-17845 at 6/23/20, 2:59 AM:
---

> I thought users can add arbitrary properties for a table, no?

This is determined by the connector implementation. AFAIK, all the built-in 
connectors (except {{FileSystem}} connector) don't allow unsupported 
properties. 


was (Author: jark):
> I thought users can add arbitrary properties for a table, no?

This is determined by the connector implementation. AFAIK, all the built-in 
connectors (except {{FileSystem}} connector) doesn't allow unsupported 
properties. 

> Can't remove a table connector property with ALTER TABLE
> 
>
> Key: FLINK-17845
> URL: https://issues.apache.org/jira/browse/FLINK-17845
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: Fabian Hueske
>Priority: Major
>
> It is not possible to remove an existing table property from a table.
> Looking at the [source 
> code|https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/sqlexec/SqlToOperationConverter.java#L295]
>  this seems to be the intended semantics, but it seems counter-intuitive to 
> me.
> If I create a table with the following statement:
> {code}
> CREATE TABLE `testTable` (
>   id INT
> )
> WITH (
> 'connector.type' = 'kafka',
> 'connector.version' = 'universal',
> 'connector.topicX' = 'test',  -- Woops, I made a typo here
> [...]
> )
> {code}
> The statement will be successfully executed. However, the table cannot be 
> used due to the typo.
> Fixing the typo with the following DDL is not possible:
> {code}
> ALTER TABLE `testTable` SET (
> 'connector.type' = 'kafka',
> 'connector.version' = 'universal',
> 'connector.topic' = 'test',  -- Fixing the typo
> )
> {code}
> because the key {{connector.topicX}} is not removed.
> Right now it seems that the only way to fix a table with an invalid key is to 
> DROP and CREATE it. I think that this use case should be supported by ALTER 
> TABLE.
> I would even argue that the expected behavior is that previous properties are 
> removed and replaced by the new properties.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-18412) JdbcFullTest failed to compile on JDK11

2020-06-22 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-18412:
---

Assignee: Jark Wu

> JdbcFullTest failed to compile on JDK11
> ---
>
> Key: FLINK-18412
> URL: https://issues.apache.org/jira/browse/FLINK-18412
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Tests
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Dian Fu
>Assignee: Jark Wu
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.11.0
>
>
> master: 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3928&view=logs&j=9fca669f-5c5f-59c7-4118-e31c641064f0&t=946871de-358d-5815-3994-8175615bc253
> release-1.11: 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3929&view=logs&j=9fca669f-5c5f-59c7-4118-e31c641064f0&t=946871de-358d-5815-3994-8175615bc253
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3929&view=logs&j=6caf31d6-847a-526e-9624-468e053467d6
> {code}
> 2020-06-22T20:19:50.2157534Z [INFO] 
> -
> 2020-06-22T20:19:50.2158031Z [ERROR] COMPILATION ERROR : 
> 2020-06-22T20:19:50.2158826Z [INFO] 
> -
> 2020-06-22T20:19:50.2159987Z [ERROR] 
> /__w/2/s/flink-connectors/flink-connector-jdbc/src/test/java/org/apache/flink/connector/jdbc/internal/JdbcFullTest.java:[137,51]
>  cannot find symbol
> 2020-06-22T20:19:50.2160676Z   symbol:   variable f1
> 2020-06-22T20:19:50.2161236Z   location: variable tuple2 of type 
> java.lang.Object
> 2020-06-22T20:19:50.2163372Z [ERROR] 
> /__w/2/s/flink-connectors/flink-connector-jdbc/src/test/java/org/apache/flink/connector/jdbc/internal/JdbcFullTest.java:[136,33]
>  incompatible types: cannot infer functional interface descriptor for 
> org.apache.flink.connector.jdbc.internal.JdbcBatchingOutputFormat.StatementExecutorFactory
> 2020-06-22T20:19:50.2164788Z [INFO] 2 errors 
> 2020-06-22T20:19:50.2165569Z [INFO] 
> -
> 2020-06-22T20:19:50.2166430Z [INFO] 
> 
> 2020-06-22T20:19:50.2167374Z [INFO] Reactor Summary:
> 2020-06-22T20:19:50.2167713Z [INFO] 
> 2020-06-22T20:19:50.2168486Z [INFO] force-shading 
> .. SUCCESS [  5.905 s]
> 2020-06-22T20:19:50.2169067Z [INFO] flink 
> .. SUCCESS [ 10.173 s]
> 2020-06-22T20:19:50.2169978Z [INFO] flink-annotations 
> .. SUCCESS [  1.637 s]
> 2020-06-22T20:19:50.2170980Z [INFO] flink-test-utils-parent 
>  SUCCESS [  0.117 s]
> 2020-06-22T20:19:50.2171877Z [INFO] flink-test-utils-junit 
> . SUCCESS [  1.224 s]
> 2020-06-22T20:19:50.2172896Z [INFO] flink-metrics 
> .. SUCCESS [  0.101 s]
> 2020-06-22T20:19:50.2173788Z [INFO] flink-metrics-core 
> . SUCCESS [  1.726 s]
> 2020-06-22T20:19:50.2175058Z [INFO] flink-core 
> . SUCCESS [ 29.372 s]
> 2020-06-22T20:19:50.2175982Z [INFO] flink-java 
> . SUCCESS [  5.577 s]
> 2020-06-22T20:19:50.2176868Z [INFO] flink-queryable-state 
> .. SUCCESS [  0.085 s]
> 2020-06-22T20:19:50.2177760Z [INFO] flink-queryable-state-client-java 
> .. SUCCESS [  1.619 s]
> 2020-06-22T20:19:50.2178600Z [INFO] flink-filesystems 
> .. SUCCESS [  0.105 s]
> 2020-06-22T20:19:50.2179500Z [INFO] flink-hadoop-fs 
>  SUCCESS [ 20.792 s]
> 2020-06-22T20:19:50.2180402Z [INFO] flink-runtime 
> .. SUCCESS [01:51 min]
> 2020-06-22T20:19:50.2181462Z [INFO] flink-scala 
>  SUCCESS [ 36.797 s]
> 2020-06-22T20:19:50.2182326Z [INFO] flink-mapr-fs 
> .. SUCCESS [  0.848 s]
> 2020-06-22T20:19:50.2183372Z [INFO] flink-filesystems :: 
> flink-fs-hadoop-shaded  SUCCESS [  4.422 s]
> 2020-06-22T20:19:50.2184407Z [INFO] flink-s3-fs-base 
> ... SUCCESS [  2.085 s]
> 2020-06-22T20:19:50.2185259Z [INFO] flink-s3-fs-hadoop 
> . SUCCESS [  6.051 s]
> 2020-06-22T20:19:50.2186131Z [INFO] flink-s3-fs-presto 
> . SUCCESS [ 10.325 s]
> 2020-06-22T20:19:50.2186990Z [INFO] flink-swift-fs-hadoop 
> .. SUCCESS [ 22.021 s]
> 2020-06-22T20:19:50.2187820Z [INFO] flink-oss-fs-hadoop 
>  SUCCESS [  6.407 s]
> 
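The `cannot find symbol: variable f1 ... of type java.lang.Object` and `cannot infer functional interface descriptor` pair above is the usual shape of a generic-inference failure: when the target type of a lambda is under-constrained (e.g. flows through a raw or `Object`-typed variable), javac cannot determine the lambda's parameter types, so member accesses like `.f1` fall back to `Object` and fail. A minimal sketch of the pattern and the usual fix — pinning the type argument explicitly — using a hypothetical `Factory` interface rather than the actual Flink `StatementExecutorFactory`:

```java
public class InferenceSketch {
    // A generic functional interface, analogous in shape to the factory
    // type in the failing test.
    interface Factory<T> {
        T create(String config);
    }

    static <T> T build(Factory<T> factory, String config) {
        return factory.create(config);
    }

    public static void main(String[] args) {
        // If `build`'s type argument were left to flow through a raw or
        // Object-typed context, javac could not infer the lambda's target
        // type ("cannot infer functional interface descriptor"). Supplying
        // an explicit type witness pins it down:
        Integer len = InferenceSketch.<Integer>build(cfg -> cfg.length(), "jdbc:derby:mem");
        System.out.println(len);
    }
}
```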

[jira] [Closed] (FLINK-18412) JdbcFullTest failed to compile on JDK11

2020-06-22 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-18412.
---
Resolution: Fixed

- master (1.12.0): de8a25ae597845f688376bde51dda5e419bf2085
- 1.11.0: 515aff8195beeb88ba9c4de555f5085b562800ca

> JdbcFullTest failed to compile on JDK11
> ---
>
> Key: FLINK-18412
> URL: https://issues.apache.org/jira/browse/FLINK-18412
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Tests
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Dian Fu
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.11.0
>
>
> master: 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3928&view=logs&j=9fca669f-5c5f-59c7-4118-e31c641064f0&t=946871de-358d-5815-3994-8175615bc253
> release-1.11: 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3929&view=logs&j=9fca669f-5c5f-59c7-4118-e31c641064f0&t=946871de-358d-5815-3994-8175615bc253
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3929&view=logs&j=6caf31d6-847a-526e-9624-468e053467d6
> {code}
> 2020-06-22T20:19:50.2157534Z [INFO] 
> -
> 2020-06-22T20:19:50.2158031Z [ERROR] COMPILATION ERROR : 
> 2020-06-22T20:19:50.2158826Z [INFO] 
> -
> 2020-06-22T20:19:50.2159987Z [ERROR] 
> /__w/2/s/flink-connectors/flink-connector-jdbc/src/test/java/org/apache/flink/connector/jdbc/internal/JdbcFullTest.java:[137,51]
>  cannot find symbol
> 2020-06-22T20:19:50.2160676Z   symbol:   variable f1
> 2020-06-22T20:19:50.2161236Z   location: variable tuple2 of type 
> java.lang.Object
> 2020-06-22T20:19:50.2163372Z [ERROR] 
> /__w/2/s/flink-connectors/flink-connector-jdbc/src/test/java/org/apache/flink/connector/jdbc/internal/JdbcFullTest.java:[136,33]
>  incompatible types: cannot infer functional interface descriptor for 
> org.apache.flink.connector.jdbc.internal.JdbcBatchingOutputFormat.StatementExecutorFactory
> 2020-06-22T20:19:50.2164788Z [INFO] 2 errors 
> 2020-06-22T20:19:50.2165569Z [INFO] 
> -
> 2020-06-22T20:19:50.2166430Z [INFO] 
> 
> 2020-06-22T20:19:50.2167374Z [INFO] Reactor Summary:
> 2020-06-22T20:19:50.2167713Z [INFO] 
> 2020-06-22T20:19:50.2168486Z [INFO] force-shading 
> .. SUCCESS [  5.905 s]
> 2020-06-22T20:19:50.2169067Z [INFO] flink 
> .. SUCCESS [ 10.173 s]
> 2020-06-22T20:19:50.2169978Z [INFO] flink-annotations 
> .. SUCCESS [  1.637 s]
> 2020-06-22T20:19:50.2170980Z [INFO] flink-test-utils-parent 
>  SUCCESS [  0.117 s]
> 2020-06-22T20:19:50.2171877Z [INFO] flink-test-utils-junit 
> . SUCCESS [  1.224 s]
> 2020-06-22T20:19:50.2172896Z [INFO] flink-metrics 
> .. SUCCESS [  0.101 s]
> 2020-06-22T20:19:50.2173788Z [INFO] flink-metrics-core 
> . SUCCESS [  1.726 s]
> 2020-06-22T20:19:50.2175058Z [INFO] flink-core 
> . SUCCESS [ 29.372 s]
> 2020-06-22T20:19:50.2175982Z [INFO] flink-java 
> . SUCCESS [  5.577 s]
> 2020-06-22T20:19:50.2176868Z [INFO] flink-queryable-state 
> .. SUCCESS [  0.085 s]
> 2020-06-22T20:19:50.2177760Z [INFO] flink-queryable-state-client-java 
> .. SUCCESS [  1.619 s]
> 2020-06-22T20:19:50.2178600Z [INFO] flink-filesystems 
> .. SUCCESS [  0.105 s]
> 2020-06-22T20:19:50.2179500Z [INFO] flink-hadoop-fs 
>  SUCCESS [ 20.792 s]
> 2020-06-22T20:19:50.2180402Z [INFO] flink-runtime 
> .. SUCCESS [01:51 min]
> 2020-06-22T20:19:50.2181462Z [INFO] flink-scala 
>  SUCCESS [ 36.797 s]
> 2020-06-22T20:19:50.2182326Z [INFO] flink-mapr-fs 
> .. SUCCESS [  0.848 s]
> 2020-06-22T20:19:50.2183372Z [INFO] flink-filesystems :: 
> flink-fs-hadoop-shaded  SUCCESS [  4.422 s]
> 2020-06-22T20:19:50.2184407Z [INFO] flink-s3-fs-base 
> ... SUCCESS [  2.085 s]
> 2020-06-22T20:19:50.2185259Z [INFO] flink-s3-fs-hadoop 
> . SUCCESS [  6.051 s]
> 2020-06-22T20:19:50.2186131Z [INFO] flink-s3-fs-presto 
> . SUCCESS [ 10.325 s]
> 2020-06-22T20:19:50.2186990Z [INFO] flink-swift-fs-hadoop 
> .. SUCCESS [ 22.021 s]
> 2020-06-22T20:19:50.2187820Z [INFO] 

[jira] [Commented] (FLINK-18412) JdbcFullTest failed to compile on JDK11

2020-06-22 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17142572#comment-17142572
 ] 

Jark Wu commented on FLINK-18412:
-

Thanks for reporting this [~dian.fu].

> JdbcFullTest failed to compile on JDK11
> ---
>
> Key: FLINK-18412
> URL: https://issues.apache.org/jira/browse/FLINK-18412
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Tests
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Dian Fu
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.11.0
>
>
> master: 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3928&view=logs&j=9fca669f-5c5f-59c7-4118-e31c641064f0&t=946871de-358d-5815-3994-8175615bc253
> release-1.11: 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3929&view=logs&j=9fca669f-5c5f-59c7-4118-e31c641064f0&t=946871de-358d-5815-3994-8175615bc253
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3929&view=logs&j=6caf31d6-847a-526e-9624-468e053467d6
> {code}
> 2020-06-22T20:19:50.2157534Z [INFO] 
> -
> 2020-06-22T20:19:50.2158031Z [ERROR] COMPILATION ERROR : 
> 2020-06-22T20:19:50.2158826Z [INFO] 
> -
> 2020-06-22T20:19:50.2159987Z [ERROR] 
> /__w/2/s/flink-connectors/flink-connector-jdbc/src/test/java/org/apache/flink/connector/jdbc/internal/JdbcFullTest.java:[137,51]
>  cannot find symbol
> 2020-06-22T20:19:50.2160676Z   symbol:   variable f1
> 2020-06-22T20:19:50.2161236Z   location: variable tuple2 of type 
> java.lang.Object
> 2020-06-22T20:19:50.2163372Z [ERROR] 
> /__w/2/s/flink-connectors/flink-connector-jdbc/src/test/java/org/apache/flink/connector/jdbc/internal/JdbcFullTest.java:[136,33]
>  incompatible types: cannot infer functional interface descriptor for 
> org.apache.flink.connector.jdbc.internal.JdbcBatchingOutputFormat.StatementExecutorFactory
> 2020-06-22T20:19:50.2164788Z [INFO] 2 errors 
> 2020-06-22T20:19:50.2165569Z [INFO] 
> -
> 2020-06-22T20:19:50.2166430Z [INFO] 
> 
> 2020-06-22T20:19:50.2167374Z [INFO] Reactor Summary:
> 2020-06-22T20:19:50.2167713Z [INFO] 
> 2020-06-22T20:19:50.2168486Z [INFO] force-shading 
> .. SUCCESS [  5.905 s]
> 2020-06-22T20:19:50.2169067Z [INFO] flink 
> .. SUCCESS [ 10.173 s]
> 2020-06-22T20:19:50.2169978Z [INFO] flink-annotations 
> .. SUCCESS [  1.637 s]
> 2020-06-22T20:19:50.2170980Z [INFO] flink-test-utils-parent 
>  SUCCESS [  0.117 s]
> 2020-06-22T20:19:50.2171877Z [INFO] flink-test-utils-junit 
> . SUCCESS [  1.224 s]
> 2020-06-22T20:19:50.2172896Z [INFO] flink-metrics 
> .. SUCCESS [  0.101 s]
> 2020-06-22T20:19:50.2173788Z [INFO] flink-metrics-core 
> . SUCCESS [  1.726 s]
> 2020-06-22T20:19:50.2175058Z [INFO] flink-core 
> . SUCCESS [ 29.372 s]
> 2020-06-22T20:19:50.2175982Z [INFO] flink-java 
> . SUCCESS [  5.577 s]
> 2020-06-22T20:19:50.2176868Z [INFO] flink-queryable-state 
> .. SUCCESS [  0.085 s]
> 2020-06-22T20:19:50.2177760Z [INFO] flink-queryable-state-client-java 
> .. SUCCESS [  1.619 s]
> 2020-06-22T20:19:50.2178600Z [INFO] flink-filesystems 
> .. SUCCESS [  0.105 s]
> 2020-06-22T20:19:50.2179500Z [INFO] flink-hadoop-fs 
>  SUCCESS [ 20.792 s]
> 2020-06-22T20:19:50.2180402Z [INFO] flink-runtime 
> .. SUCCESS [01:51 min]
> 2020-06-22T20:19:50.2181462Z [INFO] flink-scala 
>  SUCCESS [ 36.797 s]
> 2020-06-22T20:19:50.2182326Z [INFO] flink-mapr-fs 
> .. SUCCESS [  0.848 s]
> 2020-06-22T20:19:50.2183372Z [INFO] flink-filesystems :: 
> flink-fs-hadoop-shaded  SUCCESS [  4.422 s]
> 2020-06-22T20:19:50.2184407Z [INFO] flink-s3-fs-base 
> ... SUCCESS [  2.085 s]
> 2020-06-22T20:19:50.2185259Z [INFO] flink-s3-fs-hadoop 
> . SUCCESS [  6.051 s]
> 2020-06-22T20:19:50.2186131Z [INFO] flink-s3-fs-presto 
> . SUCCESS [ 10.325 s]
> 2020-06-22T20:19:50.2186990Z [INFO] flink-swift-fs-hadoop 
> .. SUCCESS [ 22.021 s]
> 2020-06-22T20:19:50.2187820Z [INFO] flink-oss-fs-hadoop 
>  
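The "cannot find symbol: variable f1" and "cannot infer functional interface descriptor" errors quoted above are characteristic of a lambda whose target type is raw or erased, so its parameter degrades to `Object`. A minimal, self-contained illustration of that failure mode (class and field names here are hypothetical, not Flink's actual types):

```java
import java.util.function.Function;

public class InferenceDemo {
    // A simple pair type standing in for Flink's Tuple2.
    static class Pair<A, B> {
        final A f0; final B f1;
        Pair(A f0, B f1) { this.f0 = f0; this.f1 = f1; }
    }

    // With explicit type arguments the lambda parameter is typed and `.f1` resolves.
    static Function<Pair<String, Integer>, Integer> typed = pair -> pair.f1;

    // If the generic parameters were omitted (a raw Function), the parameter would
    // be inferred as Object and `pair.f1` would fail with "cannot find symbol",
    // mirroring the compile error in the log:
    //   static Function untyped = pair -> pair.f1;   // does not compile

    public static void main(String[] args) {
        System.out.println(typed.apply(new Pair<>("answer", 42))); // prints 42
    }
}
```

On JDK 11 such inference failures surface as hard compile errors where older toolchains may have been more lenient, which matches the JDK11-only breakage reported here.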

[jira] [Closed] (FLINK-17678) Create flink-sql-connector-hbase module to shade HBase

2020-06-22 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-17678.
---
Resolution: Fixed

Fixed in master (1.12.0): 6c7416a733505ca2dc16e68208ac718953a7eb0e

> Create flink-sql-connector-hbase module to shade HBase
> --
>
> Key: FLINK-17678
> URL: https://issues.apache.org/jira/browse/FLINK-17678
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / HBase
>Reporter: ShenDa
>Assignee: ShenDa
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Currently, Flink doesn't contain an HBase uber jar, so users have to add the 
> HBase dependency manually.
> Could I create a new module called flink-sql-connector-hbase, like the 
> Elasticsearch and Kafka SQL connectors?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #12748: [FLINK-18324][docs-zh] Translate updated data type into Chinese

2020-06-22 Thread GitBox


flinkbot commented on pull request #12748:
URL: https://github.com/apache/flink/pull/12748#issuecomment-647876138


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 2a146962687424a20b253ed3fcc42700e416375d (Tue Jun 23 
02:55:07 UTC 2020)
   
✅ no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18324) Translate updated data type and function page into Chinese

2020-06-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-18324:
---
Labels: pull-request-available  (was: )

> Translate updated data type and function page into Chinese
> --
>
> Key: FLINK-18324
> URL: https://issues.apache.org/jira/browse/FLINK-18324
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation, Table SQL / API
>Reporter: Timo Walther
>Assignee: Yubin Li
>Priority: Major
>  Labels: pull-request-available
>
> The Chinese translations of the pages updated in FLINK-18248 and FLINK-18065 
> need an update.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-18116) Manually test E2E performance on Flink 1.11

2020-06-22 Thread Zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijiang reassigned FLINK-18116:


Assignee: Aihua Li  (was: Zhijiang)

> Manually test E2E performance on Flink 1.11
> ---
>
> Key: FLINK-18116
> URL: https://issues.apache.org/jira/browse/FLINK-18116
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Core, API / DataStream, API / State Processor, 
> Build System, Client / Job Submission
>Affects Versions: 1.11.0
>Reporter: Aihua Li
>Assignee: Aihua Li
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.11.0
>
>
> It mainly verifies that performance is not worse than the 1.10 version by 
> checking end-to-end performance test metrics, such as QPS and latency.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-18116) Manually test E2E performance on Flink 1.11

2020-06-22 Thread Zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijiang reassigned FLINK-18116:


Assignee: Zhijiang  (was: Aihua Li)

> Manually test E2E performance on Flink 1.11
> ---
>
> Key: FLINK-18116
> URL: https://issues.apache.org/jira/browse/FLINK-18116
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Core, API / DataStream, API / State Processor, 
> Build System, Client / Job Submission
>Affects Versions: 1.11.0
>Reporter: Aihua Li
>Assignee: Zhijiang
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.11.0
>
>
> It mainly verifies that performance is not worse than the 1.10 version by 
> checking end-to-end performance test metrics, such as QPS and latency.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] liyubin117 opened a new pull request #12748: [FLINK-18324][docs-zh] Translate updated data type into Chinese

2020-06-22 Thread GitBox


liyubin117 opened a new pull request #12748:
URL: https://github.com/apache/flink/pull/12748


   ## What is the purpose of the change
   
   Translate updated data type into Chinese
   file locate  flink/docs/dev/table/types.zh.md
   https://ci.apache.org/projects/flink/flink-docs-master/dev/table/types.html
   
   ## Brief change log
   
   * translate `flink/docs/dev/table/types.zh.md`
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? no



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (FLINK-18115) Manually test fault-tolerance stability on Flink 1.11

2020-06-22 Thread Zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijiang reassigned FLINK-18115:


Assignee: Aihua Li  (was: Zhijiang)

> Manually test fault-tolerance stability on Flink 1.11
> -
>
> Key: FLINK-18115
> URL: https://issues.apache.org/jira/browse/FLINK-18115
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Core, API / State Processor, Build System, Client 
> / Job Submission
>Affects Versions: 1.11.0
>Reporter: Aihua Li
>Assignee: Aihua Li
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.11.0
>
>
> It mainly checks that the Flink job can recover from various abnormal 
> situations, including disk full, network interruption, inability to connect 
> to ZooKeeper, RPC message timeout, etc. 
> If the job can't be recovered, the test is considered failed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-18115) Manually test fault-tolerance stability on Flink 1.11

2020-06-22 Thread Zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijiang reassigned FLINK-18115:


Assignee: Zhijiang  (was: Aihua Li)

> Manually test fault-tolerance stability on Flink 1.11
> -
>
> Key: FLINK-18115
> URL: https://issues.apache.org/jira/browse/FLINK-18115
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Core, API / State Processor, Build System, Client 
> / Job Submission
>Affects Versions: 1.11.0
>Reporter: Aihua Li
>Assignee: Zhijiang
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.11.0
>
>
> It mainly checks that the Flink job can recover from various abnormal 
> situations, including disk full, network interruption, inability to connect 
> to ZooKeeper, RPC message timeout, etc. 
> If the job can't be recovered, the test is considered failed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong merged pull request #12687: [FLINK-17678][hbase] Support fink-sql-connector-hbase

2020-06-22 Thread GitBox


wuchong merged pull request #12687:
URL: https://github.com/apache/flink/pull/12687


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong commented on pull request #12687: [FLINK-17678][hbase] Support fink-sql-connector-hbase

2020-06-22 Thread GitBox


wuchong commented on pull request #12687:
URL: https://github.com/apache/flink/pull/12687#issuecomment-647872696


   Build is passed, merging...



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-16835) Replace TableConfig with Configuration

2020-06-22 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142559#comment-17142559
 ] 

Jark Wu commented on FLINK-16835:
-

+1 to not introduce {{DECIMAL_CONTEXT}} and +1 for {{table.local-time-zone}}.

> Replace TableConfig with Configuration
> --
>
> Key: FLINK-16835
> URL: https://issues.apache.org/jira/browse/FLINK-16835
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Timo Walther
>Priority: Major
>
> In order to allow reading and writing of configuration from a file or 
> string-based properties. We should consider removing {{TableConfig}} and 
> fully rely on a Configuration-based object with {{ConfigOptions}}.
> This effort was partially already started which is why 
> {{TableConfig.getConfiguration}} exists.
> However, we should clarify if we would like to have control and traceability 
> over layered configurations such as {{flink-conf,yaml < 
> StreamExecutionEnvironment < TableEnvironment < Query}}. Maybe the 
> {{Configuration}} class is not the right abstraction for this. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
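The layered-configuration question raised in FLINK-16835 (flink-conf.yaml < StreamExecutionEnvironment < TableEnvironment < Query) can be modeled as a chain of fallback lookups, where each layer overrides the one below it. A minimal sketch, assuming a hypothetical `LayeredConfig` class; this is an illustrative model only, not Flink's actual `Configuration` API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class LayeredConfig {
    private final Map<String, String> values = new HashMap<>();
    private final LayeredConfig parent; // lower-precedence layer, may be null

    public LayeredConfig(LayeredConfig parent) { this.parent = parent; }

    public void set(String key, String value) { values.put(key, value); }

    // Look up in this layer first, then fall back to lower-precedence layers.
    public Optional<String> get(String key) {
        if (values.containsKey(key)) {
            return Optional.of(values.get(key));
        }
        return parent == null ? Optional.empty() : parent.get(key);
    }

    public static void main(String[] args) {
        LayeredConfig flinkConf = new LayeredConfig(null);
        flinkConf.set("table.local-time-zone", "UTC");
        LayeredConfig tableEnv = new LayeredConfig(flinkConf);
        tableEnv.set("table.local-time-zone", "Asia/Shanghai");
        LayeredConfig query = new LayeredConfig(tableEnv);

        // The query layer sees the nearest override.
        System.out.println(query.get("table.local-time-zone").get()); // Asia/Shanghai
    }
}
```

A model like this keeps traceability (each layer knows its parent), which is exactly the property the discussion notes a single flat `Configuration` object may lack.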


[GitHub] [flink] wuchong commented on pull request #12727: [FLINK-17292][docs] Translate Fault Tolerance training lesson to Chinese

2020-06-22 Thread GitBox


wuchong commented on pull request #12727:
URL: https://github.com/apache/flink/pull/12727#issuecomment-647871033


   @klion26  could you help to review this?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (FLINK-18313) Update Hive dialect doc about VIEW

2020-06-22 Thread Zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijiang reassigned FLINK-18313:


Assignee: Rui Li  (was: Zhijiang)

> Update Hive dialect doc about VIEW
> --
>
> Key: FLINK-18313
> URL: https://issues.apache.org/jira/browse/FLINK-18313
> Project: Flink
>  Issue Type: Task
>  Components: Connectors / Hive, Documentation
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Need to update the doc to mention:
> # Views created in Flink cannot be queried in Hive
> # ALTER VIEW is not supported in SQL Client



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-18313) Update Hive dialect doc about VIEW

2020-06-22 Thread Zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijiang reassigned FLINK-18313:


Assignee: Rui Li  (was: Rui Li)

> Update Hive dialect doc about VIEW
> --
>
> Key: FLINK-18313
> URL: https://issues.apache.org/jira/browse/FLINK-18313
> Project: Flink
>  Issue Type: Task
>  Components: Connectors / Hive, Documentation
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Need to update the doc to mention:
> # Views created in Flink cannot be queried in Hive
> # ALTER VIEW is not supported in SQL Client



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-18313) Update Hive dialect doc about VIEW

2020-06-22 Thread Zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijiang reassigned FLINK-18313:


Assignee: Zhijiang  (was: Rui Li)

> Update Hive dialect doc about VIEW
> --
>
> Key: FLINK-18313
> URL: https://issues.apache.org/jira/browse/FLINK-18313
> Project: Flink
>  Issue Type: Task
>  Components: Connectors / Hive, Documentation
>Reporter: Rui Li
>Assignee: Zhijiang
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Need to update the doc to mention:
> # Views created in Flink cannot be queried in Hive
> # ALTER VIEW is not supported in SQL Client



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-18313) Update Hive dialect doc about VIEW

2020-06-22 Thread Zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijiang reassigned FLINK-18313:


Assignee: Rui Li

> Update Hive dialect doc about VIEW
> --
>
> Key: FLINK-18313
> URL: https://issues.apache.org/jira/browse/FLINK-18313
> Project: Flink
>  Issue Type: Task
>  Components: Connectors / Hive, Documentation
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Need to update the doc to mention:
> # Views created in Flink cannot be queried in Hive
> # ALTER VIEW is not supported in SQL Client



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18412) JdbcFullTest failed to compile on JDK11

2020-06-22 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-18412:

Affects Version/s: 1.12.0

> JdbcFullTest failed to compile on JDK11
> ---
>
> Key: FLINK-18412
> URL: https://issues.apache.org/jira/browse/FLINK-18412
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Tests
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Dian Fu
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.11.0
>
>
> master: 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3928=logs=9fca669f-5c5f-59c7-4118-e31c641064f0=946871de-358d-5815-3994-8175615bc253
> release-1.11: 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3929=logs=9fca669f-5c5f-59c7-4118-e31c641064f0=946871de-358d-5815-3994-8175615bc253
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3929=logs=6caf31d6-847a-526e-9624-468e053467d6
> {code}
> 2020-06-22T20:19:50.2157534Z [INFO] 
> -
> 2020-06-22T20:19:50.2158031Z [ERROR] COMPILATION ERROR : 
> 2020-06-22T20:19:50.2158826Z [INFO] 
> -
> 2020-06-22T20:19:50.2159987Z [ERROR] 
> /__w/2/s/flink-connectors/flink-connector-jdbc/src/test/java/org/apache/flink/connector/jdbc/internal/JdbcFullTest.java:[137,51]
>  cannot find symbol
> 2020-06-22T20:19:50.2160676Z   symbol:   variable f1
> 2020-06-22T20:19:50.2161236Z   location: variable tuple2 of type 
> java.lang.Object
> 2020-06-22T20:19:50.2163372Z [ERROR] 
> /__w/2/s/flink-connectors/flink-connector-jdbc/src/test/java/org/apache/flink/connector/jdbc/internal/JdbcFullTest.java:[136,33]
>  incompatible types: cannot infer functional interface descriptor for 
> org.apache.flink.connector.jdbc.internal.JdbcBatchingOutputFormat.StatementExecutorFactory
> 2020-06-22T20:19:50.2164788Z [INFO] 2 errors 
> 2020-06-22T20:19:50.2165569Z [INFO] 
> -
> 2020-06-22T20:19:50.2166430Z [INFO] 
> 
> 2020-06-22T20:19:50.2167374Z [INFO] Reactor Summary:
> 2020-06-22T20:19:50.2167713Z [INFO] 
> 2020-06-22T20:19:50.2168486Z [INFO] force-shading 
> .. SUCCESS [  5.905 s]
> 2020-06-22T20:19:50.2169067Z [INFO] flink 
> .. SUCCESS [ 10.173 s]
> 2020-06-22T20:19:50.2169978Z [INFO] flink-annotations 
> .. SUCCESS [  1.637 s]
> 2020-06-22T20:19:50.2170980Z [INFO] flink-test-utils-parent 
>  SUCCESS [  0.117 s]
> 2020-06-22T20:19:50.2171877Z [INFO] flink-test-utils-junit 
> . SUCCESS [  1.224 s]
> 2020-06-22T20:19:50.2172896Z [INFO] flink-metrics 
> .. SUCCESS [  0.101 s]
> 2020-06-22T20:19:50.2173788Z [INFO] flink-metrics-core 
> . SUCCESS [  1.726 s]
> 2020-06-22T20:19:50.2175058Z [INFO] flink-core 
> . SUCCESS [ 29.372 s]
> 2020-06-22T20:19:50.2175982Z [INFO] flink-java 
> . SUCCESS [  5.577 s]
> 2020-06-22T20:19:50.2176868Z [INFO] flink-queryable-state 
> .. SUCCESS [  0.085 s]
> 2020-06-22T20:19:50.2177760Z [INFO] flink-queryable-state-client-java 
> .. SUCCESS [  1.619 s]
> 2020-06-22T20:19:50.2178600Z [INFO] flink-filesystems 
> .. SUCCESS [  0.105 s]
> 2020-06-22T20:19:50.2179500Z [INFO] flink-hadoop-fs 
>  SUCCESS [ 20.792 s]
> 2020-06-22T20:19:50.2180402Z [INFO] flink-runtime 
> .. SUCCESS [01:51 min]
> 2020-06-22T20:19:50.2181462Z [INFO] flink-scala 
>  SUCCESS [ 36.797 s]
> 2020-06-22T20:19:50.2182326Z [INFO] flink-mapr-fs 
> .. SUCCESS [  0.848 s]
> 2020-06-22T20:19:50.2183372Z [INFO] flink-filesystems :: 
> flink-fs-hadoop-shaded  SUCCESS [  4.422 s]
> 2020-06-22T20:19:50.2184407Z [INFO] flink-s3-fs-base 
> ... SUCCESS [  2.085 s]
> 2020-06-22T20:19:50.2185259Z [INFO] flink-s3-fs-hadoop 
> . SUCCESS [  6.051 s]
> 2020-06-22T20:19:50.2186131Z [INFO] flink-s3-fs-presto 
> . SUCCESS [ 10.325 s]
> 2020-06-22T20:19:50.2186990Z [INFO] flink-swift-fs-hadoop 
> .. SUCCESS [ 22.021 s]
> 2020-06-22T20:19:50.2187820Z [INFO] flink-oss-fs-hadoop 
>  SUCCESS [  6.407 s]
> 2020-06-22T20:19:50.2188686Z 

[GitHub] [flink] wangyang0918 commented on a change in pull request #12054: [FLINK-17579] Allow user to set the prefix of TaskManager's ResourceID in standalone mode

2020-06-22 Thread GitBox


wangyang0918 commented on a change in pull request #12054:
URL: https://github.com/apache/flink/pull/12054#discussion_r443927993



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/taskexecutor/TaskManagerRunner.java
##
@@ -458,4 +462,11 @@ private static String determineTaskManagerBindAddressByConnectingToResourceManag
HostBindPolicy bindPolicy = HostBindPolicy.fromString(configuration.getString(TaskManagerOptions.HOST_BIND_POLICY));
return bindPolicy == HostBindPolicy.IP ? taskManagerAddress.getHostAddress() : taskManagerAddress.getHostName();
}
+
+   @VisibleForTesting
+   static ResourceID getTaskManagerResourceID(Configuration config) throws Exception {
+   final String resourceID = config.get(TaskManagerOptions.TASK_MANAGER_RESOURCE_ID);
+   return resourceID != null ?
+   new ResourceID(resourceID) : new ResourceID(InetAddress.getLocalHost().getHostName() + "-" + new AbstractID().toString().substring(0, 6));

Review comment:
   Well, my only concern is whether it is safe to put special characters 
(e.g. `:`) in the `ResourceID`. I am not aware that this could happen.
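
A hedged sketch of the generation logic under review, plus one way to address the reviewer's concern about special characters. The `sanitize` step is purely illustrative of the concern, not Flink's actual behavior, and the names are simplified stand-ins for the real classes:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.UUID;

public class ResourceIdDemo {
    // Mirrors the logic in the diff: use the configured ID if present,
    // otherwise derive one from the hostname plus a short random suffix.
    static String taskManagerResourceId(String configured) throws UnknownHostException {
        if (configured != null) {
            return configured;
        }
        String host = InetAddress.getLocalHost().getHostName();
        String suffix = UUID.randomUUID().toString().substring(0, 6);
        return host + "-" + suffix;
    }

    // Hypothetical guard: restrict user-provided IDs to a safe alphabet so
    // characters like ':' cannot leak into downstream identifiers.
    static String sanitize(String id) {
        return id.replaceAll("[^A-Za-z0-9_.-]", "_");
    }

    public static void main(String[] args) throws UnknownHostException {
        System.out.println(sanitize(taskManagerResourceId("tm:1"))); // prints tm_1
    }
}
```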





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-17688) Support consuming Kinesis' enhanced fanout for flink-connector-kinesis

2020-06-22 Thread roland (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142556#comment-17142556
 ] 

roland commented on FLINK-17688:


Thanks, I'll read the FLIP first.

> Support consuming Kinesis' enhanced fanout for flink-connector-kinesis
> --
>
> Key: FLINK-17688
> URL: https://issues.apache.org/jira/browse/FLINK-17688
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Kinesis
>Affects Versions: 1.10.1
>Reporter: roland
>Assignee: Danny Cranmer
>Priority: Major
>
> AWS Kinesis enhanced fanout is a feature that allows consumers to consume a 
> separated sub-stream without competing with other consumers. 
> ([https://docs.aws.amazon.com/streams/latest/dev/building-enhanced-consumers-api.html])
> Yet, currently flink-connector-kinesis can only consume main streams, which 
> may lead to tense competition.
> A support for the enhanced fanout feature is quite useful for AWS users.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17845) Can't remove a table connector property with ALTER TABLE

2020-06-22 Thread Rui Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142551#comment-17142551
 ] 

Rui Li commented on FLINK-17845:


[~jark] You're right that Hive supports replacing columns, but it doesn't 
support replacing properties, which I believe is also a valid requirement.

bq. I think this is useful to remind users that there are some typo configurations.

I thought users could add arbitrary properties to a table, no?

> Can't remove a table connector property with ALTER TABLE
> 
>
> Key: FLINK-17845
> URL: https://issues.apache.org/jira/browse/FLINK-17845
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: Fabian Hueske
>Priority: Major
>
> It is not possible to remove an existing table property from a table.
> Looking at the [source 
> code|https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/sqlexec/SqlToOperationConverter.java#L295]
>  this seems to be the intended semantics, but it seems counter-intuitive to 
> me.
> If I create a table with the following statement:
> {code}
> CREATE TABLE `testTable` (
>   id INT
> )
> WITH (
> 'connector.type' = 'kafka',
> 'connector.version' = 'universal',
> 'connector.topicX' = 'test',  -- Woops, I made a typo here
> [...]
> )
> {code}
> The statement will be successfully executed. However, the table cannot be 
> used due to the typo.
> Fixing the typo with the following DDL is not possible:
> {code}
> ALTER TABLE `testTable` SET (
> 'connector.type' = 'kafka',
> 'connector.version' = 'universal',
> 'connector.topic' = 'test'  -- Fixing the typo
> )
> {code}
> because the key {{connector.topicX}} is not removed.
> Right now it seems that the only way to fix a table with an invalid key is to 
> DROP and CREATE it. I think that this use case should be supported by ALTER 
> TABLE.
> I would even argue that the expected behavior is that previous properties are 
> removed and replaced by the new properties.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
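The merge-versus-replace semantics at issue in FLINK-17845 can be shown with a tiny model of the table's property map: `mergeSet` mirrors the current ALTER TABLE ... SET behavior (stale keys survive), `replaceSet` the behavior the reporter argues for. Both method names are hypothetical, for illustration only:

```java
import java.util.HashMap;
import java.util.Map;

public class AlterTableSemantics {
    // Current (merge) semantics: new keys are added and existing keys are
    // overwritten, but stale keys such as the mistyped 'connector.topicX' survive.
    static Map<String, String> mergeSet(Map<String, String> existing, Map<String, String> updates) {
        Map<String, String> result = new HashMap<>(existing);
        result.putAll(updates);
        return result;
    }

    // Proposed (replace) semantics: previous properties are discarded entirely,
    // so a mistyped key can be removed. `existing` is intentionally ignored.
    static Map<String, String> replaceSet(Map<String, String> existing, Map<String, String> updates) {
        return new HashMap<>(updates);
    }

    public static void main(String[] args) {
        Map<String, String> existing = new HashMap<>();
        existing.put("connector.type", "kafka");
        existing.put("connector.topicX", "test"); // the typo from the example

        Map<String, String> updates = new HashMap<>();
        updates.put("connector.type", "kafka");
        updates.put("connector.topic", "test");   // the fix

        System.out.println(mergeSet(existing, updates).containsKey("connector.topicX"));   // prints true
        System.out.println(replaceSet(existing, updates).containsKey("connector.topicX")); // prints false
    }
}
```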


[jira] [Commented] (FLINK-18412) JdbcFullTest failed to compile on JDK11

2020-06-22 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142554#comment-17142554
 ] 

Dian Fu commented on FLINK-18412:
-

cc [~fsk119]

> JdbcFullTest failed to compile on JDK11
> ---
>
> Key: FLINK-18412
> URL: https://issues.apache.org/jira/browse/FLINK-18412
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Tests
>Affects Versions: 1.11.0
>Reporter: Dian Fu
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.11.0
>
>
> master: 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3928=logs=9fca669f-5c5f-59c7-4118-e31c641064f0=946871de-358d-5815-3994-8175615bc253
> release-1.11: 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3929=logs=9fca669f-5c5f-59c7-4118-e31c641064f0=946871de-358d-5815-3994-8175615bc253
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3929=logs=6caf31d6-847a-526e-9624-468e053467d6
> {code}
> 2020-06-22T20:19:50.2157534Z [INFO] 
> -
> 2020-06-22T20:19:50.2158031Z [ERROR] COMPILATION ERROR : 
> 2020-06-22T20:19:50.2158826Z [INFO] 
> -
> 2020-06-22T20:19:50.2159987Z [ERROR] 
> /__w/2/s/flink-connectors/flink-connector-jdbc/src/test/java/org/apache/flink/connector/jdbc/internal/JdbcFullTest.java:[137,51]
>  cannot find symbol
> 2020-06-22T20:19:50.2160676Z   symbol:   variable f1
> 2020-06-22T20:19:50.2161236Z   location: variable tuple2 of type 
> java.lang.Object
> 2020-06-22T20:19:50.2163372Z [ERROR] 
> /__w/2/s/flink-connectors/flink-connector-jdbc/src/test/java/org/apache/flink/connector/jdbc/internal/JdbcFullTest.java:[136,33]
>  incompatible types: cannot infer functional interface descriptor for 
> org.apache.flink.connector.jdbc.internal.JdbcBatchingOutputFormat.StatementExecutorFactory
> 2020-06-22T20:19:50.2164788Z [INFO] 2 errors 
> 2020-06-22T20:19:50.2165569Z [INFO] 
> -
> 2020-06-22T20:19:50.2166430Z [INFO] 
> 
> 2020-06-22T20:19:50.2167374Z [INFO] Reactor Summary:
> 2020-06-22T20:19:50.2167713Z [INFO] 
> 2020-06-22T20:19:50.2168486Z [INFO] force-shading 
> .. SUCCESS [  5.905 s]
> 2020-06-22T20:19:50.2169067Z [INFO] flink 
> .. SUCCESS [ 10.173 s]
> 2020-06-22T20:19:50.2169978Z [INFO] flink-annotations 
> .. SUCCESS [  1.637 s]
> 2020-06-22T20:19:50.2170980Z [INFO] flink-test-utils-parent 
>  SUCCESS [  0.117 s]
> 2020-06-22T20:19:50.2171877Z [INFO] flink-test-utils-junit 
> . SUCCESS [  1.224 s]
> 2020-06-22T20:19:50.2172896Z [INFO] flink-metrics 
> .. SUCCESS [  0.101 s]
> 2020-06-22T20:19:50.2173788Z [INFO] flink-metrics-core 
> . SUCCESS [  1.726 s]
> 2020-06-22T20:19:50.2175058Z [INFO] flink-core 
> . SUCCESS [ 29.372 s]
> 2020-06-22T20:19:50.2175982Z [INFO] flink-java 
> . SUCCESS [  5.577 s]
> 2020-06-22T20:19:50.2176868Z [INFO] flink-queryable-state 
> .. SUCCESS [  0.085 s]
> 2020-06-22T20:19:50.2177760Z [INFO] flink-queryable-state-client-java 
> .. SUCCESS [  1.619 s]
> 2020-06-22T20:19:50.2178600Z [INFO] flink-filesystems 
> .. SUCCESS [  0.105 s]
> 2020-06-22T20:19:50.2179500Z [INFO] flink-hadoop-fs 
>  SUCCESS [ 20.792 s]
> 2020-06-22T20:19:50.2180402Z [INFO] flink-runtime 
> .. SUCCESS [01:51 min]
> 2020-06-22T20:19:50.2181462Z [INFO] flink-scala 
>  SUCCESS [ 36.797 s]
> 2020-06-22T20:19:50.2182326Z [INFO] flink-mapr-fs 
> .. SUCCESS [  0.848 s]
> 2020-06-22T20:19:50.2183372Z [INFO] flink-filesystems :: 
> flink-fs-hadoop-shaded  SUCCESS [  4.422 s]
> 2020-06-22T20:19:50.2184407Z [INFO] flink-s3-fs-base 
> ... SUCCESS [  2.085 s]
> 2020-06-22T20:19:50.2185259Z [INFO] flink-s3-fs-hadoop 
> . SUCCESS [  6.051 s]
> 2020-06-22T20:19:50.2186131Z [INFO] flink-s3-fs-presto 
> . SUCCESS [ 10.325 s]
> 2020-06-22T20:19:50.2186990Z [INFO] flink-swift-fs-hadoop 
> .. SUCCESS [ 22.021 s]
> 2020-06-22T20:19:50.2187820Z [INFO] flink-oss-fs-hadoop 
>  SUCCESS [  6.407 s]
> 

[jira] [Commented] (FLINK-18335) NotifyCheckpointAbortedITCase.testNotifyCheckpointAborted time outs

2020-06-22 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142549#comment-17142549
 ] 

Dian Fu commented on FLINK-18335:
-

Another instance: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3928=logs=9fca669f-5c5f-59c7-4118-e31c641064f0=baf26b34-3c6a-54e8-f93f-cf269b32f802

>  NotifyCheckpointAbortedITCase.testNotifyCheckpointAborted time outs
> 
>
> Key: FLINK-18335
> URL: https://issues.apache.org/jira/browse/FLINK-18335
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Tests
>Affects Versions: 1.12.0
>Reporter: Piotr Nowojski
>Assignee: Yun Tang
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3582=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=f508e270-48d6-5f1e-3138-42a17e0714f0
> {noformat}
> [ERROR] Errors: 
> [ERROR]   
> NotifyCheckpointAbortedITCase.testNotifyCheckpointAborted:182->verifyAllOperatorsNotifyAborted:195->Object.wait:502->Object.wait:-2
>  » TestTimedOut
> {noformat}
> CC [~yunta]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18412) JdbcFullTest failed to compile on JDK11

2020-06-22 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-18412:

Description: 
master: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3928=logs=9fca669f-5c5f-59c7-4118-e31c641064f0=946871de-358d-5815-3994-8175615bc253
release-1.11: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3929=logs=9fca669f-5c5f-59c7-4118-e31c641064f0=946871de-358d-5815-3994-8175615bc253
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3929=logs=6caf31d6-847a-526e-9624-468e053467d6

{code}
2020-06-22T20:19:50.2157534Z [INFO] 
-
2020-06-22T20:19:50.2158031Z [ERROR] COMPILATION ERROR : 
2020-06-22T20:19:50.2158826Z [INFO] 
-
2020-06-22T20:19:50.2159987Z [ERROR] /__w/2/s/flink-connectors/flink-connector-jdbc/src/test/java/org/apache/flink/connector/jdbc/internal/JdbcFullTest.java:[137,51] cannot find symbol
2020-06-22T20:19:50.2160676Z   symbol:   variable f1
2020-06-22T20:19:50.2161236Z   location: variable tuple2 of type java.lang.Object
2020-06-22T20:19:50.2163372Z [ERROR] /__w/2/s/flink-connectors/flink-connector-jdbc/src/test/java/org/apache/flink/connector/jdbc/internal/JdbcFullTest.java:[136,33] incompatible types: cannot infer functional interface descriptor for org.apache.flink.connector.jdbc.internal.JdbcBatchingOutputFormat.StatementExecutorFactory
2020-06-22T20:19:50.2164788Z [INFO] 2 errors 
2020-06-22T20:19:50.2165569Z [INFO] ------------------------------------------------------------------------
2020-06-22T20:19:50.2166430Z [INFO] 

2020-06-22T20:19:50.2167374Z [INFO] Reactor Summary:
2020-06-22T20:19:50.2167713Z [INFO] 
2020-06-22T20:19:50.2168486Z [INFO] force-shading .. SUCCESS [  5.905 s]
2020-06-22T20:19:50.2169067Z [INFO] flink .. SUCCESS [ 10.173 s]
2020-06-22T20:19:50.2169978Z [INFO] flink-annotations .. SUCCESS [  1.637 s]
2020-06-22T20:19:50.2170980Z [INFO] flink-test-utils-parent  SUCCESS [  0.117 s]
2020-06-22T20:19:50.2171877Z [INFO] flink-test-utils-junit . SUCCESS [  1.224 s]
2020-06-22T20:19:50.2172896Z [INFO] flink-metrics .. SUCCESS [  0.101 s]
2020-06-22T20:19:50.2173788Z [INFO] flink-metrics-core . SUCCESS [  1.726 s]
2020-06-22T20:19:50.2175058Z [INFO] flink-core . SUCCESS [ 29.372 s]
2020-06-22T20:19:50.2175982Z [INFO] flink-java . SUCCESS [  5.577 s]
2020-06-22T20:19:50.2176868Z [INFO] flink-queryable-state .. SUCCESS [  0.085 s]
2020-06-22T20:19:50.2177760Z [INFO] flink-queryable-state-client-java .. SUCCESS [  1.619 s]
2020-06-22T20:19:50.2178600Z [INFO] flink-filesystems .. SUCCESS [  0.105 s]
2020-06-22T20:19:50.2179500Z [INFO] flink-hadoop-fs  SUCCESS [ 20.792 s]
2020-06-22T20:19:50.2180402Z [INFO] flink-runtime .. SUCCESS [01:51 min]
2020-06-22T20:19:50.2181462Z [INFO] flink-scala  SUCCESS [ 36.797 s]
2020-06-22T20:19:50.2182326Z [INFO] flink-mapr-fs .. SUCCESS [  0.848 s]
2020-06-22T20:19:50.2183372Z [INFO] flink-filesystems :: flink-fs-hadoop-shaded  SUCCESS [  4.422 s]
2020-06-22T20:19:50.2184407Z [INFO] flink-s3-fs-base ... SUCCESS [  2.085 s]
2020-06-22T20:19:50.2185259Z [INFO] flink-s3-fs-hadoop . SUCCESS [  6.051 s]
2020-06-22T20:19:50.2186131Z [INFO] flink-s3-fs-presto . SUCCESS [ 10.325 s]
2020-06-22T20:19:50.2186990Z [INFO] flink-swift-fs-hadoop .. SUCCESS [ 22.021 s]
2020-06-22T20:19:50.2187820Z [INFO] flink-oss-fs-hadoop  SUCCESS [  6.407 s]
2020-06-22T20:19:50.2188686Z [INFO] flink-azure-fs-hadoop .. SUCCESS [  8.868 s]
2020-06-22T20:19:50.2189526Z [INFO] flink-optimizer  SUCCESS [ 10.922 s]
2020-06-22T20:19:50.2190385Z [INFO] flink-streaming-java ... SUCCESS [ 14.119 s]
2020-06-22T20:19:50.2191563Z [INFO] flink-clients .. SUCCESS [  2.558 s]
2020-06-22T20:19:50.2192425Z [INFO] flink-test-utils ... SUCCESS [  1.837 s]
2020-06-22T20:19:50.2193609Z [INFO] flink-runtime-web .. SUCCESS [02:01 min]

{code}
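For context on the compile error quoted above: javac on JDK 11 could not infer the functional interface descriptor for the lambda at JdbcFullTest.java:136, so its parameter `tuple2` fell back to `java.lang.Object` and the access `tuple2.f1` could not resolve. The sketch below reproduces this class of failure with the usual fix, an explicitly typed lambda parameter. All types here are hypothetical stand-ins, not Flink's real `Tuple2` or `StatementExecutorFactory`:

```java
public class InferenceSketch {

    // Stand-in for Flink's Tuple2 (hypothetical, not the real class).
    static class Tuple2<A, B> {
        final A f0;
        final B f1;
        Tuple2(A f0, B f1) { this.f0 = f0; this.f1 = f1; }
    }

    // Stand-in for a generic factory interface like StatementExecutorFactory.
    interface ExecutorFactory<T> {
        String describe(T value);
    }

    static <T> String run(ExecutorFactory<T> factory, T value) {
        return factory.describe(value);
    }

    public static void main(String[] args) {
        Tuple2<Boolean, Integer> t = new Tuple2<>(true, 42);
        // When the compiler cannot infer T from the call context (as in the
        // failure above), the lambda parameter is effectively Object and
        // `.f1` does not resolve. Explicitly typing the parameter pins the
        // functional interface descriptor:
        String s = run((Tuple2<Boolean, Integer> tuple2) -> "f1=" + tuple2.f1, t);
        System.out.println(s);  // prints "f1=42"
    }
}
```

An explicit cast to the functional interface type at the lambda's use site works equally well; both give javac a concrete target type instead of asking it to infer one.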