[GitHub] [flink] flinkbot edited a comment on issue #10741: [FLINK-15443][jdbc] Fix mismatch between Java float and JDBC float

2020-01-01 Thread GitBox
flinkbot edited a comment on issue #10741: [FLINK-15443][jdbc] Fix mismatch 
between Java float and JDBC float
URL: https://github.com/apache/flink/pull/10741#issuecomment-570129932
 
 
   
   ## CI report:
   
   * b1d08ddf274f9d0f101708d896fc5ecd4bd78e05 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142840208) 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10742: [FLINK-15442] Harden the Avro Confluent Schema Registry nightly end-to-end test

2020-01-01 Thread GitBox
flinkbot edited a comment on issue #10742: [FLINK-15442] Harden the Avro 
Confluent Schema Registry nightly end-to-end test
URL: https://github.com/apache/flink/pull/10742#issuecomment-570129960
 
 
   
   ## CI report:
   
   * b54283493f1ab8a44b5c4387012adf0524a54267 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142840214) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4035)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10743: Merge pull request #1 from apache/master

2020-01-01 Thread GitBox
flinkbot edited a comment on issue #10743: Merge pull request #1 from 
apache/master
URL: https://github.com/apache/flink/pull/10743#issuecomment-570134251
 
 
   
   ## CI report:
   
   * 417781c872725986e79b2392cf6290ece2a6e3fe Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142841675) 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10732: [FLINK-14980][docs] add function ddl docs

2020-01-01 Thread GitBox
flinkbot edited a comment on issue #10732: [FLINK-14980][docs] add function ddl 
docs
URL: https://github.com/apache/flink/pull/10732#issuecomment-569896244
 
 
   
   ## CI report:
   
   * 6fd808adc772aa42c3ed9968bbd9593c28940539 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142732194) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4014)
 
   * b5c952a2c63c55776a601d3ddf9d5b703784949e Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142835136) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4034)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Commented] (FLINK-15448) Make "ResourceID#toString" more descriptive

2020-01-01 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17006647#comment-17006647
 ] 

Xintong Song commented on FLINK-15448:
--

I wonder how many cases we actually have where we want to log a TM's host 
information together with its ResourceID but the host information is not 
available. I believe in most cases, if not all, such information can easily be 
obtained. Taking your example, you can get the host information in 
'notifyHeartbeatTimeout' simply by calling 
'registeredTaskManagers.get(resourceID).f0.getHostname()'.
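As a minimal, self-contained sketch of the lookup pattern described above (all names are hypothetical; a plain Map stands in for the real `registeredTaskManagers` structure and Flink's actual types are not reproduced):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: resolve a TaskManager's host from its ResourceID at
// the logging call site, instead of encoding the host into the ResourceID.
public class HeartbeatTimeoutSketch {
    // stands in for the registered-TaskManagers map (ResourceID -> hostname)
    static final Map<String, String> taskManagerHosts = new HashMap<>();

    static String describeTimeout(String resourceId) {
        String host = taskManagerHosts.get(resourceId);
        // include the host only when it is actually known
        return "The heartbeat of TaskManager with id " + resourceId
                + (host != null ? " (" + host + ")" : "") + " timed out.";
    }

    public static void main(String[] args) {
        taskManagerHosts.put("container_e01_0001", "worker-node-3");
        System.out.println(describeTimeout("container_e01_0001"));
        // -> The heartbeat of TaskManager with id container_e01_0001 (worker-node-3) timed out.
    }
}
```

This keeps the ResourceID itself unchanged, so the Yarn container-ID convention is preserved.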

It's true that initializing the ResourceID with the host information on Yarn is 
an easier approach. But this approach also raises a backwards-compatibility 
concern. There might be users who have already built their own monitoring / 
analysis systems that depend on the consistency between TM ResourceIDs in Flink 
logs / metrics and container IDs in Yarn logs / metrics. If we change the 
convention for how the TM ResourceID is generated, these users will have to 
change their systems as well when they upgrade to a newer Flink version. I'm 
not sure the convenience of changing the ResourceID, rather than providing the 
host information at the logging call sites, is worth breaking such backwards 
compatibility.

As additional information: logging the TM host is not always helpful. In 
containerized scenarios such as K8s, where each container has its own IP 
address, the TM hostname does not reflect which machine the container is 
launched on. Even for Yarn, Hadoop 3.x already supports containerized 
applications, and there are already 
[discussions|http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Building-with-Hadoop-3-td31395.html]
 in the community regarding Flink's Hadoop 3.x support.

> Make "ResourceID#toString" more descriptive
> ---
>
> Key: FLINK-15448
> URL: https://issues.apache.org/jira/browse/FLINK-15448
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.9.1
>Reporter: Victor Wong
>Priority: Major
>
> With Flink on Yarn, we sometimes run into an exception like this:
> {code:java}
> java.util.concurrent.TimeoutException: The heartbeat of TaskManager with id 
> container_  timed out.
> {code}
> We'd like to find out the host of the lost TaskManager so we can log into it 
> for more details, but we have to search the earlier logs for the host 
> information, which is a little time-consuming.
> Maybe we can add more descriptive information to ResourceID of Yarn 
> containers, e.g. "container_xxx@host_name:port_number".
> Here's the demo:
> {code:java}
> class ResourceID {
>   final String resourceId;
>   final String details;
>
>   public ResourceID(String resourceId) {
>     this.resourceId = resourceId;
>     this.details = resourceId;
>   }
>
>   public ResourceID(String resourceId, String details) {
>     this.resourceId = resourceId;
>     this.details = details;
>   }
>
>   @Override
>   public String toString() {
>     return details;
>   }
> }
>
> // in flink-yarn
> private void startTaskExecutorInContainer(Container container) {
>   final String containerIdStr = container.getId().toString();
>   final String containerDetail = container.getId() + "@" + container.getNodeId();
>   final ResourceID resourceId = new ResourceID(containerIdStr, containerDetail);
>   ...
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #10700: [FLINK-15383][table sql / planner & legacy planner] Using sink schema field name instead of query schema field name for UpsertStreamTableSin

2020-01-01 Thread GitBox
flinkbot edited a comment on issue #10700: [FLINK-15383][table sql / planner & 
legacy planner] Using sink schema field name instead of query schema field name 
for UpsertStreamTableSink.
URL: https://github.com/apache/flink/pull/10700#issuecomment-569055468
 
 
   
   ## CI report:
   
   * 4e596c6982db0ff2416530e688d7f6d45329f465 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142375475) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3938)
 
   * bbb633a82d437b51b1551d550dc05d3bafcecab0 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142614951) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3974)
 
   * 9e65c9bd9f94a9741a8967f5ed7f8b926655dd26 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142644145) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3986)
 
   * 721deb991f908eb91ba6dc0622584f1ea76d45dc Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/142646604) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3988)
 
   * f54bb5b85d434c034686114b3b50655c062b340a Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142648987) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3989)
 
   * 8cf3dd6ab802dcf2717c7495b41fa52861e39dae Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142727386) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4006)
 
   * 6984fe71af335b265abaaae3fa524be760e02563 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Created] (FLINK-15453) Remove unneeded HiveShim methods

2020-01-01 Thread Rui Li (Jira)
Rui Li created FLINK-15453:
--

 Summary: Remove unneeded HiveShim methods
 Key: FLINK-15453
 URL: https://issues.apache.org/jira/browse/FLINK-15453
 Project: Flink
  Issue Type: Task
  Components: Connectors / Hive
Reporter: Rui Li








[GitHub] [flink] wuchong commented on a change in pull request #10733: [FLINK-15446][table][docs] Improve "Connect to External Systems" documentation page

2020-01-01 Thread GitBox
wuchong commented on a change in pull request #10733: 
[FLINK-15446][table][docs] Improve "Connect to External Systems" documentation 
page
URL: https://github.com/apache/flink/pull/10733#discussion_r362390104
 
 

 ##
 File path: docs/dev/table/connect.md
 ##
 @@ -500,6 +501,39 @@ rowtime:
 The following watermark strategies are supported:
 
 
+
+{% highlight sql %}
+-- Sets a watermark strategy for strictly ascending rowtime attributes. Emits 
a watermark of the
+-- maximum observed timestamp so far. Rows that have a timestamp smaller than 
the max timestamp
+-- are not late.
+CREATE TABLE MyTable (
+  ts_field TIMESTAMP(3),
+  WATERMARK FOR ts_field AS ts_field
+) WITH (
+  ...
+)
+
+-- Sets a watermark strategy for ascending rowtime attributes. Emits a 
watermark of the maximum
+-- observed timestamp so far minus 1. Rows that have a timestamp equal to the 
max timestamp
+-- are not late.
+CREATE TABLE MyTable (
+  ts_field TIMESTAMP(3),
+  WATERMARK FOR ts_field AS ts_field - INTERVAL '0.001' SECOND
+) WITH (
+  ...
+)
+
+-- Sets a watermark strategy for rowtime attributes which are out-of-order by 
a bounded time interval.
 
 Review comment:
   The difference is that `ascending timestamp` must use `INTERVAL '0.001' 
SECOND`. 




[GitHub] [flink] wuchong commented on a change in pull request #10733: [FLINK-15446][table][docs] Improve "Connect to External Systems" documentation page

2020-01-01 Thread GitBox
wuchong commented on a change in pull request #10733: 
[FLINK-15446][table][docs] Improve "Connect to External Systems" documentation 
page
URL: https://github.com/apache/flink/pull/10733#discussion_r362389973
 
 

 ##
 File path: docs/dev/table/connect.md
 ##
 @@ -500,6 +501,39 @@ rowtime:
 The following watermark strategies are supported:
 
 
+
+{% highlight sql %}
+-- Sets a watermark strategy for strictly ascending rowtime attributes. Emits 
a watermark of the
+-- maximum observed timestamp so far. Rows that have a timestamp smaller than 
the max timestamp
+-- are not late.
+CREATE TABLE MyTable (
+  ts_field TIMESTAMP(3),
+  WATERMARK FOR ts_field AS ts_field
+) WITH (
+  ...
+)
+
+-- Sets a watermark strategy for ascending rowtime attributes. Emits a 
watermark of the maximum
+-- observed timestamp so far minus 1. Rows that have a timestamp equal to the 
max timestamp
+-- are not late.
+CREATE TABLE MyTable (
+  ts_field TIMESTAMP(3),
+  WATERMARK FOR ts_field AS ts_field - INTERVAL '0.001' SECOND
 
 Review comment:
   This is the example. 




[GitHub] [flink] wuchong commented on a change in pull request #10733: [FLINK-15446][table][docs] Improve "Connect to External Systems" documentation page

2020-01-01 Thread GitBox
wuchong commented on a change in pull request #10733: 
[FLINK-15446][table][docs] Improve "Connect to External Systems" documentation 
page
URL: https://github.com/apache/flink/pull/10733#discussion_r362389797
 
 

 ##
 File path: docs/dev/table/connect.md
 ##
 @@ -371,9 +335,23 @@ schema:
 
 
 
-For *each field*, the following properties can be declared in addition to the 
column's name and type:
+In order to declare time attributes in the schema, the following ways are 
supported:
 
 
+
+{% highlight sql %}
+CREATE TABLE MyTable (
+  MyField1 AS PROCTIME(), -- declares this field as a processing-time attribute
+  MyField2 TIMESTAMP(3),
+  MyField3 BOOLEAN,
 
 Review comment:
   Yes, we can. 
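   For illustration, a hedged sketch of what "yes" could look like here (the renamed field is hypothetical and not part of the original diff): a computed column can re-expose an existing physical field under a new name in the DDL.

   {% highlight sql %}
   CREATE TABLE MyTable (
     MyField1 AS PROCTIME(),      -- processing-time attribute
     MyField2 TIMESTAMP(3),
     MyRenamedField AS MyField2,  -- computed column re-exposing MyField2 under a new name
     MyField3 BOOLEAN
   ) WITH (
     ...
   )
   {% endhighlight %}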




[GitHub] [flink] wuchong commented on issue #10700: [FLINK-15383][table sql / planner & legacy planner] Using sink schema field name instead of query schema field name for UpsertStreamTableSink.

2020-01-01 Thread GitBox
wuchong commented on issue #10700: [FLINK-15383][table sql / planner & legacy 
planner] Using sink schema field name instead of query schema field name for 
UpsertStreamTableSink.
URL: https://github.com/apache/flink/pull/10700#issuecomment-570136912
 
 
   Waiting for Travis. 




[GitHub] [flink] JingsongLi commented on a change in pull request #10733: [FLINK-15446][table][docs] Improve "Connect to External Systems" documentation page

2020-01-01 Thread GitBox
JingsongLi commented on a change in pull request #10733: 
[FLINK-15446][table][docs] Improve "Connect to External Systems" documentation 
page
URL: https://github.com/apache/flink/pull/10733#discussion_r362389137
 
 

 ##
 File path: docs/dev/table/connect.md
 ##
 @@ -500,6 +501,39 @@ rowtime:
 The following watermark strategies are supported:
 
 
+
+{% highlight sql %}
+-- Sets a watermark strategy for strictly ascending rowtime attributes. Emits 
a watermark of the
+-- maximum observed timestamp so far. Rows that have a timestamp smaller than 
the max timestamp
+-- are not late.
+CREATE TABLE MyTable (
+  ts_field TIMESTAMP(3),
+  WATERMARK FOR ts_field AS ts_field
+) WITH (
+  ...
+)
+
+-- Sets a watermark strategy for ascending rowtime attributes. Emits a 
watermark of the maximum
+-- observed timestamp so far minus 1. Rows that have a timestamp equal to the 
max timestamp
+-- are not late.
+CREATE TABLE MyTable (
+  ts_field TIMESTAMP(3),
+  WATERMARK FOR ts_field AS ts_field - INTERVAL '0.001' SECOND
 
 Review comment:
   Can we add an example of how to write `watermarksPeriodicAscending`?




[GitHub] [flink] JingsongLi commented on a change in pull request #10733: [FLINK-15446][table][docs] Improve "Connect to External Systems" documentation page

2020-01-01 Thread GitBox
JingsongLi commented on a change in pull request #10733: 
[FLINK-15446][table][docs] Improve "Connect to External Systems" documentation 
page
URL: https://github.com/apache/flink/pull/10733#discussion_r362389061
 
 

 ##
 File path: docs/dev/table/connect.md
 ##
 @@ -500,6 +501,39 @@ rowtime:
 The following watermark strategies are supported:
 
 
+
+{% highlight sql %}
+-- Sets a watermark strategy for strictly ascending rowtime attributes. Emits 
a watermark of the
+-- maximum observed timestamp so far. Rows that have a timestamp smaller than 
the max timestamp
+-- are not late.
+CREATE TABLE MyTable (
+  ts_field TIMESTAMP(3),
+  WATERMARK FOR ts_field AS ts_field
+) WITH (
+  ...
+)
+
+-- Sets a watermark strategy for ascending rowtime attributes. Emits a 
watermark of the maximum
+-- observed timestamp so far minus 1. Rows that have a timestamp equal to the 
max timestamp
+-- are not late.
+CREATE TABLE MyTable (
+  ts_field TIMESTAMP(3),
+  WATERMARK FOR ts_field AS ts_field - INTERVAL '0.001' SECOND
+) WITH (
+  ...
+)
+
+-- Sets a watermark strategy for rowtime attributes which are out-of-order by 
a bounded time interval.
 
 Review comment:
   It is the same as the example above... Why is the description different?
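   For reference, the bounded out-of-order strategy that the truncated comment above refers to is conventionally written with an explicit delay interval (the 5-second delay below is an arbitrary example, not taken from the diff):

   {% highlight sql %}
   -- Sets a watermark strategy for rowtime attributes which are out-of-order
   -- by a bounded time interval. Emits watermarks which are the maximum
   -- observed timestamp minus the specified delay.
   CREATE TABLE MyTable (
     ts_field TIMESTAMP(3),
     WATERMARK FOR ts_field AS ts_field - INTERVAL '5' SECOND
   ) WITH (
     ...
   )
   {% endhighlight %}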




[jira] [Updated] (FLINK-15411) Planner can't prune partition on DATE/TIMESTAMP columns

2020-01-01 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-15411:

Fix Version/s: (was: 1.11.0)

> Planner can't prune partition on DATE/TIMESTAMP columns
> ---
>
> Key: FLINK-15411
> URL: https://issues.apache.org/jira/browse/FLINK-15411
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Bowen Li
>Assignee: Jingsong Lee
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Hive should work after the planner is fixed; see: 
> [https://github.com/apache/flink/pull/10690#issuecomment-569021089]





[jira] [Resolved] (FLINK-15411) Planner can't prune partition on DATE/TIMESTAMP columns

2020-01-01 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu resolved FLINK-15411.
-
Resolution: Fixed

[FLINK-15411][hive] Fix listPartitions in HiveCatalog & Add time prune 
partition tests
 - 1.11.0: dc0cf3be354019f9f017d73eda08e4129a962209
 - 1.10.0: 3ee517e262c51865d07a0cd8ed651c2ffb1a428b

[FLINK-15411][table-planner-blink] Fix prune partition on DATE/TIME/TIMESTAMP 
columns
 - 1.11.0: eadbd4c4c289497de6b38672d095dd1d0819e1b2
 - 1.10.0: f3c829c6e36aeda8e35ff3f54302d87d87f3361b

[hotfix][table-planner-blink] Fix comparison of timestamp with local time zone
 - 1.11.0: e1e040df9df2735dfdf22711760c7fa426dbdb77
 - 1.10.0: 01587e90ee16ff0829d93597ca950ae90d28fbef


> Planner can't prune partition on DATE/TIMESTAMP columns
> ---
>
> Key: FLINK-15411
> URL: https://issues.apache.org/jira/browse/FLINK-15411
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Bowen Li
>Assignee: Jingsong Lee
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0, 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Hive should work after the planner is fixed; see: 
> [https://github.com/apache/flink/pull/10690#issuecomment-569021089]





[GitHub] [flink] lirui-apache commented on issue #10721: [FLINK-15429][hive] HiveObjectConversion implementations need to hand…

2020-01-01 Thread GitBox
lirui-apache commented on issue #10721: [FLINK-15429][hive] 
HiveObjectConversion implementations need to hand…
URL: https://github.com/apache/flink/pull/10721#issuecomment-570136160
 
 
   @bowenli86 Just rebased and fixed conflicts.




[GitHub] [flink] JingsongLi commented on a change in pull request #10733: [FLINK-15446][table][docs] Improve "Connect to External Systems" documentation page

2020-01-01 Thread GitBox
JingsongLi commented on a change in pull request #10733: 
[FLINK-15446][table][docs] Improve "Connect to External Systems" documentation 
page
URL: https://github.com/apache/flink/pull/10733#discussion_r362388431
 
 

 ##
 File path: docs/dev/table/connect.md
 ##
 @@ -426,9 +404,32 @@ For more information about time handling in Flink and 
especially event-time, we
 
 In order to control the event-time behavior for tables, Flink provides 
predefined timestamp extractors and watermark strategies.
 
+Please refer to [CREATE TABLE statements](sql/create.html#create-table) for 
more information about defining time attributes in DDL.
+
 The following timestamp extractors are supported:
 
 
+
+{% highlight sql %}
+-- use the existing TIMESTAMP(3) field in schema as the rowtime attribute
+CREATE TABLE MyTable (
+  ts_field TIMESTAMP(3),
+  WATERMARK FOR ts_field AS ...
+) WITH (
+  ...
+)
+
 
 Review comment:
   Can we add a comment to tell the user? Because the DDL is now the first tab.




[GitHub] [flink] JingsongLi commented on a change in pull request #10733: [FLINK-15446][table][docs] Improve "Connect to External Systems" documentation page

2020-01-01 Thread GitBox
JingsongLi commented on a change in pull request #10733: 
[FLINK-15446][table][docs] Improve "Connect to External Systems" documentation 
page
URL: https://github.com/apache/flink/pull/10733#discussion_r362388249
 
 

 ##
 File path: docs/dev/table/connect.md
 ##
 @@ -164,21 +164,8 @@ CREATE TABLE MyUserTable (
   'connector.properties.zookeeper.connect' = 'localhost:2181',
   'connector.properties.bootstrap.servers' = 'localhost:9092',
 
-  -- specify the update-mode for streaming tables
 
 Review comment:
   Also, does the sentence above ("For streaming queries, an update mode declares 
how to communicate between a dynamic table and the storage system for continuous 
queries.") need to be removed or not?




[GitHub] [flink] wuchong merged pull request #10704: [FLINK-15411][table-planner-blink] Fix prune partition on DATE/TIME/TIMESTAMP columns

2020-01-01 Thread GitBox
wuchong merged pull request #10704: [FLINK-15411][table-planner-blink] Fix 
prune partition on DATE/TIME/TIMESTAMP columns
URL: https://github.com/apache/flink/pull/10704
 
 
   




[GitHub] [flink] JingsongLi commented on a change in pull request #10733: [FLINK-15446][table][docs] Improve "Connect to External Systems" documentation page

2020-01-01 Thread GitBox
JingsongLi commented on a change in pull request #10733: 
[FLINK-15446][table][docs] Improve "Connect to External Systems" documentation 
page
URL: https://github.com/apache/flink/pull/10733#discussion_r362388064
 
 

 ##
 File path: docs/dev/table/connect.md
 ##
 @@ -164,21 +164,8 @@ CREATE TABLE MyUserTable (
   'connector.properties.zookeeper.connect' = 'localhost:2181',
   'connector.properties.bootstrap.servers' = 'localhost:9092',
 
-  -- specify the update-mode for streaming tables
 
 Review comment:
   Should the sentence above, "The connector might already provide a fixed format 
with fields and schema.", be changed to "The connector might already provide a 
fixed format."?




[GitHub] [flink] JingsongLi commented on a change in pull request #10733: [FLINK-15446][table][docs] Improve "Connect to External Systems" documentation page

2020-01-01 Thread GitBox
JingsongLi commented on a change in pull request #10733: 
[FLINK-15446][table][docs] Improve "Connect to External Systems" documentation 
page
URL: https://github.com/apache/flink/pull/10733#discussion_r362387611
 
 

 ##
 File path: docs/dev/table/connect.md
 ##
 @@ -164,21 +164,8 @@ CREATE TABLE MyUserTable (
   'connector.properties.zookeeper.connect' = 'localhost:2181',
   'connector.properties.bootstrap.servers' = 'localhost:9092',
 
 Review comment:
   You need to modify the "avro" above.




[GitHub] [flink] bowenli86 commented on issue #10732: [FLINK-14980][docs] add function ddl docs

2020-01-01 Thread GitBox
bowenli86 commented on issue #10732: [FLINK-14980][docs] add function ddl docs
URL: https://github.com/apache/flink/pull/10732#issuecomment-570134836
 
 
   @flinkbot run azure




[GitHub] [flink] flinkbot edited a comment on issue #10742: [FLINK-15442] Harden the Avro Confluent Schema Registry nightly end-to-end test

2020-01-01 Thread GitBox
flinkbot edited a comment on issue #10742: [FLINK-15442] Harden the Avro 
Confluent Schema Registry nightly end-to-end test
URL: https://github.com/apache/flink/pull/10742#issuecomment-570129960
 
 
   
   ## CI report:
   
   * b54283493f1ab8a44b5c4387012adf0524a54267 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142840214) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4035)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10741: [FLINK-15443][jdbc] Fix mismatch between Java float and JDBC float

2020-01-01 Thread GitBox
flinkbot edited a comment on issue #10741: [FLINK-15443][jdbc] Fix mismatch 
between Java float and JDBC float
URL: https://github.com/apache/flink/pull/10741#issuecomment-570129932
 
 
   
   ## CI report:
   
   * b1d08ddf274f9d0f101708d896fc5ecd4bd78e05 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142840208) 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot commented on issue #10743: Merge pull request #1 from apache/master

2020-01-01 Thread GitBox
flinkbot commented on issue #10743: Merge pull request #1 from apache/master
URL: https://github.com/apache/flink/pull/10743#issuecomment-570134251
 
 
   
   ## CI report:
   
   * 417781c872725986e79b2392cf6290ece2a6e3fe UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] JingsongLi commented on a change in pull request #10733: [FLINK-15446][table][docs] Improve "Connect to External Systems" documentation page

2020-01-01 Thread GitBox
JingsongLi commented on a change in pull request #10733: 
[FLINK-15446][table][docs] Improve "Connect to External Systems" documentation 
page
URL: https://github.com/apache/flink/pull/10733#discussion_r362384327
 
 

 ##
 File path: docs/dev/table/connect.md
 ##
 @@ -371,9 +335,23 @@ schema:
 
 
 
-For *each field*, the following properties can be declared in addition to the 
column's name and type:
+In order to declare time attributes in the schema, the following ways are 
supported:
 
 
+
+{% highlight sql %}
+CREATE TABLE MyTable (
+  MyField1 AS PROCTIME(), -- declares this field as a processing-time attribute
+  MyField2 TIMESTAMP(3),
+  MyField3 BOOLEAN,
 
 Review comment:
   Can we do this rename with a computed column?




[GitHub] [flink] leonardBang commented on issue #10700: [FLINK-15383][table sql / planner & legacy planner] Using sink schema field name instead of query schema field name for UpsertStreamTableSink.

2020-01-01 Thread GitBox
leonardBang commented on issue #10700: [FLINK-15383][table sql / planner & 
legacy planner] Using sink schema field name instead of query schema field name 
for UpsertStreamTableSink.
URL: https://github.com/apache/flink/pull/10700#issuecomment-570131309
 
 
   @wuchong Could you take another look? Thanks.




[GitHub] [flink] flinkbot commented on issue #10743: Merge pull request #1 from apache/master

2020-01-01 Thread GitBox
flinkbot commented on issue #10743: Merge pull request #1 from apache/master
URL: https://github.com/apache/flink/pull/10743#issuecomment-570131088
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 417781c872725986e79b2392cf6290ece2a6e3fe (Thu Jan 02 
07:00:11 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **Invalid pull request title: No valid Jira ID provided**
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] waywtd opened a new pull request #10743: Merge pull request #1 from apache/master

2020-01-01 Thread GitBox
waywtd opened a new pull request #10743: Merge pull request #1 from 
apache/master
URL: https://github.com/apache/flink/pull/10743
 
 
   master merge
   
   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   




[GitHub] [flink] flinkbot commented on issue #10741: [FLINK-15443][jdbc] Fix mismatch between Java float and JDBC float

2020-01-01 Thread GitBox
flinkbot commented on issue #10741: [FLINK-15443][jdbc] Fix mismatch between 
Java float and JDBC float
URL: https://github.com/apache/flink/pull/10741#issuecomment-570129932
 
 
   
   ## CI report:
   
   * b1d08ddf274f9d0f101708d896fc5ecd4bd78e05 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot commented on issue #10742: [FLINK-15442] Harden the Avro Confluent Schema Registry nightly end-to-end test

2020-01-01 Thread GitBox
flinkbot commented on issue #10742: [FLINK-15442] Harden the Avro Confluent 
Schema Registry nightly end-to-end test
URL: https://github.com/apache/flink/pull/10742#issuecomment-570129960
 
 
   
   ## CI report:
   
   * b54283493f1ab8a44b5c4387012adf0524a54267 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10732: [FLINK-14980][docs] add function ddl docs

2020-01-01 Thread GitBox
flinkbot edited a comment on issue #10732: [FLINK-14980][docs] add function ddl 
docs
URL: https://github.com/apache/flink/pull/10732#issuecomment-569896244
 
 
   
   ## CI report:
   
   * 6fd808adc772aa42c3ed9968bbd9593c28940539 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142732194) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4014)
 
   * b5c952a2c63c55776a601d3ddf9d5b703784949e Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142835136) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4034)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] beyond1920 commented on issue #10694: [FLINK-15381] [table-planner-blink] correct collation derive logic on RelSubset in RelMdCollation

2020-01-01 Thread GitBox
beyond1920 commented on issue #10694: [FLINK-15381] [table-planner-blink] 
correct collation derive logic on RelSubset in RelMdCollation
URL: https://github.com/apache/flink/pull/10694#issuecomment-570129426
 
 
   LGTM




[jira] [Commented] (FLINK-15259) HiveInspector.toInspectors() should convert Flink constant to Hive constant

2020-01-01 Thread Rui Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17006614#comment-17006614
 ] 

Rui Li commented on FLINK-15259:


[~phoenixjiangnan] Sure, I'll open a PR for it.

> HiveInspector.toInspectors() should convert Flink constant to Hive constant 
> 
>
> Key: FLINK-15259
> URL: https://issues.apache.org/jira/browse/FLINK-15259
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Bowen Li
>Assignee: Rui Li
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.0, 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> repro test: 
> {code:java}
> public class HiveModuleITCase {
>   @Test
>   public void test() {
>   TableEnvironment tEnv = HiveTestUtils.createTableEnvWithBlinkPlannerBatchMode();
>   tEnv.unloadModule("core");
>   tEnv.loadModule("hive", new HiveModule("2.3.4"));
>   tEnv.sqlQuery("select concat('an', 'bn')");
>   }
> }
> {code}
> It seems that HiveInspector.toInspectors() currently doesn't convert the Flink 
> constant to a Hive constant before calling 
> hiveShim.getObjectInspectorForConstant.
> I don't think it's a blocker



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong commented on issue #10704: [FLINK-15411][table-planner-blink] Fix prune partition on DATE/TIME/TIMESTAMP columns

2020-01-01 Thread GitBox
wuchong commented on issue #10704: [FLINK-15411][table-planner-blink] Fix prune 
partition on DATE/TIME/TIMESTAMP columns
URL: https://github.com/apache/flink/pull/10704#issuecomment-570128751
 
 
   Merging...




[GitHub] [flink] wuchong commented on a change in pull request #10733: [FLINK-15446][table][docs] Improve "Connect to External Systems" documentation page

2020-01-01 Thread GitBox
wuchong commented on a change in pull request #10733: 
[FLINK-15446][table][docs] Improve "Connect to External Systems" documentation 
page
URL: https://github.com/apache/flink/pull/10733#discussion_r362380491
 
 

 ##
 File path: docs/dev/table/connect.md
 ##
 @@ -500,6 +501,39 @@ rowtime:
 The following watermark strategies are supported:
 
 
+
+{% highlight sql %}
+-- Sets a watermark strategy for strictly ascending rowtime attributes. Emits 
a watermark of the
+-- maximum observed timestamp so far. Rows that have a timestamp smaller to 
the max timestamp
+-- are not late.
+CREATE TABLE MyTable (
+  ts_field TIMESTAMP(3),
+  WATERMARK FOR ts_field AS ts_field
+) WITH (
+  ...
+)
+
+-- Sets a watermark strategy for ascending rowtime attributes. Emits a 
watermark of the maximum
+-- observed timestamp so far minus 1. Rows that have a timestamp equal to the 
max timestamp
+-- are not late.
+CREATE TABLE MyTable (
+  ts_field TIMESTAMP(3),
+  WATERMARK FOR ts_field AS ts_field - INTERVAL '0.001' SECOND
+) WITH (
+  ...
+)
+
+-- Sets a watermark strategy for rowtime attributes which are out-of-order by 
a bounded time interval.
 
 Review comment:
   We don't have a shortcut API for this. 
   All the watermark strategies are expressed by expressions; we didn't provide 
built-in strategy functions. So the `ascending timestamp` is a special case of 
`out-of-order timestamp` which has a constant 1 ms delay.




[GitHub] [flink] wuchong commented on a change in pull request #10733: [FLINK-15446][table][docs] Improve "Connect to External Systems" documentation page

2020-01-01 Thread GitBox
wuchong commented on a change in pull request #10733: 
[FLINK-15446][table][docs] Improve "Connect to External Systems" documentation 
page
URL: https://github.com/apache/flink/pull/10733#discussion_r362380166
 
 

 ##
 File path: docs/dev/table/connect.md
 ##
 @@ -500,6 +501,39 @@ rowtime:
 The following watermark strategies are supported:
 
 
+
+{% highlight sql %}
+-- Sets a watermark strategy for strictly ascending rowtime attributes. Emits 
a watermark of the
+-- maximum observed timestamp so far. Rows that have a timestamp smaller to 
the max timestamp
+-- are not late.
+CREATE TABLE MyTable (
+  ts_field TIMESTAMP(3),
+  WATERMARK FOR ts_field AS ts_field
+) WITH (
+  ...
+)
+
+-- Sets a watermark strategy for ascending rowtime attributes. Emits a 
watermark of the maximum
+-- observed timestamp so far minus 1. Rows that have a timestamp equal to the 
max timestamp
+-- are not late.
+CREATE TABLE MyTable (
+  ts_field TIMESTAMP(3),
+  WATERMARK FOR ts_field AS ts_field - INTERVAL '0.001' SECOND
 
 Review comment:
   We don't have a shortcut API for `watermarksPeriodicAscending` in SQL DDL.




[GitHub] [flink] wuchong commented on a change in pull request #10733: [FLINK-15446][table][docs] Improve "Connect to External Systems" documentation page

2020-01-01 Thread GitBox
wuchong commented on a change in pull request #10733: 
[FLINK-15446][table][docs] Improve "Connect to External Systems" documentation 
page
URL: https://github.com/apache/flink/pull/10733#discussion_r362379991
 
 

 ##
 File path: docs/dev/table/connect.md
 ##
 @@ -371,9 +335,23 @@ schema:
 
 
 
-For *each field*, the following properties can be declared in addition to the 
column's name and type:
+In order to declare time attributes in the schema, the following ways are 
supported:
 
 
+
+{% highlight sql %}
+CREATE TABLE MyTable (
+  MyField1 AS PROCTIME(), -- declares this field as a processing-time attribute
+  MyField2 TIMESTAMP(3),
+  MyField3 BOOLEAN,
 
 Review comment:
   There is a little difference between using a computed column and 
`Schema#from()`. Using `Schema#from()`, users can't reference the original 
field name.




[GitHub] [flink] wuchong commented on a change in pull request #10733: [FLINK-15446][table][docs] Improve "Connect to External Systems" documentation page

2020-01-01 Thread GitBox
wuchong commented on a change in pull request #10733: 
[FLINK-15446][table][docs] Improve "Connect to External Systems" documentation 
page
URL: https://github.com/apache/flink/pull/10733#discussion_r362379687
 
 

 ##
 File path: docs/dev/table/connect.md
 ##
 @@ -164,21 +164,8 @@ CREATE TABLE MyUserTable (
   'connector.properties.zookeeper.connect' = 'localhost:2181',
   'connector.properties.bootstrap.servers' = 'localhost:9092',
 
-  -- specify the update-mode for streaming tables
-  'update-mode' = 'append',
-
   -- declare a format for this system
-  'format.type' = 'avro',
 
 Review comment:
   No, we don't support that yet. That's why I changed the example to JSON, which 
is simpler. 




[GitHub] [flink] wuchong commented on a change in pull request #10733: [FLINK-15446][table][docs] Improve "Connect to External Systems" documentation page

2020-01-01 Thread GitBox
wuchong commented on a change in pull request #10733: 
[FLINK-15446][table][docs] Improve "Connect to External Systems" documentation 
page
URL: https://github.com/apache/flink/pull/10733#discussion_r362380009
 
 

 ##
 File path: docs/dev/table/connect.md
 ##
 @@ -426,9 +404,32 @@ For more information about time handling in Flink and 
especially event-time, we
 
 In order to control the event-time behavior for tables, Flink provides 
predefined timestamp extractors and watermark strategies.
 
+Please refer to [CREATE TABLE statements](sql/create.html#create-table) for 
more information about defining time attributes in DDL.
+
 The following timestamp extractors are supported:
 
 
+
+{% highlight sql %}
+-- use the existing TIMESTAMP(3) field in schema as the rowtime attribute
+CREATE TABLE MyTable (
+  ts_field TIMESTAMP(3),
+  WATERMARK FOR ts_field AS ...
+) WITH (
+  ...
+)
+
 
 Review comment:
   We don't support that yet. 




[GitHub] [flink] bowenli86 commented on issue #10721: [FLINK-15429][hive] HiveObjectConversion implementations need to hand…

2020-01-01 Thread GitBox
bowenli86 commented on issue #10721: [FLINK-15429][hive] HiveObjectConversion 
implementations need to hand…
URL: https://github.com/apache/flink/pull/10721#issuecomment-570127766
 
 
   @lirui-apache LGTM. Can you rebase to latest master? seems there's some 
conflicts




[jira] [Updated] (FLINK-15450) Add kafka topic information to Kafka source name on Flink UI

2020-01-01 Thread Victor Wong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Victor Wong updated FLINK-15450:

Component/s: API / Core

> Add kafka topic information to Kafka source name on Flink UI
> 
>
> Key: FLINK-15450
> URL: https://issues.apache.org/jira/browse/FLINK-15450
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Core, Connectors / Kafka
>Reporter: Victor Wong
>Priority: Major
>
> If the user did not specify a custom name for the source, e.g. a Kafka source, 
> Flink would use the default name "Custom Source", which is not intuitive 
> (Sink is the same).
> {code:java}
> Source: Custom Source -> Filter -> Map -> Sink: Unnamed
> {code}
> If we could add the Kafka topic information to the default Source/Sink name, 
> it would be very helpful to catch the consuming/publishing topic quickly, 
> like this:
> {code:java}
> Source: srcTopic0, srcTopic1 -> Filter -> Map -> Sink: sinkTopic0, sinkTopic1
> {code}
> *Suggestion* (forgive me if it makes too many changes)
> 1. Add a `name` method to interface `Function`
> {code:java}
> public interface Function extends java.io.Serializable {
>   default String name() { return ""; }
> }
> {code}
> 2. Source/Sink/Other functions override this method depending on their needs.
> {code:java}
> class FlinkKafkaConsumerBase {
> String name() {
>   return this.topicsDescriptor.toString();
> }
> }
> {code}
> 3. Use Function#name if the returned value is not empty.
> {code:java}
> // StreamExecutionEnvironment
>   public <OUT> DataStreamSource<OUT> addSource(SourceFunction<OUT> 
> function) {
>   String sourceName = function.name();
>   if (StringUtils.isNullOrWhitespaceOnly(sourceName)) {
>   sourceName = "Custom Source";
>   }
>   return addSource(function, sourceName);
>   }
> {code}
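Taken together, the three steps above can be sketched as a self-contained snippet. This is plain Java with hypothetical stand-ins (`KafkaSourceStandIn`, `resolveSourceName`) for Flink's `FlinkKafkaConsumerBase` and `StreamExecutionEnvironment#addSource`, not the actual Flink classes:

```java
import java.io.Serializable;

public class SourceNaming {
    // Step 1: proposed default name() on Function (empty means "no custom name").
    interface Function extends Serializable {
        default String name() { return ""; }
    }

    // Step 2: a source overrides name() with its topics (stand-in for
    // FlinkKafkaConsumerBase#name returning topicsDescriptor.toString()).
    static class KafkaSourceStandIn implements Function {
        private final String topics;
        KafkaSourceStandIn(String topics) { this.topics = topics; }
        @Override public String name() { return topics; }
    }

    // Step 3: fall back to "Custom Source" only when name() is empty,
    // mirroring the null/whitespace check sketched in the issue.
    static String resolveSourceName(Function function) {
        String sourceName = function.name();
        return (sourceName == null || sourceName.trim().isEmpty())
                ? "Custom Source" : sourceName;
    }

    public static void main(String[] args) {
        System.out.println(resolveSourceName(new KafkaSourceStandIn("srcTopic0, srcTopic1"))); // prints "srcTopic0, srcTopic1"
        System.out.println(resolveSourceName(new Function() {})); // prints "Custom Source"
    }
}
```

Because `name()` has an empty default, existing `Function` implementations keep today's "Custom Source" behavior unchanged.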





[jira] [Commented] (FLINK-15448) Make "ResourceID#toString" more descriptive

2020-01-01 Thread Victor Wong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17006609#comment-17006609
 ] 

Victor Wong commented on FLINK-15448:
-

[~xintongsong], thanks for your reply, I think your concern is very reasonable, 
but I still have some questions.

_the right approach should be providing more information at the places where 
the logs are generated_
---
The information we need, like the host of the TM, is not always available or 
convenient to access in some places. Take `HeartbeatListener` for example: 

{code:java}
// org.apache.flink.runtime.jobmaster.JobMaster.TaskManagerHeartbeatListener#notifyHeartbeatTimeout
public void notifyHeartbeatTimeout(ResourceID resourceID) {
	validateRunsInMainThread();
	// *I think it's not easy to construct a correct log information here*.
	disconnectTaskManager(
		resourceID,
		new TimeoutException("Heartbeat of TaskManager with id " + resourceID + " timed out."));
}
{code}

Besides, I think it's error-prone to rely on everyone remembering to provide the 
exact information needed when logging.

What about initializing the Yarn ResourceID with both container and host 
information, i.e. `new ResourceID(container.getId().toString() + "@" + 
container.getNodeId())`. Any suggestions?

> Make "ResourceID#toString" more descriptive
> ---
>
> Key: FLINK-15448
> URL: https://issues.apache.org/jira/browse/FLINK-15448
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.9.1
>Reporter: Victor Wong
>Priority: Major
>
> With Flink on Yarn, sometimes we ran into an exception like this:
> {code:java}
> java.util.concurrent.TimeoutException: The heartbeat of TaskManager with id 
> container_  timed out.
> {code}
> We'd like to find out the host of the lost TaskManager so we can log into it 
> for more details, but we have to check the previous logs for the host 
> information, which is a little time-consuming.
> Maybe we can add more descriptive information to ResourceID of Yarn 
> containers, e.g. "container_xxx@host_name:port_number".
> Here's the demo:
> {code:java}
> class ResourceID {
>   final String resourceId;
>   final String details;
>   public ResourceID(String resourceId) {
> this.resourceId = resourceId;
> this.details = resourceId;
>   }
>   public ResourceID(String resourceId, String details) {
> this.resourceId = resourceId;
> this.details = details;
>   }
>   public String toString() {
> return details;
>   } 
> }
> // in flink-yarn
> private void startTaskExecutorInContainer(Container container) {
>   final String containerIdStr = container.getId().toString();
>   final String containerDetail = container.getId() + "@" + container.getNodeId();
>   final ResourceID resourceId = new ResourceID(containerIdStr, containerDetail);
>   ...
> }
> {code}
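The demo above can be fleshed out into a runnable sketch. This is plain Java, not Flink's actual `ResourceID`; the container id and node are hypothetical values standing in for Yarn's `container.getId()` and `container.getNodeId()`. It shows that equality can stay keyed on the stable id while `toString()` carries the extra detail:

```java
public class ResourceIdDemo {
    static final class ResourceID {
        final String resourceId;
        final String details;

        ResourceID(String resourceId) {
            this(resourceId, resourceId);
        }

        ResourceID(String resourceId, String details) {
            this.resourceId = resourceId;
            this.details = details;
        }

        // Equality and hashing stay based on the stable id, so adding
        // descriptive detail does not change how the ID behaves in maps.
        @Override public boolean equals(Object o) {
            return o instanceof ResourceID
                    && ((ResourceID) o).resourceId.equals(resourceId);
        }

        @Override public int hashCode() { return resourceId.hashCode(); }

        // toString() now surfaces the host, as requested in the issue.
        @Override public String toString() { return details; }
    }

    public static void main(String[] args) {
        ResourceID plain = new ResourceID("container_0001");
        ResourceID rich  = new ResourceID("container_0001", "container_0001@host1:8041");
        System.out.println("Heartbeat of TaskManager with id " + rich + " timed out.");
        System.out.println(plain.equals(rich)); // prints "true": same underlying id
    }
}
```

With this, the timeout message would carry the host without changing any call site that treats the ID as an opaque key.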





[GitHub] [flink] flinkbot edited a comment on issue #10740: [FLINK-15432][jdbc] Added flushInterval support for JDBCOutputFormat

2020-01-01 Thread GitBox
flinkbot edited a comment on issue #10740: [FLINK-15432][jdbc] Added 
flushInterval support for JDBCOutputFormat
URL: https://github.com/apache/flink/pull/10740#issuecomment-570113597
 
 
   
   ## CI report:
   
   * 0d69d547cbdb24d74d0a43653bdc43e7ee0c9c4e Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142833898) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4033)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10694: [FLINK-15381] [table-planner-blink] correct collation derive logic on RelSubset in RelMdCollation

2020-01-01 Thread GitBox
flinkbot edited a comment on issue #10694: [FLINK-15381] [table-planner-blink] 
correct collation derive logic on RelSubset in RelMdCollation
URL: https://github.com/apache/flink/pull/10694#issuecomment-569001563
 
 
   
   ## CI report:
   
   * e9b4471c51446e144d7071480515be858959a352 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142355379) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3925)
 
   * a37a0f82b4661edbaff69f873ef7c9a4cd460ed9 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142833888) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4032)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] JingsongLi commented on a change in pull request #10733: [FLINK-15446][table][docs] Improve "Connect to External Systems" documentation page

2020-01-01 Thread GitBox
JingsongLi commented on a change in pull request #10733: 
[FLINK-15446][table][docs] Improve "Connect to External Systems" documentation 
page
URL: https://github.com/apache/flink/pull/10733#discussion_r362376668
 
 

 ##
 File path: docs/dev/table/connect.md
 ##
 @@ -164,21 +164,8 @@ CREATE TABLE MyUserTable (
   'connector.properties.zookeeper.connect' = 'localhost:2181',
   'connector.properties.bootstrap.servers' = 'localhost:9092',
 
-  -- specify the update-mode for streaming tables
-  'update-mode' = 'append',
-
   -- declare a format for this system
-  'format.type' = 'avro',
 
 Review comment:
   Does Avro support deriving the schema from the connector?




[GitHub] [flink] JingsongLi commented on a change in pull request #10733: [FLINK-15446][table][docs] Improve "Connect to External Systems" documentation page

2020-01-01 Thread GitBox
JingsongLi commented on a change in pull request #10733: 
[FLINK-15446][table][docs] Improve "Connect to External Systems" documentation 
page
URL: https://github.com/apache/flink/pull/10733#discussion_r362377321
 
 

 ##
 File path: docs/dev/table/connect.md
 ##
 @@ -500,6 +501,39 @@ rowtime:
 The following watermark strategies are supported:
 
 
+
+{% highlight sql %}
+-- Sets a watermark strategy for strictly ascending rowtime attributes. Emits 
a watermark of the
+-- maximum observed timestamp so far. Rows that have a timestamp smaller to 
the max timestamp
+-- are not late.
+CREATE TABLE MyTable (
+  ts_field TIMESTAMP(3),
+  WATERMARK FOR ts_field AS ts_field
+) WITH (
+  ...
+)
+
+-- Sets a watermark strategy for ascending rowtime attributes. Emits a 
watermark of the maximum
+-- observed timestamp so far minus 1. Rows that have a timestamp equal to the 
max timestamp
+-- are not late.
+CREATE TABLE MyTable (
+  ts_field TIMESTAMP(3),
+  WATERMARK FOR ts_field AS ts_field - INTERVAL '0.001' SECOND
 
 Review comment:
   Why is this not the same as in Java/Scala?




[GitHub] [flink] JingsongLi commented on a change in pull request #10733: [FLINK-15446][table][docs] Improve "Connect to External Systems" documentation page

2020-01-01 Thread GitBox
JingsongLi commented on a change in pull request #10733: 
[FLINK-15446][table][docs] Improve "Connect to External Systems" documentation 
page
URL: https://github.com/apache/flink/pull/10733#discussion_r362376988
 
 

 ##
 File path: docs/dev/table/connect.md
 ##
 @@ -371,9 +335,23 @@ schema:
 
 
 
-For *each field*, the following properties can be declared in addition to the 
column's name and type:
+In order to declare time attributes in the schema, the following ways are 
supported:
 
 
+
+{% highlight sql %}
+CREATE TABLE MyTable (
+  MyField1 AS PROCTIME(), -- declares this field as a processing-time attribute
+  MyField2 TIMESTAMP(3),
+  MyField3 BOOLEAN,
 
 Review comment:
   Looks like `MyField3` is not the same as in the Python and Java/Scala examples.
   Can we do this rename with a computed column?




[GitHub] [flink] JingsongLi commented on a change in pull request #10733: [FLINK-15446][table][docs] Improve "Connect to External Systems" documentation page

2020-01-01 Thread GitBox
JingsongLi commented on a change in pull request #10733: 
[FLINK-15446][table][docs] Improve "Connect to External Systems" documentation 
page
URL: https://github.com/apache/flink/pull/10733#discussion_r362377424
 
 

 ##
 File path: docs/dev/table/connect.md
 ##
 @@ -500,6 +501,39 @@ rowtime:
 The following watermark strategies are supported:
 
 
+
+{% highlight sql %}
+-- Sets a watermark strategy for strictly ascending rowtime attributes. Emits 
a watermark of the
+-- maximum observed timestamp so far. Rows that have a timestamp smaller than 
the max timestamp
+-- are not late.
+CREATE TABLE MyTable (
+  ts_field TIMESTAMP(3),
+  WATERMARK FOR ts_field AS ts_field
+) WITH (
+  ...
+)
+
+-- Sets a watermark strategy for ascending rowtime attributes. Emits a 
watermark of the maximum
+-- observed timestamp so far minus 1. Rows that have a timestamp equal to the 
max timestamp
+-- are not late.
+CREATE TABLE MyTable (
+  ts_field TIMESTAMP(3),
+  WATERMARK FOR ts_field AS ts_field - INTERVAL '0.001' SECOND
+) WITH (
+  ...
+)
+
+-- Sets a watermark strategy for rowtime attributes which are out-of-order by 
a bounded time interval.
 
 Review comment:
   Is this the same as the example above? Why "out-of-order"?
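
For contrast with the ascending strategies above, a bounded out-of-order strategy tolerates rows arriving up to a fixed interval behind the maximum observed timestamp. A hedged sketch of what the truncated example presumably continues into (the 5-second interval is illustrative):

```sql
-- Emits a watermark of the maximum observed timestamp so far minus the
-- specified delay. Rows within that delay of the max timestamp are not late.
CREATE TABLE MyTable (
  ts_field TIMESTAMP(3),
  WATERMARK FOR ts_field AS ts_field - INTERVAL '5' SECOND
) WITH (
  ...
)
```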




[GitHub] [flink] JingsongLi commented on a change in pull request #10733: [FLINK-15446][table][docs] Improve "Connect to External Systems" documentation page

2020-01-01 Thread GitBox
JingsongLi commented on a change in pull request #10733: 
[FLINK-15446][table][docs] Improve "Connect to External Systems" documentation 
page
URL: https://github.com/apache/flink/pull/10733#discussion_r362377157
 
 

 ##
 File path: docs/dev/table/connect.md
 ##
 @@ -426,9 +404,32 @@ For more information about time handling in Flink and 
especially event-time, we
 
 In order to control the event-time behavior for tables, Flink provides 
predefined timestamp extractors and watermark strategies.
 
+Please refer to [CREATE TABLE statements](sql/create.html#create-table) for 
more information about defining time attributes in DDL.
+
 The following timestamp extractors are supported:
 
 
+
+{% highlight sql %}
+-- use the existing TIMESTAMP(3) field in schema as the rowtime attribute
+CREATE TABLE MyTable (
+  ts_field TIMESTAMP(3),
+  WATERMARK FOR ts_field AS ...
+) WITH (
+  ...
+)
+
 
 Review comment:
   Do we support `timestampsFromSource`?




[jira] [Commented] (FLINK-12428) Translate the "Event Time" page into Chinese

2020-01-01 Thread Yu Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-12428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17006604#comment-17006604
 ] 

Yu Li commented on FLINK-12428:
---

Hi [~lyy900111], are you still actively working on this? If not, I plan to 
change the fix version to 1.11.0. Thanks.

> Translate the "Event Time" page into Chinese
> 
>
> Key: FLINK-12428
> URL: https://issues.apache.org/jira/browse/FLINK-12428
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Affects Versions: 1.9.0
>Reporter: YangFei
>Assignee: jack
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>
> file location: flink/docs/dev/event_time.zh.md
> [https://ci.apache.org/projects/flink/flink-docs-master/dev/event_time.html]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #10742: [FLINK-15442] Harden the Avro Confluent Schema Registry nightly end-to-end test

2020-01-01 Thread GitBox
flinkbot commented on issue #10742: [FLINK-15442] Harden the Avro Confluent 
Schema Registry nightly end-to-end test
URL: https://github.com/apache/flink/pull/10742#issuecomment-570125108
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit b54283493f1ab8a44b5c4387012adf0524a54267 (Thu Jan 02 
06:04:49 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-15442) Harden the Avro Confluent Schema Registry nightly end-to-end test

2020-01-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15442:
---
Labels: pull-request-available  (was: )

> Harden the Avro Confluent Schema Registry nightly end-to-end test
> -
>
> Key: FLINK-15442
> URL: https://issues.apache.org/jira/browse/FLINK-15442
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Reporter: Yangze Guo
>Assignee: Yangze Guo
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>
> We have already hardened the Avro Confluent Schema Registry test in 
> [FLINK-13567|https://issues.apache.org/jira/browse/FLINK-13567]. However, 
> there are still some defects in the current mechanism.
> * The loop variable _i_ is not safe; it could be modified by the *command*.
> * The process of downloading Kafka 0.10 is not included in the scope of 
> retry_times. I think we need to include it to tolerate transient network 
> issues.
> We need to fix those issues to harden the Avro Confluent Schema Registry 
> nightly end-to-end test.
> cc: [~trohrmann] [~chesnay]





[GitHub] [flink] KarmaGYZ opened a new pull request #10742: [FLINK-15442] Harden the Avro Confluent Schema Registry nightly end-to-end test

2020-01-01 Thread GitBox
KarmaGYZ opened a new pull request #10742: [FLINK-15442] Harden the Avro 
Confluent Schema Registry nightly end-to-end test
URL: https://github.com/apache/flink/pull/10742
 
 
   
   
   ## What is the purpose of the change
   
   Harden the Avro Confluent Schema Registry nightly end-to-end test.
   
   ## Brief change log
   
   - Wrap the error when executing the cleanup command in the 
`retry_times_with_backoff_and_cleanup` function.
   - Harden the looping logic in `retry_times_with_backoff_and_cleanup`
   
   
   ## Verifying this change
   
   Trigger the Avro Confluent Schema Registry nightly end-to-end test.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? no




[GitHub] [flink] godfreyhe edited a comment on issue #10694: [FLINK-15381] [table-planner-blink] correct collation derive logic on RelSubset in RelMdCollation

2020-01-01 Thread GitBox
godfreyhe edited a comment on issue #10694: [FLINK-15381] [table-planner-blink] 
correct collation derive logic on RelSubset in RelMdCollation
URL: https://github.com/apache/flink/pull/10694#issuecomment-570124600
 
 
   > I'm agreed with you.
   > However it's unsafe to throw exception in the else branch.
   
   `FlinkRelMdUniqueKeys` and `FlinkRelMdUniqueGroups` also throw an exception for 
this case, and many test cases will fail once CALCITE-1048 is fixed




[GitHub] [flink] godfreyhe commented on issue #10694: [FLINK-15381] [table-planner-blink] correct collation derive logic on RelSubset in RelMdCollation

2020-01-01 Thread GitBox
godfreyhe commented on issue #10694: [FLINK-15381] [table-planner-blink] 
correct collation derive logic on RelSubset in RelMdCollation
URL: https://github.com/apache/flink/pull/10694#issuecomment-570124600
 
 
   > exception
   
   `FlinkRelMdUniqueKeys` and `FlinkRelMdUniqueGroups` also throw an exception for 
this case, and many test cases will fail once CALCITE-1048 is fixed




[GitHub] [flink] flinkbot commented on issue #10741: [FLINK-15443][jdbc] Fix mismatch between Java float and JDBC float

2020-01-01 Thread GitBox
flinkbot commented on issue #10741: [FLINK-15443][jdbc] Fix mismatch between 
Java float and JDBC float
URL: https://github.com/apache/flink/pull/10741#issuecomment-570124304
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit b1d08ddf274f9d0f101708d896fc5ecd4bd78e05 (Thu Jan 02 
05:57:06 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] JingsongLi opened a new pull request #10741: [FLINK-15443][jdbc] Fix mismatch between Java float and JDBC float

2020-01-01 Thread GitBox
JingsongLi opened a new pull request #10741: [FLINK-15443][jdbc] Fix mismatch 
between Java float and JDBC float
URL: https://github.com/apache/flink/pull/10741
 
 
   ## What is the purpose of the change
   
   Fixes a bug when using the JDBC sink with the float type.
   
   ## Brief change log
   
   In Flink:
   - In SQL, we regard FLOAT as a Java float.
   - But in JDBC, the REAL type is a Java float, while FLOAT/DOUBLE are Java 
doubles.
   
   The data itself is handled correctly in JDBCUtils, but JDBCTypeUtil 
mismatches by mapping the Java float to the JDBC FLOAT type.
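
As context for the mapping mismatch described above, the bug would be hit by a sink table such as the following (hypothetical table name and abbreviated properties; the `connector.type` key follows the Flink 1.10 connector property style):

```sql
-- A Flink FLOAT column arrives as a Java float; it must be declared to the
-- driver as JDBC REAL (single precision), not JDBC FLOAT (double precision).
CREATE TABLE float_sink (
  id INT,
  score FLOAT
) WITH (
  'connector.type' = 'jdbc',
  ...
)
```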
   
   
   ## Verifying this change
   
   `JDBCUpsertTableSinkITCase.testReal`
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no




[GitHub] [flink] flinkbot edited a comment on issue #10737: [FLINK-15430][table-planner-blink] Fix Java 64K method compiling limi…

2020-01-01 Thread GitBox
flinkbot edited a comment on issue #10737: [FLINK-15430][table-planner-blink] 
Fix Java 64K method compiling limi…
URL: https://github.com/apache/flink/pull/10737#issuecomment-570066864
 
 
   
   ## CI report:
   
   * fdd885648d4585ef6a77fc71584aef144c99af64 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142804673) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4022)
 
   * 263bb0c191d7090cc1971a4973fc9153bfffc024 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142831085) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4029)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10740: [FLINK-15432][jdbc] Added flushInterval support for JDBCOutputFormat

2020-01-01 Thread GitBox
flinkbot edited a comment on issue #10740: [FLINK-15432][jdbc] Added 
flushInterval support for JDBCOutputFormat
URL: https://github.com/apache/flink/pull/10740#issuecomment-570113597
 
 
   
   ## CI report:
   
   * 0d69d547cbdb24d74d0a43653bdc43e7ee0c9c4e Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142833898) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4033)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10739: [FLINK-15452]fix error nested udf type missing

2020-01-01 Thread GitBox
flinkbot edited a comment on issue #10739: [FLINK-15452]fix error nested udf 
type missing
URL: https://github.com/apache/flink/pull/10739#issuecomment-570110426
 
 
   
   ## CI report:
   
   * c3944d647f614c097030526f8cb04fc6b04615b8 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142831092) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4030)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Commented] (FLINK-15443) Use JDBC connector write FLOAT value occur ClassCastException

2020-01-01 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17006596#comment-17006596
 ] 

Jark Wu commented on FLINK-15443:
-

1.11.0: 502364676cbe85bfddd05e7620e388bc2b901360
1.10.0: 7395000131135b8fcbcbf9e7045be4b7e4846b4e
1.9.2: TODO

> Use JDBC connector write FLOAT value occur ClassCastException
> -
>
> Key: FLINK-15443
> URL: https://issues.apache.org/jira/browse/FLINK-15443
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.9.1
> Environment: flink version is 1.9.1
>Reporter: Xianxun Ye
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.2, 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> I defined a float type field in a MySQL table. When I use the JDBC connector 
> to write a float value into the DB, a ClassCastException occurs.
> {code:java}
> // code placeholder
> Caused by: java.lang.ClassCastException: java.lang.Float cannot be cast to 
> java.lang.Double, field index: 6, field value: 0.1.
>   at org.apache.flink.api.java.io.jdbc.JDBCUtils.setField(JDBCUtils.java:106)
>   at 
> org.apache.flink.api.java.io.jdbc.JDBCUtils.setRecordToStatement(JDBCUtils.java:63)
>   at 
> org.apache.flink.api.java.io.jdbc.writer.AppendOnlyWriter.addRecord(AppendOnlyWriter.java:56)
>   at 
> org.apache.flink.api.java.io.jdbc.JDBCUpsertOutputFormat.writeRecord(JDBCUpsertOutputFormat.java:144)
> {code}
>  





[jira] [Commented] (FLINK-15443) Use JDBC connector write FLOAT value occur ClassCastException

2020-01-01 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17006595#comment-17006595
 ] 

Jark Wu commented on FLINK-15443:
-

[~lzljs3620320], could you open a pull request against release-1.9? 

> Use JDBC connector write FLOAT value occur ClassCastException
> -
>
> Key: FLINK-15443
> URL: https://issues.apache.org/jira/browse/FLINK-15443
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.9.1
> Environment: flink version is 1.9.1
>Reporter: Xianxun Ye
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.2, 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> I defined a float type field in a MySQL table. When I use the JDBC connector 
> to write a float value into the DB, a ClassCastException occurs.
> {code:java}
> // code placeholder
> Caused by: java.lang.ClassCastException: java.lang.Float cannot be cast to 
> java.lang.Double, field index: 6, field value: 0.1.
>   at org.apache.flink.api.java.io.jdbc.JDBCUtils.setField(JDBCUtils.java:106)
>   at 
> org.apache.flink.api.java.io.jdbc.JDBCUtils.setRecordToStatement(JDBCUtils.java:63)
>   at 
> org.apache.flink.api.java.io.jdbc.writer.AppendOnlyWriter.addRecord(AppendOnlyWriter.java:56)
>   at 
> org.apache.flink.api.java.io.jdbc.JDBCUpsertOutputFormat.writeRecord(JDBCUpsertOutputFormat.java:144)
> {code}
>  





[GitHub] [flink] wuchong merged pull request #10731: [FLINK-15443][jdbc] Fix mismatch between java float and jdbc float

2020-01-01 Thread GitBox
wuchong merged pull request #10731: [FLINK-15443][jdbc] Fix mismatch between 
java float and jdbc float
URL: https://github.com/apache/flink/pull/10731
 
 
   




[GitHub] [flink] flinkbot edited a comment on issue #10625: [FLINK-15259][hive] HiveInspector.toInspectors() should convert Flink…

2020-01-01 Thread GitBox
flinkbot edited a comment on issue #10625: [FLINK-15259][hive] 
HiveInspector.toInspectors() should convert Flink…
URL: https://github.com/apache/flink/pull/10625#issuecomment-567374884
 
 
   
   ## CI report:
   
   * b200938f6abf4bac5e929db791c52289d110054e Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/141714552) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3751)
 
   * de8b409c144285a3d75874e4863a2d1f7fc4336a Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/141863465) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3797)
 
   * d021a016adf64118b785642ffbe1fbe45cb9e15a Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142456762) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3957)
 
   * c95faa623e5e0ea94dcb82df536fbe37e724baac Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142639483) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3983)
 
   * b5f020618f6d894a816de3fc7c0c53af295e7ce2 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142742105) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4018)
 
   * bd3827ead5cdff406f0f0cceac56c7ff3c591be6 Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/142833872) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4031)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10739: [FLINK-15452]fix error nested udf type missing

2020-01-01 Thread GitBox
flinkbot edited a comment on issue #10739: [FLINK-15452]fix error nested udf 
type missing
URL: https://github.com/apache/flink/pull/10739#issuecomment-570110426
 
 
   
   ## CI report:
   
   * c3944d647f614c097030526f8cb04fc6b04615b8 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142831092) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4030)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10737: [FLINK-15430][table-planner-blink] Fix Java 64K method compiling limi…

2020-01-01 Thread GitBox
flinkbot edited a comment on issue #10737: [FLINK-15430][table-planner-blink] 
Fix Java 64K method compiling limi…
URL: https://github.com/apache/flink/pull/10737#issuecomment-570066864
 
 
   
   ## CI report:
   
   * fdd885648d4585ef6a77fc71584aef144c99af64 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142804673) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4022)
 
   * 263bb0c191d7090cc1971a4973fc9153bfffc024 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142831085) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4029)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10735: [hotfix][runtime] Fix variable name typos in TaskInformation.java and…

2020-01-01 Thread GitBox
flinkbot edited a comment on issue #10735: [hotfix][runtime] Fix variable name 
typos in TaskInformation.java and…
URL: https://github.com/apache/flink/pull/10735#issuecomment-570028288
 
 
   
   ## CI report:
   
   * fa6bd8979b2404913c658fc035746f7b2ddbd3ae Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142788492) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4020)
 
   * 904feac2ab81384f6faf2d6e7b53c4c24b1f63bf Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142831077) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4028)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10732: [FLINK-14980][docs] add function ddl docs

2020-01-01 Thread GitBox
flinkbot edited a comment on issue #10732: [FLINK-14980][docs] add function ddl 
docs
URL: https://github.com/apache/flink/pull/10732#issuecomment-569896244
 
 
   
   ## CI report:
   
   * 6fd808adc772aa42c3ed9968bbd9593c28940539 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142732194) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4014)
 
   * b5c952a2c63c55776a601d3ddf9d5b703784949e Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142835136) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4034)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10727: [FLINK-15420][table-planner-blink] Cast string to timestamp will loos…

2020-01-01 Thread GitBox
flinkbot edited a comment on issue #10727: [FLINK-15420][table-planner-blink] 
Cast string to timestamp will loos…
URL: https://github.com/apache/flink/pull/10727#issuecomment-569861765
 
 
   
   ## CI report:
   
   * 9bbb2830a6e6e185ae6a9d4a8d3e2b99c7648d9c Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142717413) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3998)
 
   * 2068b01ca802c8c3a9b267aa951a14e2c55692a4 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142727413) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4008)
 
   * f587800f4516dd046a102bcdd4f9e2a70948d6c7 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142829611) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4024)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10704: [FLINK-15411][table-planner-blink] Fix prune partition on DATE/TIME/TIMESTAMP columns

2020-01-01 Thread GitBox
flinkbot edited a comment on issue #10704: [FLINK-15411][table-planner-blink] 
Fix prune partition on DATE/TIME/TIMESTAMP columns
URL: https://github.com/apache/flink/pull/10704#issuecomment-569239989
 
 
   
   ## CI report:
   
   * de210eacfb754ef4d169bbfb50877d3e03e8c792 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142444279) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3954)
 
   * 749a0addc128db847dcc13b4494148474b50bee2 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142609801) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3970)
 
   * 30cee33e9aa2601b3266871a7f30dda41f8dc0a4 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142720734) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4000)
 
   * 1510d279b5317a3d968dd4245b90975c1269d30f Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142727399) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4007)
 
   * cf5d3c1bafdb05c55e91a71ec662ce2bee513177 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142831069) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4027)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Commented] (FLINK-15259) HiveInspector.toInspectors() should convert Flink constant to Hive constant

2020-01-01 Thread Bowen Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17006586#comment-17006586
 ] 

Bowen Li commented on FLINK-15259:
--

[~lirui] there's a conflict when merging into 1.9. If we want to backport to 1.9, 
can you open a separate hotfix PR?

> HiveInspector.toInspectors() should convert Flink constant to Hive constant 
> 
>
> Key: FLINK-15259
> URL: https://issues.apache.org/jira/browse/FLINK-15259
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Bowen Li
>Assignee: Rui Li
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.0, 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> repro test: 
> {code:java}
> public class HiveModuleITCase {
>   @Test
>   public void test() {
>   TableEnvironment tEnv = 
> HiveTestUtils.createTableEnvWithBlinkPlannerBatchMode();
>   tEnv.unloadModule("core");
>   tEnv.loadModule("hive", new HiveModule("2.3.4"));
>   tEnv.sqlQuery("select concat('an', 'bn')");
>   }
> }
> {code}
> It seems that currently HiveInspector.toInspectors() doesn't convert the Flink 
> constant to a Hive constant before calling 
> hiveShim.getObjectInspectorForConstant.
> I don't think it's a blocker.
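The conversion gap described above can be illustrated with a small, self-contained Java sketch. The `toHiveConstant` helper below is hypothetical: it stands in for the kind of unwrapping the Hive shim needs before building a constant ObjectInspector, and is not Flink's actual internal API.

```java
// Hypothetical sketch of the fix described above: unwrap a Flink planner
// constant into the plain Java value Hive expects *before* asking the shim
// for a constant ObjectInspector. The helper name is illustrative only and
// is not Flink's actual internal API.
class ConstantConversionSketch {

    /** Converts a Flink-internal constant into the Java type Hive expects. */
    static Object toHiveConstant(Object flinkValue) {
        if (flinkValue instanceof CharSequence && !(flinkValue instanceof String)) {
            // Flink's internal string representation (e.g. BinaryString) is a
            // CharSequence; Hive constant inspectors expect java.lang.String.
            return flinkValue.toString();
        }
        // Primitive wrappers (Integer, Double, Boolean, ...) pass through unchanged.
        return flinkValue;
    }

    public static void main(String[] args) {
        // StringBuilder stands in for a non-String internal representation.
        Object hiveConstant = toHiveConstant(new StringBuilder("an"));
        System.out.println(hiveConstant.getClass().getSimpleName() + ": " + hiveConstant);
        // prints "String: an"
    }
}
```

Without such unwrapping, the shim receives an internal type it does not recognize, which is consistent with the failure seen in the repro test.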



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-15259) HiveInspector.toInspectors() should convert Flink constant to Hive constant

2020-01-01 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li closed FLINK-15259.

Fix Version/s: (was: 1.9.2)
   Resolution: Fixed

master: 6f3a0778ba352e76c96160c3125e4e0ee88addba
1.10: f8b6bed54e21ce3a51f9d63389e5a037e89f9ca8

> HiveInspector.toInspectors() should convert Flink constant to Hive constant 
> 
>
> Key: FLINK-15259
> URL: https://issues.apache.org/jira/browse/FLINK-15259
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Bowen Li
>Assignee: Rui Li
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.0, 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> repro test: 
> {code:java}
> public class HiveModuleITCase {
>   @Test
>   public void test() {
>   TableEnvironment tEnv = 
> HiveTestUtils.createTableEnvWithBlinkPlannerBatchMode();
>   tEnv.unloadModule("core");
>   tEnv.loadModule("hive", new HiveModule("2.3.4"));
>   tEnv.sqlQuery("select concat('an', 'bn')");
>   }
> }
> {code}
> It seems that currently HiveInspector.toInspectors() doesn't convert the Flink 
> constant to a Hive constant before calling 
> hiveShim.getObjectInspectorForConstant.
> I don't think it's a blocker.





[GitHub] [flink] bowenli86 closed pull request #10625: [FLINK-15259][hive] HiveInspector.toInspectors() should convert Flink…

2020-01-01 Thread GitBox
bowenli86 closed pull request #10625: [FLINK-15259][hive] 
HiveInspector.toInspectors() should convert Flink…
URL: https://github.com/apache/flink/pull/10625
 
 
   




[GitHub] [flink] qiuxiafei commented on a change in pull request #9373: [FLINK-13596][ml] Add two utils for Table transformations

2020-01-01 Thread GitBox
qiuxiafei commented on a change in pull request #9373: [FLINK-13596][ml] Add 
two utils for Table transformations
URL: https://github.com/apache/flink/pull/9373#discussion_r362369888
 
 

 ##
 File path: 
flink-ml-parent/flink-ml-lib/src/test/java/org/apache/flink/ml/common/utils/DataStreamConversionUtilTest.java
 ##
 @@ -0,0 +1,101 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.flink.ml.common.utils;
+
+import org.apache.flink.api.common.functions.MapFunction;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.ml.common.MLEnvironmentFactory;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.TableSchema;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.types.Row;
+
+import org.junit.Assert;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.ExpectedException;
+
+/**
+ * Unit Test for DataStreamConversionUtil.
+ */
+public class DataStreamConversionUtilTest {
+   @Rule
+   public ExpectedException thrown = ExpectedException.none();
+
+   @Test
+   public void test() throws Exception {
+   StreamExecutionEnvironment env = 
MLEnvironmentFactory.getDefault().getStreamExecutionEnvironment();
+
+   DataStream<Row> input = env.fromElements(Row.of("a"));
+
+   Table table1 = 
DataStreamConversionUtil.toTable(MLEnvironmentFactory.DEFAULT_ML_ENVIRONMENT_ID,
 input, new String[]{"word"});
+   Assert.assertEquals(
+   new TableSchema(new String[]{"word"}, new 
TypeInformation[]{TypeInformation.of(String.class)}),
+   table1.getSchema()
+   );
+
+   input = input.map(new GenericTypeMap());
+
+   Table table2 = DataStreamConversionUtil.toTable(
+   MLEnvironmentFactory.DEFAULT_ML_ENVIRONMENT_ID,
+   input,
+   new String[]{"word"},
+   new TypeInformation[]{TypeInformation.of(Integer.class)}
+   );
+
+   Assert.assertEquals(
+   new TableSchema(new String[]{"word"}, new 
TypeInformation[]{TypeInformation.of(Integer.class)}),
+   table2.getSchema()
+   );
+
+   Table tableFromDataStreamWithTableSchema = 
DataStreamConversionUtil.toTable(
+   MLEnvironmentFactory.DEFAULT_ML_ENVIRONMENT_ID,
+   input,
+   new TableSchema(
+   new String[]{"word"},
+   new 
TypeInformation[]{TypeInformation.of(Integer.class)}
+   )
+   );
+
+   Assert.assertEquals(
+   new TableSchema(new String[]{"word"}, new 
TypeInformation[]{TypeInformation.of(Integer.class)}),
+   tableFromDataStreamWithTableSchema.getSchema()
+   );
+
+   thrown.expect(ValidationException.class);
+   
DataStreamConversionUtil.toTable(MLEnvironmentFactory.DEFAULT_ML_ENVIRONMENT_ID,
 input, new String[]{"f0"});
 
 Review comment:
   Sorry for the confusion: `"f0"` should be `"word"`. The same exception will 
be raised because the type information of the column is not specified.




[jira] [Closed] (FLINK-15290) Need a way to turn off vectorized orc reader for SQL CLI

2020-01-01 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li closed FLINK-15290.

Fix Version/s: 1.11.0
   Resolution: Fixed

master: f205d754ad513a8e16dd2b9d5fa4586149ac7013
1.10: 694ae9c9c943a8f8cc0aff5aa54ce7207eb3da21

> Need a way to turn off vectorized orc reader for SQL CLI
> 
>
> Key: FLINK-15290
> URL: https://issues.apache.org/jira/browse/FLINK-15290
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0, 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>






[GitHub] [flink] bowenli86 closed pull request #10632: [FLINK-15290][hive] Need a way to turn off vectorized orc reader for SQL CLI

2020-01-01 Thread GitBox
bowenli86 closed pull request #10632: [FLINK-15290][hive] Need a way to turn 
off vectorized orc reader for SQL CLI
URL: https://github.com/apache/flink/pull/10632
 
 
   




[GitHub] [flink] qiuxiafei commented on a change in pull request #9373: [FLINK-13596][ml] Add two utils for Table transformations

2020-01-01 Thread GitBox
qiuxiafei commented on a change in pull request #9373: [FLINK-13596][ml] Add 
two utils for Table transformations
URL: https://github.com/apache/flink/pull/9373#discussion_r362369416
 
 

 ##
 File path: 
flink-ml-parent/flink-ml-lib/src/test/java/org/apache/flink/ml/common/utils/DataStreamConversionUtilTest.java
 ##
 @@ -0,0 +1,101 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.flink.ml.common.utils;
+
+import org.apache.flink.api.common.functions.MapFunction;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.ml.common.MLEnvironmentFactory;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.TableSchema;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.types.Row;
+
+import org.junit.Assert;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.ExpectedException;
+
+/**
+ * Unit Test for DataStreamConversionUtil.
+ */
+public class DataStreamConversionUtilTest {
+   @Rule
+   public ExpectedException thrown = ExpectedException.none();
+
+   @Test
+   public void test() throws Exception {
+   StreamExecutionEnvironment env = 
MLEnvironmentFactory.getDefault().getStreamExecutionEnvironment();
+
+   DataStream<Row> input = env.fromElements(Row.of("a"));
+
+   Table table1 = 
DataStreamConversionUtil.toTable(MLEnvironmentFactory.DEFAULT_ML_ENVIRONMENT_ID,
 input, new String[]{"word"});
+   Assert.assertEquals(
+   new TableSchema(new String[]{"word"}, new 
TypeInformation[]{TypeInformation.of(String.class)}),
+   table1.getSchema()
+   );
+
+   input = input.map(new GenericTypeMap());
+
+   Table table2 = DataStreamConversionUtil.toTable(
+   MLEnvironmentFactory.DEFAULT_ML_ENVIRONMENT_ID,
+   input,
+   new String[]{"word"},
+   new TypeInformation[]{TypeInformation.of(Integer.class)}
 
 Review comment:
   This is intentional: it is simply a different type from the original one, used 
to make the following assert explicit. It throws no exception because this code 
path is never executed.
   
   




[GitHub] [flink] flinkbot edited a comment on issue #10740: [FLINK-15432][jdbc] Added flushInterval support for JDBCOutputFormat

2020-01-01 Thread GitBox
flinkbot edited a comment on issue #10740: [FLINK-15432][jdbc] Added 
flushInterval support for JDBCOutputFormat
URL: https://github.com/apache/flink/pull/10740#issuecomment-570113597
 
 
   
   ## CI report:
   
   * 0d69d547cbdb24d74d0a43653bdc43e7ee0c9c4e Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142833898) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4033)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10732: [FLINK-14980][docs] add function ddl docs

2020-01-01 Thread GitBox
flinkbot edited a comment on issue #10732: [FLINK-14980][docs] add function ddl 
docs
URL: https://github.com/apache/flink/pull/10732#issuecomment-569896244
 
 
   
   ## CI report:
   
   * 6fd808adc772aa42c3ed9968bbd9593c28940539 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142732194) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4014)
 
   * b5c952a2c63c55776a601d3ddf9d5b703784949e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10731: [FLINK-15443][jdbc] Fix mismatch between java float and jdbc float

2020-01-01 Thread GitBox
flinkbot edited a comment on issue #10731: [FLINK-15443][jdbc] Fix mismatch 
between java float and jdbc float
URL: https://github.com/apache/flink/pull/10731#issuecomment-569887118
 
 
   
   ## CI report:
   
   * 6b0c6b1481b1b022f91a6318d77f32d0632eb1b3 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142728836) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4009)
 
   * f17564fef3d1fcd3277e59be69ec74436757d6f8 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142829621) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4025)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10736: [FLINK-15010][Network] Add shutdown hook to ensure cleanup netty shuffle directories

2020-01-01 Thread GitBox
flinkbot edited a comment on issue #10736: [FLINK-15010][Network] Add shutdown 
hook to ensure cleanup netty shuffle directories
URL: https://github.com/apache/flink/pull/10736#issuecomment-570066854
 
 
   
   ## CI report:
   
   * 4b605068e32d3eb15a51f838ed29918d1224959a Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142804653) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4021)
 
   * 9430066683a67318f9685de8a58904972c5dbaca Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142829633) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4026)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10727: [FLINK-15420][table-planner-blink] Cast string to timestamp will loos…

2020-01-01 Thread GitBox
flinkbot edited a comment on issue #10727: [FLINK-15420][table-planner-blink] 
Cast string to timestamp will loos…
URL: https://github.com/apache/flink/pull/10727#issuecomment-569861765
 
 
   
   ## CI report:
   
   * 9bbb2830a6e6e185ae6a9d4a8d3e2b99c7648d9c Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142717413) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3998)
 
   * 2068b01ca802c8c3a9b267aa951a14e2c55692a4 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142727413) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4008)
 
   * f587800f4516dd046a102bcdd4f9e2a70948d6c7 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142829611) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4024)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] qiuxiafei commented on a change in pull request #9373: [FLINK-13596][ml] Add two utils for Table transformations

2020-01-01 Thread GitBox
qiuxiafei commented on a change in pull request #9373: [FLINK-13596][ml] Add 
two utils for Table transformations
URL: https://github.com/apache/flink/pull/9373#discussion_r362369069
 
 

 ##
 File path: 
flink-ml-parent/flink-ml-lib/src/test/java/org/apache/flink/ml/common/utils/DataStreamConversionUtilTest.java
 ##
 @@ -0,0 +1,101 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.flink.ml.common.utils;
+
+import org.apache.flink.api.common.functions.MapFunction;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.ml.common.MLEnvironmentFactory;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.TableSchema;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.types.Row;
+
+import org.junit.Assert;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.ExpectedException;
+
+/**
+ * Unit Test for DataStreamConversionUtil.
+ */
+public class DataStreamConversionUtilTest {
+   @Rule
+   public ExpectedException thrown = ExpectedException.none();
+
+   @Test
+   public void test() throws Exception {
+   StreamExecutionEnvironment env = 
MLEnvironmentFactory.getDefault().getStreamExecutionEnvironment();
+
+   DataStream<Row> input = env.fromElements(Row.of("a"));
+
+   Table table1 = 
DataStreamConversionUtil.toTable(MLEnvironmentFactory.DEFAULT_ML_ENVIRONMENT_ID,
 input, new String[]{"word"});
+   Assert.assertEquals(
+   new TableSchema(new String[]{"word"}, new 
TypeInformation[]{TypeInformation.of(String.class)}),
+   table1.getSchema()
+   );
+
+   input = input.map(new GenericTypeMap());
 
 Review comment:
   Yes, this removes the type information from the original DataStream.




[GitHub] [flink] qiuxiafei commented on a change in pull request #9373: [FLINK-13596][ml] Add two utils for Table transformations

2020-01-01 Thread GitBox
qiuxiafei commented on a change in pull request #9373: [FLINK-13596][ml] Add 
two utils for Table transformations
URL: https://github.com/apache/flink/pull/9373#discussion_r362369002
 
 

 ##
 File path: 
flink-ml-parent/flink-ml-lib/src/main/java/org/apache/flink/ml/common/utils/DataSetConversionUtil.java
 ##
 @@ -0,0 +1,172 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.flink.ml.common.utils;
+
+import org.apache.flink.api.common.functions.MapFunction;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.java.DataSet;
+import org.apache.flink.api.java.operators.SingleInputUdfOperator;
+import org.apache.flink.api.java.operators.TwoInputUdfOperator;
+import org.apache.flink.api.java.typeutils.RowTypeInfo;
+import org.apache.flink.ml.common.MLEnvironment;
+import org.apache.flink.ml.common.MLEnvironmentFactory;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.TableSchema;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.types.Row;
+
+/**
+ * Provide functions of conversions between DataSet and Table.
+ */
+public class DataSetConversionUtil {
+   /**
+* Convert the given Table to {@link DataSet}<{@link Row}>.
+*
+* @param sessionId the sessionId of {@link MLEnvironmentFactory}
+* @param table the Table to convert.
+* @return the converted DataSet.
+*/
+   public static DataSet<Row> fromTable(Long sessionId, Table table) {
+   return MLEnvironmentFactory
+   .get(sessionId)
+   .getBatchTableEnvironment()
+   .toDataSet(table, Row.class);
+   }
+
+   /**
+* Convert the given DataSet into a Table with specified TableSchema.
+*
+* @param sessionId the sessionId of {@link MLEnvironmentFactory}
+* @param data   the DataSet to convert.
+* @param schema the specified TableSchema.
+* @return the converted Table.
+*/
+   public static Table toTable(Long sessionId, DataSet<Row> data, TableSchema schema) {
+   return toTable(sessionId, data, schema.getFieldNames(), 
schema.getFieldTypes());
+   }
+
+   /**
+* Convert the given DataSet into a Table with specified colNames and 
colTypes.
+*
+* @param sessionId sessionId the sessionId of {@link 
MLEnvironmentFactory}.
+* @param data the DataSet to convert.
+* @param colNames the specified colNames.
+* @param colTypes the specified colTypes. This variable is used only 
when the
+* DataSet is produced by a function and Flink cannot 
determine
+* automatically what the produced type is.
+* @return the converted Table.
+*/
+   public static Table toTable(Long sessionId, DataSet<Row> data, String[] colNames, TypeInformation<?>[] colTypes) {
+   return toTable(MLEnvironmentFactory.get(sessionId), data, 
colNames, colTypes);
+   }
+
+   /**
+* Convert the given DataSet into a Table with specified colNames.
+*
+* @param sessionId sessionId the sessionId of {@link 
MLEnvironmentFactory}.
+* @param data the DataSet to convert.
+* @param colNames the specified colNames.
+* @return the converted Table.
+*/
+   public static Table toTable(Long sessionId, DataSet<Row> data, String[] colNames) {
+   return toTable(MLEnvironmentFactory.get(sessionId), data, 
colNames);
+   }
+
+   /**
+* Convert the given DataSet into a Table with specified colNames and 
colTypes.
+*
+* @param session the MLEnvironment using to convert DataSet to Table.
+* @param data the DataSet to convert.
+* @param colNames the specified colNames.
+* @param colTypes the specified colTypes. This variable is used only 
when the
+* DataSet is produced by a function and Flink cannot 
determine
+* automatically what the produced type is.
+* @return the converted Table.
+*/
+   public 

[GitHub] [flink] flinkbot edited a comment on issue #10694: [FLINK-15381] [table-planner-blink] correct collation derive logic on RelSubset in RelMdCollation

2020-01-01 Thread GitBox
flinkbot edited a comment on issue #10694: [FLINK-15381] [table-planner-blink] 
correct collation derive logic on RelSubset in RelMdCollation
URL: https://github.com/apache/flink/pull/10694#issuecomment-569001563
 
 
   
   ## CI report:
   
   * e9b4471c51446e144d7071480515be858959a352 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142355379) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3925)
 
   * a37a0f82b4661edbaff69f873ef7c9a4cd460ed9 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142833888) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4032)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10625: [FLINK-15259][hive] HiveInspector.toInspectors() should convert Flink…

2020-01-01 Thread GitBox
flinkbot edited a comment on issue #10625: [FLINK-15259][hive] 
HiveInspector.toInspectors() should convert Flink…
URL: https://github.com/apache/flink/pull/10625#issuecomment-567374884
 
 
   
   ## CI report:
   
   * b200938f6abf4bac5e929db791c52289d110054e Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/141714552) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3751)
 
   * de8b409c144285a3d75874e4863a2d1f7fc4336a Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/141863465) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3797)
 
   * d021a016adf64118b785642ffbe1fbe45cb9e15a Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142456762) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3957)
 
   * c95faa623e5e0ea94dcb82df536fbe37e724baac Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142639483) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3983)
 
   * b5f020618f6d894a816de3fc7c0c53af295e7ce2 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142742105) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4018)
 
   * bd3827ead5cdff406f0f0cceac56c7ff3c591be6 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142833872) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4031)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Commented] (FLINK-15381) INSERT INTO VALUES statement fails if a cast project is applied

2020-01-01 Thread godfrey he (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17006579#comment-17006579
 ] 

godfrey he commented on FLINK-15381:


{{RelMdCollation}} will derive collation for `Values`. The derivation result of 
`Values ((1, 9, true, 2), (2, 6, false, 3), (3, 3, true, 4))` is `(0, 1, 2, 3)` 
and `(3)`
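As a rough illustration of the derivation described in this comment, a naive per-column monotonicity check over the literal rows can be sketched in plain Python (a hypothetical helper for intuition only; Calcite's actual {{RelMdCollation}} logic for `Values` is more elaborate):

```python
def monotonic_columns(rows):
    """Report which columns of a literal VALUES row set are sorted.

    Naive stand-in for the collation derivation discussed above:
    a column whose values never decrease (or never increase) across
    the rows admits an ascending (or descending) collation.
    """
    ascending, descending = [], []
    for i, col in enumerate(zip(*rows)):
        if all(a <= b for a, b in zip(col, col[1:])):
            ascending.append(i)
        if all(a >= b for a, b in zip(col, col[1:])):
            descending.append(i)
    return ascending, descending

rows = [(1, 9, True, 2), (2, 6, False, 3), (3, 3, True, 4)]
# Columns 0 and 3 are ascending, column 1 is descending.
```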

> INSERT INTO VALUES statement fails if a cast project is applied
> ---
>
> Key: FLINK-15381
> URL: https://issues.apache.org/jira/browse/FLINK-15381
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Jark Wu
>Assignee: godfrey he
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.10.0
>
> Attachments: image-2019-12-26-14-56-00-634.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The following query will fail:
> {code:scala}
>   @Test
>   def test(): Unit = {
> val sinkDDL =
>   """
> |create table t2(
> |  a int,
> |  b string
> |) with (
> |  'connector' = 'COLLECTION'
> |)
>   """.stripMargin
> val query =
>   """
> |insert into t2 select cast(a as int), cast(b as varchar) from 
> (values (3, 'c')) T(a,b)
>   """.stripMargin
> tableEnv.sqlUpdate(sinkDDL)
> tableEnv.sqlUpdate(query)
> execJob("testJob")
>   }
> {code}
> exception:
> {code}
> org.apache.flink.table.api.TableException: Cannot generate a valid execution 
> plan for the given query: 
> LogicalSink(name=[`default_catalog`.`default_database`.`t2`], fields=[a, b])
> +- LogicalProject(EXPR$0=[$0], EXPR$1=[CAST($1):VARCHAR(2147483647) CHARACTER 
> SET "UTF-16LE" NOT NULL])
>+- LogicalValues(type=[RecordType(INTEGER a, CHAR(1) b)], tuples=[[{ 3, 
> _UTF-16LE'c' }]])
> This exception indicates that the query uses an unsupported SQL feature.
> Please check the documentation for the set of currently supported SQL 
> features.
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-15381) INSERT INTO VALUES statement fails if a cast project is applied

2020-01-01 Thread godfrey he (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17006579#comment-17006579
 ] 

godfrey he edited comment on FLINK-15381 at 1/2/20 4:21 AM:


{{RelMdCollation}} will derive collation for `Values`. The derivation result of 
{{Values ((1, 9, true, 2), (2, 6, false, 3), (3, 3, true, 4))}} is {{(0, 1, 2, 
3)}} and {{(3)}}


was (Author: godfreyhe):
{{RelMdCollation}} will derive collation for `Values`. The derivation result of 
`Values ((1, 9, true, 2), (2, 6, false, 3), (3, 3, true, 4))` is `(0, 1, 2, 3)` 
and `(3)`






[GitHub] [flink] wuchong commented on a change in pull request #10738: [FLINK-15385][table][docs] Translate SQL section of Table API into Chinese

2020-01-01 Thread GitBox
wuchong commented on a change in pull request #10738: 
[FLINK-15385][table][docs] Translate SQL section of Table API into Chinese
URL: https://github.com/apache/flink/pull/10738#discussion_r362367671
 
 

 ##
 File path: docs/dev/table/sql/queries.zh.md
 ##
 @@ -1116,68 +,68 @@ The start and end timestamps of group windows as well 
as time attributes can be
 HOP_START(time_attr, interval, interval)
 SESSION_START(time_attr, interval)
   
-  Returns the timestamp of the inclusive lower bound of the 
corresponding tumbling, hopping, or session window.
+  返回相对应的滚动,滑动和会话窗口范围内的下界时间戳。
 
 
   
 TUMBLE_END(time_attr, interval)
 HOP_END(time_attr, interval, interval)
 SESSION_END(time_attr, interval)
   
-  Returns the timestamp of the exclusive upper bound of the 
corresponding tumbling, hopping, or session window.
-Note: The exclusive upper bound timestamp cannot be 
used as a rowtime attribute in 
subsequent time-based operations, such as time-windowed 
joins and group window or over window 
aggregations.
+  返回相对应的滚动,滑动和会话窗口范围以外的上界时间戳。
+注意: 范围以外的上界时间戳不可以 在随后基于时间的操作中,作为 行时间属性 使用,比如 基于时间窗口的 join  以及 分组窗口或分组窗口上的聚合。
 
 
   
 TUMBLE_ROWTIME(time_attr, interval)
 HOP_ROWTIME(time_attr, interval, interval)
 SESSION_ROWTIME(time_attr, interval)
   
-  Returns the timestamp of the inclusive upper bound of the 
corresponding tumbling, hopping, or session window.
-  The resulting attribute is a rowtime attribute that can be 
used in subsequent time-based operations such as time-windowed 
joins and group window or over window 
aggregations.
+  返回相对应的滚动,滑动和会话窗口范围以内的上界时间戳。
 
 Review comment:
   ```suggestion
 返回相对应的滚动、滑动和会话窗口范围以内的上界时间戳。
   ```




[GitHub] [flink] wuchong commented on a change in pull request #10738: [FLINK-15385][table][docs] Translate SQL section of Table API into Chinese

2020-01-01 Thread GitBox
wuchong commented on a change in pull request #10738: 
[FLINK-15385][table][docs] Translate SQL section of Table API into Chinese
URL: https://github.com/apache/flink/pull/10738#discussion_r362364060
 
 

 ##
 File path: docs/dev/table/sql/create.zh.md
 ##
 @@ -141,44 +141,44 @@ CREATE TABLE [catalog_name.][db_name.]table_name
 
 {% endhighlight %}
 
-Creates a table with the given name. If a table with the same name already 
exists in the catalog, an exception is thrown.
+根据给定的表属性创建表。若数据库中已存在同名表会抛出异常。
 
 **COMPUTED COLUMN**
 
-A computed column is a virtual column that is generated using the syntax  
"`column_name AS computed_column_expression`". It is generated from a non-query 
expression that uses other columns in the same table and is not physically 
stored within the table. For example, a computed column could be defined as 
`cost AS price * quantity`. The expression may contain any combination of 
physical column, constant, function, or variable. The expression cannot contain 
a subquery.
+计算列是一个使用 “`column_name AS computed_column_expression`” 
语法生成的虚拟列。它由使用同一表中其他列的非查询表达式生成,并且不会在表中进行物理存储。例如,一个计算列可以使用 `cost AS price * 
quantity` 进行定义,这个表达式可以包含物理列、常量、函数或变量的任意组合,但这个表达式不能存在任何子查询。
 
-Computed columns are commonly used in Flink for defining [time attributes]({{ 
site.baseurl}}/dev/table/streaming/time_attributes.html) in CREATE TABLE 
statements.
-A [processing time attribute]({{ 
site.baseurl}}/dev/table/streaming/time_attributes.html#processing-time) can be 
defined easily via `proc AS PROCTIME()` using the system `PROCTIME()` function.
-On the other hand, computed column can be used to derive event time column 
because an event time column may need to be derived from existing fields, e.g. 
the original field is not `TIMESTAMP(3)` type or is nested in a JSON string.
+在 Flink 中计算列一般用于为 CREATE TABLE 语句定义 [时间属性]({{ 
site.baseurl}}/zh/dev/table/streaming/time_attributes.html)。
+[处理时间属性]({{ 
site.baseurl}}/zh/dev/table/streaming/time_attributes.html#processing-time) 
可以简单地通过使用了系统函数 `PROCTIME()` 的 `proc AS PROCTIME()` 语句进行定义。
+另一方面,由于事件时间列可能需要从现有的字段中获得,因此计算列可用于获得事件时间列。例如,原始字段的类型不是 `TIMESTAMP(3)` 或嵌套在 
JSON 字符串中。
 
-Notes:
+注意:
 
-- A computed column defined on a source table is computed after reading from 
the source, it can be used in the following SELECT query statements.
-- A computed column cannot be the target of an INSERT statement. In INSERT 
statements, the schema of SELECT clause should match the schema of the target 
table without computed columns.
+- 定义在一个数据源表( source table )上的计算列会在从数据源读取数据后被计算,它们可以在 SELECT 查询语句后使用。
 
 Review comment:
   ```suggestion
   - 定义在一个数据源表( source table )上的计算列会在从数据源读取数据后被计算,它们可以在 SELECT 查询语句中使用。
   ```




[GitHub] [flink] wuchong commented on a change in pull request #10738: [FLINK-15385][table][docs] Translate SQL section of Table API into Chinese

2020-01-01 Thread GitBox
wuchong commented on a change in pull request #10738: 
[FLINK-15385][table][docs] Translate SQL section of Table API into Chinese
URL: https://github.com/apache/flink/pull/10738#discussion_r362367270
 
 

 ##
 File path: docs/dev/table/sql/queries.zh.md
 ##
 @@ -925,26 +923,26 @@ val result1 = tableEnv.sqlQuery(
 
 
 
- No Ranking Output Optimization
+ 不进行排序输出的优化
 
-As described above, the `rownum` field will be written into the result table 
as one field of the unique key, which may lead to a lot of records being 
written to the result table. For example, when the record (say `product-1001`) 
of ranking 9 is updated and its rank is upgraded to 1, all the records from 
ranking 1 ~ 9 will be output to the result table as update messages. If the 
result table receives too many data, it will become the bottleneck of the SQL 
job.
+如上文所描述,`rownum` 字段会作为唯一键的其中一个字段写到结果表里面,这会导致大量的结构写出到结果表。比如,当原始结果(名为 
`product-1001` )从排序第九变化为排序第一时,排名 1-9 的所有结果都会以更新消息的形式发送到结果表。若结果表收到太多的数据,将会成为 SQL 
任务的瓶颈。
 
 Review comment:
   ```suggestion
   如上文所描述,`rownum` 字段会作为唯一键的其中一个字段写到结果表里面,这会导致大量的结果写出到结果表。比如,当原始结果(名为 
`product-1001` )从排序第九变化为排序第一时,排名 1-9 的所有结果都会以更新消息的形式发送到结果表。若结果表收到太多的数据,将会成为 SQL 
任务的瓶颈。
   ```




[GitHub] [flink] wuchong commented on a change in pull request #10738: [FLINK-15385][table][docs] Translate SQL section of Table API into Chinese

2020-01-01 Thread GitBox
wuchong commented on a change in pull request #10738: 
[FLINK-15385][table][docs] Translate SQL section of Table API into Chinese
URL: https://github.com/apache/flink/pull/10738#discussion_r362367142
 
 

 ##
 File path: docs/dev/table/sql/queries.zh.md
 ##
 @@ -863,33 +861,33 @@ FROM (
 WHERE rownum <= N [AND conditions]
 {% endhighlight %}
 
-**Parameter Specification:**
+**参数说明:**
 
-- `ROW_NUMBER()`: Assigns an unique, sequential number to each row, starting 
with one, according to the ordering of rows within the partition. Currently, we 
only support `ROW_NUMBER` as the over window function. In the future, we will 
support `RANK()` and `DENSE_RANK()`.
-- `PARTITION BY col1[, col2...]`: Specifies the partition columns. Each 
partition will have a Top-N result.
-- `ORDER BY col1 [asc|desc][, col2 [asc|desc]...]`: Specifies the ordering 
columns. The ordering directions can be different on different columns.
-- `WHERE rownum <= N`: The `rownum <= N` is required for Flink to recognize 
this query is a Top-N query. The N represents the N smallest or largest records 
will be retained.
-- `[AND conditions]`: It is free to add other conditions in the where clause, 
but the other conditions can only be combined with `rownum <= N` using `AND` 
conjunction.
+- `ROW_NUMBER()`: 根据当前分区内的各行的顺序从第一行开始,依次为每一行分配一个唯一且连续的号码。目前,我们只支持 `ROW_NUMBER` 
在 over 窗口函数中使用。未来将会支持 `RANK()` 和 `DENSE_RANK()`函数。
+- `PARTITION BY col1[, col2...]`: 指定分区列,每个分区都将会有一个 Top-N 结果。
+- `ORDER BY col1 [asc|desc][, col2 [asc|desc]...]`: 指定排序列,不同列的排序方式可以不一样。
 
 Review comment:
   ```suggestion
   - `ORDER BY col1 [asc|desc][, col2 [asc|desc]...]`: 指定排序列,不同列的排序方向可以不一样。
   ```
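The Top-N semantics spelled out in the parameter specification above (partition, order, keep rows with `rownum <= N`) can be mimicked with a short plain-Python sketch — a hypothetical batch-style illustration, not Flink's incremental streaming implementation:

```python
from itertools import groupby

def top_n(rows, partition_key, order_key, n, descending=True):
    """Per partition, rank rows by order_key and keep rank <= n,
    mirroring the ROW_NUMBER() ... WHERE rownum <= N pattern above
    (batch-style; streaming Top-N instead updates results incrementally)."""
    out = []
    for _, group in groupby(sorted(rows, key=partition_key), key=partition_key):
        ranked = sorted(group, key=order_key, reverse=descending)
        out.extend(ranked[:n])  # rownum <= N
    return out
```

For example, keeping the two largest values per key of `[("a", 3), ("a", 1), ("b", 5), ("a", 2)]` yields `[("a", 3), ("a", 2), ("b", 5)]`.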




[GitHub] [flink] wuchong commented on a change in pull request #10738: [FLINK-15385][table][docs] Translate SQL section of Table API into Chinese

2020-01-01 Thread GitBox
wuchong commented on a change in pull request #10738: 
[FLINK-15385][table][docs] Translate SQL section of Table API into Chinese
URL: https://github.com/apache/flink/pull/10738#discussion_r362367954
 
 

 ##
 File path: docs/dev/table/sql/queries.zh.md
 ##
 @@ -587,61 +587,61 @@ WHERE o.id = s.orderId AND
   o.ordertime BETWEEN s.shiptime - INTERVAL '4' HOUR AND s.shiptime
 {% endhighlight %}
 
-The example above will join all orders with their corresponding shipments if 
the order was shipped four hours after the order was received.
+以上示例中,所有在收到后四小时内发货的 order 会与他们相关的 shipment 进行 join。
   
 
 
-   
+  
 Expanding arrays into a relation
-Batch Streaming
+批处理 流处理
   
-   
-Unnesting WITH ORDINALITY is not supported yet.
+  
+目前尚未支持非嵌套的 WITH ORDINALITY 。
 {% highlight sql %}
 SELECT users, tag
 FROM Orders CROSS JOIN UNNEST(tags) AS t (tag)
 {% endhighlight %}
   
 
 
-   
-Join with Table Function (UDTF)
-Batch Streaming
+  
+Join 表函数 (UDTF)
+批处理 流处理
   
-   
-Joins a table with the results of a table function. Each row of the 
left (outer) table is joined with all rows produced by the corresponding call 
of the table function.
-User-defined table functions (UDTFs) must be registered before. See 
the UDF 
documentation for details on how to specify and register UDTFs. 
+  
+将表与表函数的结果进行 join 操作。左表(outer)中的每一行将会与调用表函数所产生的所有结果中相关联行进行 join 。
+用户自定义表函数( User-defined table functions,UDTFs ) 在执行前必须先注册。请参考 UDF 文档 
以获取更多关于指定和注册UDF的信息 
 
 Inner Join
-A row of the left (outer) table is dropped, if its table function 
call returns an empty result.
+若表函数返回了空结果,左表(outer)的行将会被删除。
 {% highlight sql %}
 SELECT users, tag
 FROM Orders, LATERAL TABLE(unnest_udtf(tags)) t AS tag
 {% endhighlight %}
 
 Left Outer Join
-If a table function call returns an empty result, the corresponding 
outer row is preserved and the result padded with null values.
+若表函数返回了空结果,将会保留相对应的外部行并用空值填充结果。
 {% highlight sql %}
 SELECT users, tag
 FROM Orders LEFT JOIN LATERAL TABLE(unnest_udtf(tags)) t AS tag ON TRUE
 {% endhighlight %}
 
-Note: Currently, only literal TRUE is supported 
as predicate for a left outer join against a lateral table.
+注意: 当前仅支持文本常量 TRUE 作为针对横向表的左外部联接的谓词。
   
 
 
   
-Join with Temporal Table Function
-Streaming
+Join Temporal Tables 函数
 
 Review comment:
   ```suggestion
   Join Temporal Table Function
   ```
   
   Let's leave this part untranslated; please adjust the corresponding wording in the description as well.




[GitHub] [flink] wuchong commented on a change in pull request #10738: [FLINK-15385][table][docs] Translate SQL section of Table API into Chinese

2020-01-01 Thread GitBox
wuchong commented on a change in pull request #10738: 
[FLINK-15385][table][docs] Translate SQL section of Table API into Chinese
URL: https://github.com/apache/flink/pull/10738#discussion_r362366207
 
 

 ##
 File path: docs/dev/table/sql/queries.zh.md
 ##
 @@ -106,7 +106,7 @@ tableEnv.connect(new FileSystem("/path/to/file"))
 .withSchema(schema)
 .createTemporaryTable("RubberOrders")
 
-// run a SQL update query on the Table and emit the result to the TableSink
+// 在表上执行 SQL 更新操纵,并把结果发出到 TableSink
 
 Review comment:
   ```suggestion
   // 在表上执行 SQL 更新操作,并把结果发出到 TableSink
   ```




[GitHub] [flink] wuchong commented on a change in pull request #10738: [FLINK-15385][table][docs] Translate SQL section of Table API into Chinese

2020-01-01 Thread GitBox
wuchong commented on a change in pull request #10738: 
[FLINK-15385][table][docs] Translate SQL section of Table API into Chinese
URL: https://github.com/apache/flink/pull/10738#discussion_r362363880
 
 

 ##
 File path: docs/dev/table/sql/create.zh.md
 ##
 @@ -141,44 +141,44 @@ CREATE TABLE [catalog_name.][db_name.]table_name
 
 {% endhighlight %}
 
-Creates a table with the given name. If a table with the same name already 
exists in the catalog, an exception is thrown.
+根据给定的表属性创建表。若数据库中已存在同名表会抛出异常。
 
 Review comment:
   根据指定的表名创建一个表,如果同名表已经在 catalog 中存在了,则无法注册。




[GitHub] [flink] wuchong commented on a change in pull request #10738: [FLINK-15385][table][docs] Translate SQL section of Table API into Chinese

2020-01-01 Thread GitBox
wuchong commented on a change in pull request #10738: 
[FLINK-15385][table][docs] Translate SQL section of Table API into Chinese
URL: https://github.com/apache/flink/pull/10738#discussion_r362367444
 
 

 ##
 File path: docs/dev/table/sql/queries.zh.md
 ##
 @@ -1236,25 +1231,25 @@ val result4 = tableEnv.sqlQuery(
 
 {% top %}
 
-### Pattern Recognition
+## 模式匹配
 
 Review comment:
   ```suggestion
   ### 模式匹配
   ```




[GitHub] [flink] wuchong commented on a change in pull request #10738: [FLINK-15385][table][docs] Translate SQL section of Table API into Chinese

2020-01-01 Thread GitBox
wuchong commented on a change in pull request #10738: 
[FLINK-15385][table][docs] Translate SQL section of Table API into Chinese
URL: https://github.com/apache/flink/pull/10738#discussion_r362364476
 
 

 ##
 File path: docs/dev/table/sql/create.zh.md
 ##
 @@ -141,44 +141,44 @@ CREATE TABLE [catalog_name.][db_name.]table_name
 
 {% endhighlight %}
 
-Creates a table with the given name. If a table with the same name already 
exists in the catalog, an exception is thrown.
+根据给定的表属性创建表。若数据库中已存在同名表会抛出异常。
 
 **COMPUTED COLUMN**
 
-A computed column is a virtual column that is generated using the syntax  
"`column_name AS computed_column_expression`". It is generated from a non-query 
expression that uses other columns in the same table and is not physically 
stored within the table. For example, a computed column could be defined as 
`cost AS price * quantity`. The expression may contain any combination of 
physical column, constant, function, or variable. The expression cannot contain 
a subquery.
+计算列是一个使用 “`column_name AS computed_column_expression`” 
语法生成的虚拟列。它由使用同一表中其他列的非查询表达式生成,并且不会在表中进行物理存储。例如,一个计算列可以使用 `cost AS price * 
quantity` 进行定义,这个表达式可以包含物理列、常量、函数或变量的任意组合,但这个表达式不能存在任何子查询。
 
-Computed columns are commonly used in Flink for defining [time attributes]({{ 
site.baseurl}}/dev/table/streaming/time_attributes.html) in CREATE TABLE 
statements.
-A [processing time attribute]({{ 
site.baseurl}}/dev/table/streaming/time_attributes.html#processing-time) can be 
defined easily via `proc AS PROCTIME()` using the system `PROCTIME()` function.
-On the other hand, computed column can be used to derive event time column 
because an event time column may need to be derived from existing fields, e.g. 
the original field is not `TIMESTAMP(3)` type or is nested in a JSON string.
+在 Flink 中计算列一般用于为 CREATE TABLE 语句定义 [时间属性]({{ 
site.baseurl}}/zh/dev/table/streaming/time_attributes.html)。
+[处理时间属性]({{ 
site.baseurl}}/zh/dev/table/streaming/time_attributes.html#processing-time) 
可以简单地通过使用了系统函数 `PROCTIME()` 的 `proc AS PROCTIME()` 语句进行定义。
+另一方面,由于事件时间列可能需要从现有的字段中获得,因此计算列可用于获得事件时间列。例如,原始字段的类型不是 `TIMESTAMP(3)` 或嵌套在 
JSON 字符串中。
 
-Notes:
+注意:
 
-- A computed column defined on a source table is computed after reading from 
the source, it can be used in the following SELECT query statements.
-- A computed column cannot be the target of an INSERT statement. In INSERT 
statements, the schema of SELECT clause should match the schema of the target 
table without computed columns.
+- 定义在一个数据源表( source table )上的计算列会在从数据源读取数据后被计算,它们可以在 SELECT 查询语句后使用。
+- 计算列不可以作为 INSERT 语句的目标,在 INSERT 语句中,SELECT 语句的结构( schema )需要与目标表不带有计算列的结构一致。
 
 **WATERMARK**
 
-The `WATERMARK` defines the event time attributes of a table and takes the 
form `WATERMARK FOR rowtime_column_name  AS watermark_strategy_expression`.
+`WATERMARK` 定义了表的事件时间属性,其形式为 `WATERMARK FOR rowtime_column_name  AS 
watermark_strategy_expression` 。
 
-The  `rowtime_column_name` defines an existing column that is marked as the 
event time attribute of the table. The column must be of type `TIMESTAMP(3)` 
and be a top-level column in the schema. It may be a computed column.
+`rowtime_column_name` 把一个现有的列定义为一个为表标记事件时间的属性。该列的类型必须为 
`TIMESTAMP(3)`,且是结构中最高级别的列,它也可以是一个计算列。
 
-The `watermark_strategy_expression` defines the watermark generation strategy. 
It allows arbitrary non-query expression, including computed columns, to 
calculate the watermark. The expression return type must be TIMESTAMP(3), which 
represents the timestamp since the Epoch.
+`watermark_strategy_expression` 定义了 watermark 的生成策略。它允许使用包括计算列在内的任意非查询表达式来计算 
watermark ;表达式的返回类型必须是表示了从起始点( Epoch )以来的时间戳的 TIMESTAMP(3) 。
 
-When using event time semantics, tables must contain an event time attribute 
and watermarking strategy.
+使用事件时间语义时,表必须包含事件时间属性和 watermark 策略。
 
-Flink provides several commonly used watermark strategies.
+Flink 提供了几种常用的 watermark 策略。
 
-- Strictly ascending timestamps: `WATERMARK FOR rowtime_column AS 
rowtime_column`.
+- 严格递增时间戳: `WATERMARK FOR rowtime_column AS rowtime_column`。
 
-  Emits a watermark of the maximum observed timestamp so far. Rows that have a 
timestamp smaller to the max timestamp are not late.
+  发出到目前为止已观察到的最大时间戳的 watermark ,时间戳小于最大时间戳的行不会迟到。
 
-- Ascending timestamps: `WATERMARK FOR rowtime_column AS rowtime_column - 
INTERVAL '0.001' SECOND`.
+- 递增时间戳: `WATERMARK FOR rowtime_column AS rowtime_column - INTERVAL '0.001' 
SECOND`。
 
-  Emits a watermark of the maximum observed timestamp so far minus 1. Rows 
that have a timestamp equal to the max timestamp are not late.
+  发出到目前为止已观察到的最大时间戳减 1 的 watermark ,时间戳等于最大时间戳的行不会迟到。
 
 Review comment:
   ```suggestion
 发出到目前为止已观察到的最大时间戳减 1 的 watermark ,时间戳等于或小于最大时间戳的行被认为没有迟到。
   ```
   
   The English source is missing `equal and smaller to the max..`; please help fix the original English text as well.
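The difference between the two watermark strategies quoted above can be illustrated with a small plain-Python model (hypothetical, not the Flink runtime; it assumes a watermark is emitted after each row and that a row counts as late when its timestamp is not greater than the current watermark):

```python
def lateness(timestamps, offset=0):
    """Classify rows as late under a max-timestamp watermark.

    offset=0 models the strictly-ascending strategy (w = max seen);
    offset=1 models the ascending strategy (w = max seen - 1 unit).
    """
    watermark = float("-inf")
    max_ts = float("-inf")
    result = []
    for ts in timestamps:
        result.append((ts, ts <= watermark))  # late if not above watermark
        max_ts = max(max_ts, ts)
        watermark = max_ts - offset
    return result

# With a repeated timestamp in the input, only the strictly-ascending
# strategy (offset=0) flags the duplicate as late; the minus-1 strategy
# tolerates rows equal to the max timestamp.
```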



[GitHub] [flink] wuchong commented on a change in pull request #10738: [FLINK-15385][table][docs] Translate SQL section of Table API into Chinese

2020-01-01 Thread GitBox
wuchong commented on a change in pull request #10738: 
[FLINK-15385][table][docs] Translate SQL section of Table API into Chinese
URL: https://github.com/apache/flink/pull/10738#discussion_r362367667
 
 

 ##
 File path: docs/dev/table/sql/queries.zh.md
 ##
 @@ -1116,68 +,68 @@ The start and end timestamps of group windows as well 
as time attributes can be
 HOP_START(time_attr, interval, interval)
 SESSION_START(time_attr, interval)
   
-  Returns the timestamp of the inclusive lower bound of the 
corresponding tumbling, hopping, or session window.
+  返回相对应的滚动,滑动和会话窗口范围内的下界时间戳。
 
 
   
 TUMBLE_END(time_attr, interval)
 HOP_END(time_attr, interval, interval)
 SESSION_END(time_attr, interval)
   
-  Returns the timestamp of the exclusive upper bound of the 
corresponding tumbling, hopping, or session window.
-Note: The exclusive upper bound timestamp cannot be 
used as a rowtime attribute in 
subsequent time-based operations, such as time-windowed 
joins and group window or over window 
aggregations.
+  返回相对应的滚动,滑动和会话窗口范围以外的上界时间戳。
 
 Review comment:
   ```suggestion
 返回相对应的滚动、滑动和会话窗口范围以外的上界时间戳。
   ```




[GitHub] [flink] wuchong commented on a change in pull request #10738: [FLINK-15385][table][docs] Translate SQL section of Table API into Chinese

2020-01-01 Thread GitBox
wuchong commented on a change in pull request #10738: 
[FLINK-15385][table][docs] Translate SQL section of Table API into Chinese
URL: https://github.com/apache/flink/pull/10738#discussion_r362367656
 
 

 ##
 File path: docs/dev/table/sql/queries.zh.md
 ##
 @@ -1116,68 +,68 @@ The start and end timestamps of group windows as well 
as time attributes can be
 HOP_START(time_attr, interval, interval)
 SESSION_START(time_attr, interval)
   
-  Returns the timestamp of the inclusive lower bound of the 
corresponding tumbling, hopping, or session window.
+  返回相对应的滚动,滑动和会话窗口范围内的下界时间戳。
 
 Review comment:
   ```suggestion
 返回相对应的滚动、滑动和会话窗口范围内的下界时间戳。
   ```



