[GitHub] [flink] docete commented on issue #10727: [FLINK-15420][table-planner-blink] Cast string to timestamp will loos…

2019-12-30 Thread GitBox
docete commented on issue #10727: [FLINK-15420][table-planner-blink] Cast 
string to timestamp will loos…
URL: https://github.com/apache/flink/pull/10727#issuecomment-569884462
 
 
   Most JDK classes and third-party libraries produce `1999-09-10` rather than 
`1999-9-10` for the timestamp string, e.g. java.sql.Timestamp.toString, 
java.text.SimpleDateFormat.format, etc. I think `1999-9-10` is rarely seen in 
practice.
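
   A minimal, self-contained check with plain JDK classes (no Flink involved), 
showing the zero-padded output:
   ```java
   import java.sql.Timestamp;

   public class TimestampFormatCheck {
       public static void main(String[] args) {
           Timestamp ts = Timestamp.valueOf("1999-09-10 05:20:10");
           // Timestamp.toString always zero-pads the month and day fields:
           System.out.println(ts); // prints "1999-09-10 05:20:10.0"
       }
   }
   ```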


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-15430) Fix Java 64K method compiling limitation for blink planner.

2019-12-30 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17005981#comment-17005981
 ] 

Jingsong Lee commented on FLINK-15430:
--

CC: [~ykt836]

> Fix Java 64K method compiling limitation for blink planner.
> ---
>
> Key: FLINK-15430
> URL: https://issues.apache.org/jira/browse/FLINK-15430
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Reporter: Benchao Li
>Priority: Major
>
> Our Flink SQL deployment was migrated from 1.5 to 1.9, and from the legacy 
> planner to the blink planner. We found that some large SQL queries hit a code 
> generation problem: the generated method exceeds Java's 64K method size limit.
> After searching the issues, we found 
> https://issues.apache.org/jira/browse/FLINK-8274, which fixes the problem to 
> some extent. But the blink planner has not been fixed yet.
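
A minimal sketch of the usual workaround for this JVM constraint (illustrative 
shape only, with hypothetical names; it is not the actual planner code 
generator): split one oversized generated method into smaller delegating 
methods, each below the 64K bytecode limit.
{code:java}
// Illustrative shape of split code generation (hypothetical names): the
// generator emits many small methods plus one driver that chains them,
// keeping every method body under the JVM's 64K bytecode limit.
public class GeneratedProjection {

    public Object[] process(Object[] row) {
        Object[] out = new Object[2];
        evalChunk0(row, out); // generated expressions 0..N-1
        evalChunk1(row, out); // generated expressions N..2N-1
        return out;
    }

    private void evalChunk0(Object[] row, Object[] out) {
        out[0] = ((Integer) row[0]) + 1;
    }

    private void evalChunk1(Object[] row, Object[] out) {
        out[1] = String.valueOf(row[1]);
    }
}
{code}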



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (FLINK-15428) Avro Confluent Schema Registry nightly end-to-end test fails on travis

2019-12-30 Thread Jiangjie Qin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin resolved FLINK-15428.
--
Resolution: Fixed

Merged.

Master: 0c0dc79548fb4414e8515517a03158a416808705

release-1.10: ed56d66c0597064bd77d1d2183cb221ff01c2da9

> Avro Confluent Schema Registry nightly end-to-end test fails on travis
> --
>
> Key: FLINK-15428
> URL: https://issues.apache.org/jira/browse/FLINK-15428
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.10.0
>Reporter: Yu Li
>Assignee: Yangze Guo
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Avro Confluent Schema Registry nightly end-to-end test fails with the error below:
> {code}
> Could not start confluent schema registry
> /home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/kafka-common.sh:
>  line 78: ./bin/kafka-server-stop: No such file or directory
> No zookeeper server to stop
> Tried to kill 1549 but never saw it die
> [FAIL] Test script contains errors.
> {code}
> https://api.travis-ci.org/v3/job/629699437/log.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #10731: [FLINK-15443][jdbc] Fix mismatch between java float and jdbc float

2019-12-30 Thread GitBox
flinkbot commented on issue #10731: [FLINK-15443][jdbc] Fix mismatch between 
java float and jdbc float
URL: https://github.com/apache/flink/pull/10731#issuecomment-569883207
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 6b0c6b1481b1b022f91a6318d77f32d0632eb1b3 (Tue Dec 31 
07:49:31 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] becketqin commented on issue #10720: [FLINK-15428][e2e] Fix the error command for stopping kafka cluster and exclude kafka 1.10 related test under JDK profile

2019-12-30 Thread GitBox
becketqin commented on issue #10720: [FLINK-15428][e2e] Fix the error command 
for stopping kafka cluster and exclude kafka 1.10 related test under JDK profile
URL: https://github.com/apache/flink/pull/10720#issuecomment-569883198
 
 
   Merged to both master and release-1.10.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] becketqin closed pull request #10720: [FLINK-15428][e2e] Fix the error command for stopping kafka cluster and exclude kafka 1.10 related test under JDK profile

2019-12-30 Thread GitBox
becketqin closed pull request #10720: [FLINK-15428][e2e] Fix the error command 
for stopping kafka cluster and exclude kafka 1.10 related test under JDK profile
URL: https://github.com/apache/flink/pull/10720
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10730: [FLINK-14802][orc][hive] Multi vectorized read version support for hive orc read

2019-12-30 Thread GitBox
flinkbot edited a comment on issue #10730: [FLINK-14802][orc][hive] Multi 
vectorized read version support for hive orc read
URL: https://github.com/apache/flink/pull/10730#issuecomment-569879082
 
 
   
   ## CI report:
   
   * 753a9d8bd5705954a67133f2780617ac936a8737 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142725335) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4004)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10729: [hotfix][runtime] Cleanup some checkpoint related codes

2019-12-30 Thread GitBox
flinkbot edited a comment on issue #10729: [hotfix][runtime] Cleanup some 
checkpoint related codes
URL: https://github.com/apache/flink/pull/10729#issuecomment-569879066
 
 
   
   ## CI report:
   
   * e0673933a498a537f2144e268eaa44d5c98c7f19 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142725323) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4003)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10726: [FLINK-15427][Statebackend][test] Check TTL test in test_stream_statettl.sh and skip the exception check

2019-12-30 Thread GitBox
flinkbot edited a comment on issue #10726: [FLINK-15427][Statebackend][test] 
Check TTL test in test_stream_statettl.sh and skip the exception check
URL: https://github.com/apache/flink/pull/10726#issuecomment-569852183
 
 
   
   ## CI report:
   
   * 461a27735c3956818ea691074ee7a80bc8c5351b Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142713534) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3995)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-15443) Use JDBC connector write FLOAT value occur ClassCastException

2019-12-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15443:
---
Labels: pull-request-available  (was: )

> Use JDBC connector write FLOAT value occur ClassCastException
> -
>
> Key: FLINK-15443
> URL: https://issues.apache.org/jira/browse/FLINK-15443
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.9.1
> Environment: flink version is 1.9.1
>Reporter: Xianxun Ye
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.2, 1.10.0
>
>
> I defined a FLOAT field in a MySQL table; when I use the JDBC connector to 
> write a float value into the DB, a ClassCastException occurs.
> {code:java}
> // code placeholder
> Caused by: java.lang.ClassCastException: java.lang.Float cannot be cast to 
> java.lang.Double, field index: 6, field value: 0.1.
>   at org.apache.flink.api.java.io.jdbc.JDBCUtils.setField(JDBCUtils.java:106)
>   at org.apache.flink.api.java.io.jdbc.JDBCUtils.setRecordToStatement(JDBCUtils.java:63)
>   at org.apache.flink.api.java.io.jdbc.writer.AppendOnlyWriter.addRecord(AppendOnlyWriter.java:56)
>   at org.apache.flink.api.java.io.jdbc.JDBCUpsertOutputFormat.writeRecord(JDBCUpsertOutputFormat.java:144)
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi opened a new pull request #10731: [FLINK-15443][jdbc] Fix mismatch between java float and jdbc float

2019-12-30 Thread GitBox
JingsongLi opened a new pull request #10731: [FLINK-15443][jdbc] Fix mismatch 
between java float and jdbc float
URL: https://github.com/apache/flink/pull/10731
 
 
   
   ## What is the purpose of the change
   
   Bug when using the JDBC sink with the FLOAT type.
   
   ## Brief change log
   
   In Flink:
   - SQL regards FLOAT as java float.
   - But in JDBC, REAL is java float, while FLOAT/DOUBLE are java double.
   
   We handle the data correctly in JDBCUtils, but JDBCTypeUtil mismatches by 
mapping java float to JDBC FLOAT.
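
   A minimal sketch of the mapping in question (hypothetical method and class 
names, not the actual `JDBCTypeUtil` code): `java.lang.Float` should map to 
SQL `REAL`, because JDBC `FLOAT`, like `DOUBLE`, is a double-precision type.
   ```java
   import java.sql.Types;

   final class FloatMappingSketch {
       // Map a Java wrapper type to the java.sql.Types constant to bind with.
       static int sqlTypeFor(Class<?> javaType) {
           if (javaType == Float.class) {
               return Types.REAL;   // single precision -> REAL (the fix)
           } else if (javaType == Double.class) {
               return Types.DOUBLE; // JDBC FLOAT and DOUBLE are double precision
           }
           throw new IllegalArgumentException("unsupported type: " + javaType);
       }
   }
   ```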
   
   
   ## Verifying this change
   
   `JDBCUpsertTableSinkITCase.testReal`
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10728: [FLINK-15437][yarn] Apply dynamic properties early on client side.

2019-12-30 Thread GitBox
flinkbot edited a comment on issue #10728: [FLINK-15437][yarn] Apply dynamic 
properties early on client side.
URL: https://github.com/apache/flink/pull/10728#issuecomment-569870948
 
 
   
   ## CI report:
   
   * 4bae53183e1268380abdb6d6ad1f9c8b48b32d83 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142720750) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4002)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10720: [FLINK-15428][e2e] Fix the error command for stopping kafka cluster and exclude kafka 1.10 related test under JDK profile

2019-12-30 Thread GitBox
flinkbot edited a comment on issue #10720: [FLINK-15428][e2e] Fix the error 
command for stopping kafka cluster and exclude kafka 1.10 related test under 
JDK profile
URL: https://github.com/apache/flink/pull/10720#issuecomment-569611106
 
 
   
   ## CI report:
   
   * 4c2fa97968a87d2db180d51eac1d6169cb851137 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142622684) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3978)
 
   * a526d5c351382445b26ac28e5fad85cf12697cbc Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142624580) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3979)
 
   * bbb3dc2587b5aae17bc50588d19ca74d7def3e1f Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142641831) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3984)
 
   * a91b3928e4491925348045a37b90b6497141003d Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/142719517) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3999)
 
   * 4fbb807550fa9836246bee589d1ef371f554028c Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142720739) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4001)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10727: [FLINK-15420][table-planner-blink] Cast string to timestamp will loos…

2019-12-30 Thread GitBox
flinkbot edited a comment on issue #10727: [FLINK-15420][table-planner-blink] 
Cast string to timestamp will loos…
URL: https://github.com/apache/flink/pull/10727#issuecomment-569861765
 
 
   
   ## CI report:
   
   * 9bbb2830a6e6e185ae6a9d4a8d3e2b99c7648d9c Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142717413) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3998)
 
   * 2068b01ca802c8c3a9b267aa951a14e2c55692a4 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #10693: [FLINK-15334][table sql / api] Fix physical schema mapping in TableFormatFactoryBase to support define orderless computed column

2019-12-30 Thread GitBox
wuchong commented on a change in pull request #10693: [FLINK-15334][table sql / 
api] Fix physical schema mapping in TableFormatFactoryBase to support define 
orderless computed column
URL: https://github.com/apache/flink/pull/10693#discussion_r362164397
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/TableFormatFactoryBase.java
 ##
 @@ -157,36 +156,33 @@ public static TableSchema deriveSchema(Map<String, String> properties) {
final TableSchema.Builder builder = TableSchema.builder();
 
final TableSchema tableSchema = 
descriptorProperties.getTableSchema(SCHEMA);
-   final TableSchema physicalSchema = 
TableSchemaUtils.getPhysicalSchema(tableSchema);
-
-   final Map<Integer, Integer> physicalIndices2Indices = 
Arrays.stream(physicalSchema.getFieldNames())
-   .collect(Collectors.toMap(
-   
Arrays.asList(physicalSchema.getFieldNames())::indexOf,
-   
Arrays.asList(tableSchema.getFieldNames())::indexOf));
-
-   for (int i = 0; i < physicalSchema.getFieldCount(); i++) {
-   final String fieldName = 
physicalSchema.getFieldNames()[i];
-   final DataType fieldType = 
physicalSchema.getFieldDataTypes()[i];
-
+   for (int i = 0; i < tableSchema.getFieldCount(); i++) {
+   final String fieldName = tableSchema.getFieldNames()[i];
+   final DataType fieldType = 
tableSchema.getFieldDataTypes()[i];
+
+   final Optional<TableColumn> tableColumn = 
tableSchema.getTableColumn(fieldName);
+   final boolean isGeneratedColumn = 
tableColumn.isPresent() && tableColumn.get().isGenerated();
 
 Review comment:
   Please update here as above.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10700: [FLINK-15383][table sql / planner & legacy planner] Using sink schema field name instead of query schema field name for UpsertStreamTableSin

2019-12-30 Thread GitBox
flinkbot edited a comment on issue #10700: [FLINK-15383][table sql / planner & 
legacy planner] Using sink schema field name instead of query schema field name 
for UpsertStreamTableSink.
URL: https://github.com/apache/flink/pull/10700#issuecomment-569055468
 
 
   
   ## CI report:
   
   * 4e596c6982db0ff2416530e688d7f6d45329f465 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142375475) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3938)
 
   * bbb633a82d437b51b1551d550dc05d3bafcecab0 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142614951) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3974)
 
   * 9e65c9bd9f94a9741a8967f5ed7f8b926655dd26 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142644145) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3986)
 
   * 721deb991f908eb91ba6dc0622584f1ea76d45dc Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/142646604) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3988)
 
   * f54bb5b85d434c034686114b3b50655c062b340a Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142648987) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3989)
 
   * 8cf3dd6ab802dcf2717c7495b41fa52861e39dae UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10704: [FLINK-15411][table-planner-blink] Fix prune partition on DATE/TIME/TIMESTAMP columns

2019-12-30 Thread GitBox
flinkbot edited a comment on issue #10704: [FLINK-15411][table-planner-blink] 
Fix prune partition on DATE/TIME/TIMESTAMP columns
URL: https://github.com/apache/flink/pull/10704#issuecomment-569239989
 
 
   
   ## CI report:
   
   * de210eacfb754ef4d169bbfb50877d3e03e8c792 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142444279) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3954)
 
   * 749a0addc128db847dcc13b4494148474b50bee2 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142609801) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3970)
 
   * 30cee33e9aa2601b3266871a7f30dda41f8dc0a4 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142720734) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4000)
 
   * 1510d279b5317a3d968dd4245b90975c1269d30f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10693: [FLINK-15334][table sql / api] Fix physical schema mapping in TableFormatFactoryBase to support define orderless computed column

2019-12-30 Thread GitBox
flinkbot edited a comment on issue #10693: [FLINK-15334][table sql / api] Fix 
physical schema mapping in TableFormatFactoryBase to support define orderless 
computed column
URL: https://github.com/apache/flink/pull/10693#issuecomment-568967236
 
 
   
   ## CI report:
   
   * a6b006a4d5fd8d8398d65f170d89e3fcda2f2105 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142348347) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3923)
 
   * 57edd55c4b44f33ebdda3082ed36d1fd62c2d2ae Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142717407) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3997)
 
   * a54c016397d009edebed862f421d56c1b3a5d8d1 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Assigned] (FLINK-15443) Use JDBC connector write FLOAT value occur ClassCastException

2019-12-30 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-15443:
---

Assignee: Jingsong Lee

> Use JDBC connector write FLOAT value occur ClassCastException
> -
>
> Key: FLINK-15443
> URL: https://issues.apache.org/jira/browse/FLINK-15443
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.9.1
> Environment: flink version is 1.9.1
>Reporter: Xianxun Ye
>Assignee: Jingsong Lee
>Priority: Major
> Fix For: 1.9.2, 1.10.0
>
>
> I defined a FLOAT field in a MySQL table; when I use the JDBC connector to 
> write a float value into the DB, a ClassCastException occurs.
> {code:java}
> // code placeholder
> Caused by: java.lang.ClassCastException: java.lang.Float cannot be cast to 
> java.lang.Double, field index: 6, field value: 0.1.
>   at org.apache.flink.api.java.io.jdbc.JDBCUtils.setField(JDBCUtils.java:106)
>   at org.apache.flink.api.java.io.jdbc.JDBCUtils.setRecordToStatement(JDBCUtils.java:63)
>   at org.apache.flink.api.java.io.jdbc.writer.AppendOnlyWriter.addRecord(AppendOnlyWriter.java:56)
>   at org.apache.flink.api.java.io.jdbc.JDBCUpsertOutputFormat.writeRecord(JDBCUpsertOutputFormat.java:144)
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15443) Use JDBC connector write FLOAT value occur ClassCastException

2019-12-30 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17005972#comment-17005972
 ] 

Jingsong Lee commented on FLINK-15443:
--

[~jark] I'd like to fix it.

> Use JDBC connector write FLOAT value occur ClassCastException
> -
>
> Key: FLINK-15443
> URL: https://issues.apache.org/jira/browse/FLINK-15443
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.9.1
> Environment: flink version is 1.9.1
>Reporter: Xianxun Ye
>Priority: Major
> Fix For: 1.9.2, 1.10.0
>
>
> I defined a FLOAT field in a MySQL table; when I use the JDBC connector to 
> write a float value into the DB, a ClassCastException occurs.
> {code:java}
> // code placeholder
> Caused by: java.lang.ClassCastException: java.lang.Float cannot be cast to 
> java.lang.Double, field index: 6, field value: 0.1.
>   at org.apache.flink.api.java.io.jdbc.JDBCUtils.setField(JDBCUtils.java:106)
>   at org.apache.flink.api.java.io.jdbc.JDBCUtils.setRecordToStatement(JDBCUtils.java:63)
>   at org.apache.flink.api.java.io.jdbc.writer.AppendOnlyWriter.addRecord(AppendOnlyWriter.java:56)
>   at org.apache.flink.api.java.io.jdbc.JDBCUpsertOutputFormat.writeRecord(JDBCUpsertOutputFormat.java:144)
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-15443) Use JDBC connector write FLOAT value occur ClassCastException

2019-12-30 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17005969#comment-17005969
 ] 

Jingsong Lee edited comment on FLINK-15443 at 12/31/19 7:40 AM:


Thanks [~yesorno] for the report. Yes, there is a bug when using float in the 
JDBC sink.

NOTE:
 * In Flink SQL, we regard FLOAT as java float.
 * But in JDBC, REAL is java float, while FLOAT/DOUBLE are java double.

We handle the data correctly in JDBCUtils, but there is a mismatch in 
JDBCTypeUtil, which maps java float to JDBC FLOAT.


was (Author: lzljs3620320):
Thanks [~yesorno] for the report. Yes, there is a bug when using float in the 
JDBC sink.

NOTE:
 * In Flink SQL, we regard FLOAT as java float.
 * But in JDBC, REAL is java float, while FLOAT/DOUBLE are java double.

We handle the data correctly in JDBCUtils, but there is a mismatch in 
JDBCTypeUtil, which maps java float to JDBC FLOAT.

 

> Use JDBC connector write FLOAT value occur ClassCastException
> -
>
> Key: FLINK-15443
> URL: https://issues.apache.org/jira/browse/FLINK-15443
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.9.1
> Environment: flink version is 1.9.1
>Reporter: Xianxun Ye
>Priority: Major
> Fix For: 1.9.2, 1.10.0
>
>
> I defined a FLOAT field in a MySQL table; when I use the JDBC connector to 
> write a float value into the DB, a ClassCastException occurs.
> {code:java}
> // code placeholder
> Caused by: java.lang.ClassCastException: java.lang.Float cannot be cast to 
> java.lang.Double, field index: 6, field value: 0.1.
>   at org.apache.flink.api.java.io.jdbc.JDBCUtils.setField(JDBCUtils.java:106)
>   at org.apache.flink.api.java.io.jdbc.JDBCUtils.setRecordToStatement(JDBCUtils.java:63)
>   at org.apache.flink.api.java.io.jdbc.writer.AppendOnlyWriter.addRecord(AppendOnlyWriter.java:56)
>   at org.apache.flink.api.java.io.jdbc.JDBCUpsertOutputFormat.writeRecord(JDBCUpsertOutputFormat.java:144)
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15443) Use JDBC connector write FLOAT value occur ClassCastException

2019-12-30 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17005969#comment-17005969
 ] 

Jingsong Lee commented on FLINK-15443:
--

Thanks [~yesorno] for the report. Yes, there is a bug when using float in the 
JDBC sink.

NOTE:
 * In Flink SQL, we regard FLOAT as java float.
 * But in JDBC, REAL is java float, while FLOAT/DOUBLE are java double.

We handle the data correctly in JDBCUtils, but there is a mismatch in 
JDBCTypeUtil, which maps java float to JDBC FLOAT.
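
A hypothetical simplification of the failing path (sketch only; the method and 
class names are illustrative, not the real JDBCUtils code):
{code:java}
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Types;

// FLOAT is routed to the double-precision branch, so a java.lang.Float
// value hits a (Double) cast and throws ClassCastException; REAL is the
// branch a java Float should take.
final class SetFieldSketch {
    static void setField(PreparedStatement ps, int sqlType, int index, Object value)
            throws SQLException {
        switch (sqlType) {
            case Types.REAL:      // single precision
                ps.setFloat(index, (Float) value);
                break;
            case Types.FLOAT:     // JDBC FLOAT and DOUBLE are double precision
            case Types.DOUBLE:
                ps.setDouble(index, (Double) value); // CCE if value is a Float
                break;
            default:
                ps.setObject(index, value);
        }
    }
}
{code}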

 

> Use JDBC connector write FLOAT value occur ClassCastException
> -
>
> Key: FLINK-15443
> URL: https://issues.apache.org/jira/browse/FLINK-15443
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.9.1
> Environment: flink version is 1.9.1
>Reporter: Xianxun Ye
>Priority: Major
> Fix For: 1.9.2, 1.10.0
>
>
> I defined a FLOAT field in a MySQL table; when I use the JDBC connector to 
> write a float value into the DB, a ClassCastException occurs.
> {code:java}
> // code placeholder
> Caused by: java.lang.ClassCastException: java.lang.Float cannot be cast to 
> java.lang.Double, field index: 6, field value: 0.1.
>   at org.apache.flink.api.java.io.jdbc.JDBCUtils.setField(JDBCUtils.java:106)
>   at org.apache.flink.api.java.io.jdbc.JDBCUtils.setRecordToStatement(JDBCUtils.java:63)
>   at org.apache.flink.api.java.io.jdbc.writer.AppendOnlyWriter.addRecord(AppendOnlyWriter.java:56)
>   at org.apache.flink.api.java.io.jdbc.JDBCUpsertOutputFormat.writeRecord(JDBCUpsertOutputFormat.java:144)
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] becketqin commented on issue #10720: [FLINK-15428][e2e] Fix the error command for stopping kafka cluster and exclude kafka 1.10 related test under JDK profile

2019-12-30 Thread GitBox
becketqin commented on issue #10720: [FLINK-15428][e2e] Fix the error command 
for stopping kafka cluster and exclude kafka 1.10 related test under JDK profile
URL: https://github.com/apache/flink/pull/10720#issuecomment-569881700
 
 
   It looks like the CI checks are somehow stuck in the PENDING state even 
though the tests have passed on both Travis and Azure.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] becketqin commented on issue #10720: [FLINK-15428][e2e] Fix the error command for stopping kafka cluster and exclude kafka 1.10 related test under JDK profile

2019-12-30 Thread GitBox
becketqin commented on issue #10720: [FLINK-15428][e2e] Fix the error command 
for stopping kafka cluster and exclude kafka 1.10 related test under JDK profile
URL: https://github.com/apache/flink/pull/10720#issuecomment-569881531
 
 
   Thanks for the patch. LGTM.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] becketqin commented on issue #10720: [FLINK-15428][e2e] Fix the error command for stopping kafka cluster and exclude kafka 1.10 related test under JDK profile

2019-12-30 Thread GitBox
becketqin commented on issue #10720: [FLINK-15428][e2e] Fix the error command 
for stopping kafka cluster and exclude kafka 1.10 related test under JDK profile
URL: https://github.com/apache/flink/pull/10720#issuecomment-569881484
 
 
   @flinkbot approve all


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] beyond1920 commented on a change in pull request #10694: [FLINK-15381] [table-planner-blink] correct collation derive logic on RelSubset in RelMdCollation

2019-12-30 Thread GitBox
beyond1920 commented on a change in pull request #10694: [FLINK-15381] 
[table-planner-blink] correct collation derive logic on RelSubset in 
RelMdCollation
URL: https://github.com/apache/flink/pull/10694#discussion_r362162286
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/metadata/FlinkRelMdCollation.java
 ##
 @@ -0,0 +1,557 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.metadata;
+
+import org.apache.calcite.adapter.enumerable.EnumerableCorrelate;
+import org.apache.calcite.adapter.enumerable.EnumerableHashJoin;
+import org.apache.calcite.adapter.enumerable.EnumerableMergeJoin;
+import org.apache.calcite.adapter.enumerable.EnumerableNestedLoopJoin;
+import org.apache.calcite.linq4j.Ord;
+import org.apache.calcite.plan.RelOptTable;
+import org.apache.calcite.plan.hep.HepRelVertex;
+import org.apache.calcite.plan.volcano.RelSubset;
+import org.apache.calcite.rel.RelCollation;
+import org.apache.calcite.rel.RelCollations;
+import org.apache.calcite.rel.RelFieldCollation;
+import org.apache.calcite.rel.RelNode;
+import org.apache.calcite.rel.core.Calc;
+import org.apache.calcite.rel.core.Filter;
+import org.apache.calcite.rel.core.Join;
+import org.apache.calcite.rel.core.JoinRelType;
+import org.apache.calcite.rel.core.Match;
+import org.apache.calcite.rel.core.Project;
+import org.apache.calcite.rel.core.Sort;
+import org.apache.calcite.rel.core.SortExchange;
+import org.apache.calcite.rel.core.TableModify;
+import org.apache.calcite.rel.core.TableScan;
+import org.apache.calcite.rel.core.Values;
+import org.apache.calcite.rel.core.Window;
+import org.apache.calcite.rel.metadata.BuiltInMetadata;
+import org.apache.calcite.rel.metadata.MetadataDef;
+import org.apache.calcite.rel.metadata.MetadataHandler;
+import org.apache.calcite.rel.metadata.ReflectiveRelMetadataProvider;
+import org.apache.calcite.rel.metadata.RelMetadataProvider;
+import org.apache.calcite.rel.metadata.RelMetadataQuery;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rex.RexCall;
+import org.apache.calcite.rex.RexCallBinding;
+import org.apache.calcite.rex.RexInputRef;
+import org.apache.calcite.rex.RexLiteral;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.rex.RexProgram;
+import org.apache.calcite.sql.validate.SqlMonotonicity;
+import org.apache.calcite.util.BuiltInMethod;
+import org.apache.calcite.util.ImmutableBitSet;
+import org.apache.calcite.util.ImmutableIntList;
+import org.apache.calcite.util.Pair;
+import org.apache.calcite.util.Util;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.SortedSet;
+import java.util.TreeSet;
+import java.util.stream.Collectors;
+
+/**
+ * FlinkRelMdCollation supplies a default implementation of
+ * {@link org.apache.calcite.rel.metadata.RelMetadataQuery#collations}
+ * for the standard logical algebra.
+ */
+public class FlinkRelMdCollation implements MetadataHandler<BuiltInMetadata.Collation> {
+   public static final RelMetadataProvider SOURCE =
+   
ReflectiveRelMetadataProvider.reflectiveSource(BuiltInMethod.COLLATIONS.method, 
new FlinkRelMdCollation());
+
+   //~ Constructors 
---
+
+   private FlinkRelMdCollation() {
+   }
+
+   //~ Methods 

+
+   public MetadataDef<BuiltInMetadata.Collation> getDef() {
+   return BuiltInMetadata.Collation.DEF;
+   }
+
+   public com.google.common.collect.ImmutableList<RelCollation> collations(TableScan scan, RelMetadataQuery mq) {
+   return 
com.google.common.collect.ImmutableList.copyOf(table(scan.getTable()));
+   }
+
+   public com.google.common.collect.ImmutableList<RelCollation> collations(Values values, RelMetadataQuery mq) {
+   return 
com.google.common.collect.ImmutableList.copyOf(values(mq, values.getRowType(), 
values.getTuples()));
+   }
+
+   public com.google.common.collect.ImmutableList<RelCollation> collations(Project project,
+   RelMetadataQuery mq) 

[GitHub] [flink] beyond1920 commented on a change in pull request #10694: [FLINK-15381] [table-planner-blink] correct collation derive logic on RelSubset in RelMdCollation

2019-12-30 Thread GitBox
beyond1920 commented on a change in pull request #10694: [FLINK-15381] 
[table-planner-blink] correct collation derive logic on RelSubset in 
RelMdCollation
URL: https://github.com/apache/flink/pull/10694#discussion_r362162371
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/metadata/FlinkRelMdCollation.java
 ##
 @@ -0,0 +1,557 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.metadata;
+
+import org.apache.calcite.adapter.enumerable.EnumerableCorrelate;
+import org.apache.calcite.adapter.enumerable.EnumerableHashJoin;
+import org.apache.calcite.adapter.enumerable.EnumerableMergeJoin;
+import org.apache.calcite.adapter.enumerable.EnumerableNestedLoopJoin;
+import org.apache.calcite.linq4j.Ord;
+import org.apache.calcite.plan.RelOptTable;
+import org.apache.calcite.plan.hep.HepRelVertex;
+import org.apache.calcite.plan.volcano.RelSubset;
+import org.apache.calcite.rel.RelCollation;
+import org.apache.calcite.rel.RelCollations;
+import org.apache.calcite.rel.RelFieldCollation;
+import org.apache.calcite.rel.RelNode;
+import org.apache.calcite.rel.core.Calc;
+import org.apache.calcite.rel.core.Filter;
+import org.apache.calcite.rel.core.Join;
+import org.apache.calcite.rel.core.JoinRelType;
+import org.apache.calcite.rel.core.Match;
+import org.apache.calcite.rel.core.Project;
+import org.apache.calcite.rel.core.Sort;
+import org.apache.calcite.rel.core.SortExchange;
+import org.apache.calcite.rel.core.TableModify;
+import org.apache.calcite.rel.core.TableScan;
+import org.apache.calcite.rel.core.Values;
+import org.apache.calcite.rel.core.Window;
+import org.apache.calcite.rel.metadata.BuiltInMetadata;
+import org.apache.calcite.rel.metadata.MetadataDef;
+import org.apache.calcite.rel.metadata.MetadataHandler;
+import org.apache.calcite.rel.metadata.ReflectiveRelMetadataProvider;
+import org.apache.calcite.rel.metadata.RelMetadataProvider;
+import org.apache.calcite.rel.metadata.RelMetadataQuery;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rex.RexCall;
+import org.apache.calcite.rex.RexCallBinding;
+import org.apache.calcite.rex.RexInputRef;
+import org.apache.calcite.rex.RexLiteral;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.rex.RexProgram;
+import org.apache.calcite.sql.validate.SqlMonotonicity;
+import org.apache.calcite.util.BuiltInMethod;
+import org.apache.calcite.util.ImmutableBitSet;
+import org.apache.calcite.util.ImmutableIntList;
+import org.apache.calcite.util.Pair;
+import org.apache.calcite.util.Util;
+
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.SortedSet;
+import java.util.TreeSet;
+import java.util.stream.Collectors;
+
+/**
+ * FlinkRelMdCollation supplies a default implementation of
+ * {@link org.apache.calcite.rel.metadata.RelMetadataQuery#collations}
+ * for the standard logical algebra.
+ */
+public class FlinkRelMdCollation implements MetadataHandler<BuiltInMetadata.Collation> {
 
 Review comment:
   Maybe we should add the comment in the code. It's really a long class.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-15443) Use JDBC connector write FLOAT value occur ClassCastException

2019-12-30 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-15443:
-
Fix Version/s: 1.10.0
   1.9.2

> Use JDBC connector write FLOAT value occur ClassCastException
> -
>
> Key: FLINK-15443
> URL: https://issues.apache.org/jira/browse/FLINK-15443
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.9.1
> Environment: flink version is 1.9.1
>Reporter: Xianxun Ye
>Priority: Major
> Fix For: 1.9.2, 1.10.0
>
>
> I defined a FLOAT field in a MySQL table; when I use the JDBC connector to 
> write a float value into the DB, a ClassCastException occurs.
> {code:java}
> // code placeholder
> Caused by: java.lang.ClassCastException: java.lang.Float cannot be cast to 
> java.lang.Double, field index: 6, field value: 0.1.
>   at org.apache.flink.api.java.io.jdbc.JDBCUtils.setField(JDBCUtils.java:106)
>   at org.apache.flink.api.java.io.jdbc.JDBCUtils.setRecordToStatement(JDBCUtils.java:63)
>   at org.apache.flink.api.java.io.jdbc.writer.AppendOnlyWriter.addRecord(AppendOnlyWriter.java:56)
>   at org.apache.flink.api.java.io.jdbc.JDBCUpsertOutputFormat.writeRecord(JDBCUpsertOutputFormat.java:144)
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15444) Make the component AbstractInvokable in CheckpointBarrierHandler NonNull

2019-12-30 Thread zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhijiang updated FLINK-15444:
-
Fix Version/s: 1.11.0

> Make the component AbstractInvokable in CheckpointBarrierHandler NonNull 
> -
>
> Key: FLINK-15444
> URL: https://issues.apache.org/jira/browse/FLINK-15444
> Project: Flink
>  Issue Type: Task
>  Components: Runtime / Checkpointing
>Reporter: zhijiang
>Assignee: zhijiang
>Priority: Minor
> Fix For: 1.11.0
>
>
> The current component {{AbstractInvokable}} in {{CheckpointBarrierHandler}} 
> is annotated as {{@Nullable}}. In the real code path it is passed via the 
> constructor and is never null; the nullable annotation exists only for unit 
> tests. This misleads readers about the real usage and brings extra trouble, 
> because the field must always be null-checked before use in the related 
> processes.
> We can refactor the related unit tests to implement a dummy 
> {{AbstractInvokable}} for tests and remove the {{@Nullable}} annotation from 
> the related class constructors.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15444) Make the component AbstractInvokable in CheckpointBarrierHandler NonNull

2019-12-30 Thread zhijiang (Jira)
zhijiang created FLINK-15444:


 Summary: Make the component AbstractInvokable in 
CheckpointBarrierHandler NonNull 
 Key: FLINK-15444
 URL: https://issues.apache.org/jira/browse/FLINK-15444
 Project: Flink
  Issue Type: Task
  Components: Runtime / Checkpointing
Reporter: zhijiang
Assignee: zhijiang


The current component {{AbstractInvokable}} in {{CheckpointBarrierHandler}} is 
annotated as {{@Nullable}}. In the real code path it is passed via the 
constructor and is never null; the nullable annotation exists only for unit 
tests. This misleads readers about the real usage and brings extra trouble, 
because the field must always be null-checked before use in the related 
processes.

We can refactor the related unit tests to implement a dummy 
{{AbstractInvokable}} for tests and remove the {{@Nullable}} annotation from 
the related class constructors.
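
A minimal sketch of such a dummy (the class name is a hypothetical stand-in; 
the constructor and {{invoke()}} shapes follow {{AbstractInvokable}}):
{code:java}
import org.apache.flink.runtime.execution.Environment;
import org.apache.flink.runtime.jobgraph.tasks.AbstractInvokable;

// A no-op invokable for tests, so CheckpointBarrierHandler can always be
// constructed with a non-null AbstractInvokable instead of a null reference.
public class NoOpInvokable extends AbstractInvokable {

    public NoOpInvokable(Environment environment) {
        super(environment);
    }

    @Override
    public void invoke() {
        // intentionally empty: tests only need a non-null notification target
    }
}
{code}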



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #10730: [FLINK-14802][orc][hive] Multi vectorized read version support for hive orc read

2019-12-30 Thread GitBox
flinkbot commented on issue #10730: [FLINK-14802][orc][hive] Multi vectorized 
read version support for hive orc read
URL: https://github.com/apache/flink/pull/10730#issuecomment-569879082
 
 
   
   ## CI report:
   
   * 753a9d8bd5705954a67133f2780617ac936a8737 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #10726: [FLINK-15427][Statebackend][test] Check TTL test in test_stream_statettl.sh and skip the exception check

2019-12-30 Thread GitBox
flinkbot edited a comment on issue #10726: [FLINK-15427][Statebackend][test] 
Check TTL test in test_stream_statettl.sh and skip the exception check
URL: https://github.com/apache/flink/pull/10726#issuecomment-569852183
 
 
   
   ## CI report:
   
   * 461a27735c3956818ea691074ee7a80bc8c5351b Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142713534) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3995)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-15443) Use JDBC connector write FLOAT value occur ClassCastException

2019-12-30 Thread yexianxun (Jira)
yexianxun created FLINK-15443:
-

 Summary: Use JDBC connector write FLOAT value occur 
ClassCastException
 Key: FLINK-15443
 URL: https://issues.apache.org/jira/browse/FLINK-15443
 Project: Flink
  Issue Type: Bug
  Components: Connectors / JDBC
Affects Versions: 1.9.1
 Environment: flink version is 1.9.1
Reporter: yexianxun


I defined a FLOAT field in a MySQL table; when I use the JDBC connector to 
write a float value into the DB, a ClassCastException occurs.
{code:java}
// code placeholder
Caused by: java.lang.ClassCastException: java.lang.Float cannot be cast to 
java.lang.Double, field index: 6, field value: 0.1.
  at org.apache.flink.api.java.io.jdbc.JDBCUtils.setField(JDBCUtils.java:106)
  at org.apache.flink.api.java.io.jdbc.JDBCUtils.setRecordToStatement(JDBCUtils.java:63)
  at org.apache.flink.api.java.io.jdbc.writer.AppendOnlyWriter.addRecord(AppendOnlyWriter.java:56)
  at org.apache.flink.api.java.io.jdbc.JDBCUpsertOutputFormat.writeRecord(JDBCUpsertOutputFormat.java:144)
{code}
 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #10729: [hotfix][runtime] Cleanup some checkpoint related codes

2019-12-30 Thread GitBox
flinkbot commented on issue #10729: [hotfix][runtime] Cleanup some checkpoint 
related codes
URL: https://github.com/apache/flink/pull/10729#issuecomment-569879066
 
 
   
   ## CI report:
   
   * e0673933a498a537f2144e268eaa44d5c98c7f19 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on issue #10727: [FLINK-15420][table-planner-blink] Cast string to timestamp will loos…

2019-12-30 Thread GitBox
wuchong commented on issue #10727: [FLINK-15420][table-planner-blink] Cast 
string to timestamp will loos…
URL: https://github.com/apache/flink/pull/10727#issuecomment-569878866
 
 
   But would it be a backward-compatibility problem? Calcite supports casting 
`1999-9-10`.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on a change in pull request #10625: [FLINK-15259][hive] HiveInspector.toInspectors() should convert Flink…

2019-12-30 Thread GitBox
JingsongLi commented on a change in pull request #10625: [FLINK-15259][hive] 
HiveInspector.toInspectors() should convert Flink…
URL: https://github.com/apache/flink/pull/10625#discussion_r362160039
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/GenerateUtils.scala
 ##
 @@ -375,7 +375,7 @@ object GenerateUtils {
  |  $SQL_TIMESTAMP.fromEpochMillis(${ts.getMillisecond}L, 
${ts.getNanoOfMillisecond});
""".stripMargin
 ctx.addReusableMember(fieldTimestamp)
-generateNonNullLiteral(literalType, fieldTerm, literalType)
+generateNonNullLiteral(literalType, fieldTerm, ts)
 
 Review comment:
   Don't mix the planner's single-line modification with the Hive changes, and 
there is no test for it.
   You should add a separate commit explaining why we need this change and what 
the bug is.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] leonardBang commented on a change in pull request #10693: [FLINK-15334][table sql / api] Fix physical schema mapping in TableFormatFactoryBase to support define orderless computed col

2019-12-30 Thread GitBox
leonardBang commented on a change in pull request #10693: [FLINK-15334][table 
sql / api] Fix physical schema mapping in TableFormatFactoryBase to support 
define orderless computed column
URL: https://github.com/apache/flink/pull/10693#discussion_r362160144
 
 

 ##
 File path: 
flink-table/flink-table-api-java-bridge/src/main/java/org/apache/flink/table/descriptors/SchemaValidator.java
 ##
 @@ -212,12 +212,16 @@ else if (proctimeFound) {
@Deprecated
public static TableSchema deriveTableSinkSchema(DescriptorProperties 
properties) {
TableSchema.Builder builder = TableSchema.builder();
-
-   TableSchema schema = 
TableSchemaUtils.getPhysicalSchema(properties.getTableSchema(SCHEMA));
-
-   for (int i = 0; i < schema.getFieldCount(); i++) {
-   TypeInformation<?> t = schema.getFieldTypes()[i];
-   String n = schema.getFieldNames()[i];
+   TableSchema tableSchema = properties.getTableSchema(SCHEMA);
+   for (int i = 0; i < tableSchema.getFieldCount(); i++) {
+   TypeInformation<?> t = tableSchema.getFieldTypes()[i];
+   String n = tableSchema.getFieldNames()[i];
+   Optional<TableColumn> tableColumn = tableSchema.getTableColumn(n);
+   boolean isGeneratedColumn = tableColumn.isPresent() && tableColumn.get().isGenerated();
 
 Review comment:
   That's better, thanks for the kind tips.
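
   For readers skimming the diff above, the intended behavior is to keep a 
field in the derived sink schema only when it is not a generated (computed) 
column. Below is a condensed, self-contained sketch of that filter; the two 
nested interfaces only mimic the relevant slice of the TableSchema/TableColumn 
API shown in the diff and are not the real Flink classes.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

public class GeneratedColumnFilterSketch {

	/** Mimics the slice of TableColumn used in the diff; not the real class. */
	interface ColumnView {
		boolean isGenerated();
	}

	/** Mimics the slice of TableSchema used in the diff; not the real class. */
	interface SchemaView {
		String[] getFieldNames();
		Optional<ColumnView> getTableColumn(String name);
	}

	static List<String> physicalFields(SchemaView schema) {
		List<String> result = new ArrayList<>();
		for (String name : schema.getFieldNames()) {
			boolean generated = schema.getTableColumn(name)
					.map(ColumnView::isGenerated)
					.orElse(false);
			if (!generated) {
				result.add(name); // computed columns never reach the sink schema
			}
		}
		return result;
	}
}
```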




[GitHub] [flink] docete commented on issue #10727: [FLINK-15420][table-planner-blink] Cast string to timestamp will loos…

2019-12-30 Thread GitBox
docete commented on issue #10727: [FLINK-15420][table-planner-blink] Cast 
string to timestamp will loos…
URL: https://github.com/apache/flink/pull/10727#issuecomment-569875792
 
 
   The month field in `cast('1999-9-10 05:20:10' as TIMESTAMP)` violates 
ISO-8601. IMO we should not support it.
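
   To make the point concrete, a small plain-JDK sketch (illustration only, 
not Flink code): the legacy JDBC timestamp parser tolerates a single-digit 
month, while a strict ISO-8601 parser rejects it.

```java
import java.sql.Timestamp;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;

public class Iso8601MonthCheck {
	public static void main(String[] args) {
		// Legacy JDBC parser: leading zeros in month/day may be omitted.
		Timestamp legacy = Timestamp.valueOf("1999-9-10 05:20:10");
		System.out.println(legacy); // 1999-09-10 05:20:10.0

		// Strict ISO-8601 parser: a single-digit month is rejected.
		try {
			LocalDateTime.parse("1999-9-10T05:20:10", DateTimeFormatter.ISO_LOCAL_DATE_TIME);
		} catch (DateTimeParseException e) {
			System.out.println("Rejected: " + e.getMessage());
		}
	}
}
```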




[GitHub] [flink] flinkbot edited a comment on issue #10728: [FLINK-15437][yarn] Apply dynamic properties early on client side.

2019-12-30 Thread GitBox
flinkbot edited a comment on issue #10728: [FLINK-15437][yarn] Apply dynamic 
properties early on client side.
URL: https://github.com/apache/flink/pull/10728#issuecomment-569870948
 
 
   
   ## CI report:
   
   * 4bae53183e1268380abdb6d6ad1f9c8b48b32d83 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142720750) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4002)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10726: [FLINK-15427][Statebackend][test] Check TTL test in test_stream_statettl.sh and skip the exception check

2019-12-30 Thread GitBox
flinkbot edited a comment on issue #10726: [FLINK-15427][Statebackend][test] 
Check TTL test in test_stream_statettl.sh and skip the exception check
URL: https://github.com/apache/flink/pull/10726#issuecomment-569852183
 
 
   
   ## CI report:
   
   * 461a27735c3956818ea691074ee7a80bc8c5351b Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142713534) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3995)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10720: [FLINK-15428][e2e] Fix the error command for stopping kafka cluster and exclude kafka 1.10 related test under JDK profile

2019-12-30 Thread GitBox
flinkbot edited a comment on issue #10720: [FLINK-15428][e2e] Fix the error 
command for stopping kafka cluster and exclude kafka 1.10 related test under 
JDK profile
URL: https://github.com/apache/flink/pull/10720#issuecomment-569611106
 
 
   
   ## CI report:
   
   * 4c2fa97968a87d2db180d51eac1d6169cb851137 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142622684) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3978)
 
   * a526d5c351382445b26ac28e5fad85cf12697cbc Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142624580) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3979)
 
   * bbb3dc2587b5aae17bc50588d19ca74d7def3e1f Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142641831) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3984)
 
   * a91b3928e4491925348045a37b90b6497141003d Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/142719517) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3999)
 
   * 4fbb807550fa9836246bee589d1ef371f554028c Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142720739) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4001)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] qiuxiafei commented on a change in pull request #9373: [FLINK-13596][ml] Add two utils for Table transformations

2019-12-30 Thread GitBox
qiuxiafei commented on a change in pull request #9373: [FLINK-13596][ml] Add 
two utils for Table transformations
URL: https://github.com/apache/flink/pull/9373#discussion_r362156769
 
 

 ##
 File path: 
flink-ml-parent/flink-ml-lib/src/main/java/org/apache/flink/ml/common/utils/DataSetConversionUtil.java
 ##
 @@ -0,0 +1,172 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.flink.ml.common.utils;
+
+import org.apache.flink.api.common.functions.MapFunction;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.java.DataSet;
+import org.apache.flink.api.java.operators.SingleInputUdfOperator;
+import org.apache.flink.api.java.operators.TwoInputUdfOperator;
+import org.apache.flink.api.java.typeutils.RowTypeInfo;
+import org.apache.flink.ml.common.MLEnvironment;
+import org.apache.flink.ml.common.MLEnvironmentFactory;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.TableSchema;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.types.Row;
+
+/**
+ * Provide functions of conversions between DataSet and Table.
+ */
+public class DataSetConversionUtil {
+   /**
+* Convert the given Table to {@link DataSet}<{@link Row}>.
+*
+* @param sessionId the sessionId of {@link MLEnvironmentFactory}
+* @param table the Table to convert.
+* @return the converted DataSet.
+*/
+   public static DataSet<Row> fromTable(Long sessionId, Table table) {
+   return MLEnvironmentFactory
+   .get(sessionId)
+   .getBatchTableEnvironment()
+   .toDataSet(table, Row.class);
+   }
+
+   /**
+* Convert the given DataSet into a Table with specified TableSchema.
+*
+* @param sessionId the sessionId of {@link MLEnvironmentFactory}
+* @param data   the DataSet to convert.
+* @param schema the specified TableSchema.
+* @return the converted Table.
+*/
+   public static Table toTable(Long sessionId, DataSet<Row> data, TableSchema schema) {
+   return toTable(sessionId, data, schema.getFieldNames(), 
schema.getFieldTypes());
+   }
+
+   /**
+* Convert the given DataSet into a Table with specified colNames and 
colTypes.
+*
+* @param sessionId sessionId the sessionId of {@link 
MLEnvironmentFactory}.
+* @param data the DataSet to convert.
+* @param colNames the specified colNames.
+* @param colTypes the specified colTypes. This variable is used only 
when the
+* DataSet is produced by a function and Flink cannot 
determine
+* automatically what the produced type is.
+* @return the converted Table.
+*/
+   public static Table toTable(Long sessionId, DataSet<Row> data, String[] colNames, TypeInformation<?>[] colTypes) {
+   return toTable(MLEnvironmentFactory.get(sessionId), data, 
colNames, colTypes);
+   }
+
+   /**
+* Convert the given DataSet into a Table with specified colNames.
+*
+* @param sessionId sessionId the sessionId of {@link 
MLEnvironmentFactory}.
+* @param data the DataSet to convert.
+* @param colNames the specified colNames.
+* @return the converted Table.
+*/
+   public static Table toTable(Long sessionId, DataSet<Row> data, String[] colNames) {
+   return toTable(MLEnvironmentFactory.get(sessionId), data, 
colNames);
+   }
+
+   /**
+* Convert the given DataSet into a Table with specified colNames and 
colTypes.
+*
+* @param session the MLEnvironment using to convert DataSet to Table.
+* @param data the DataSet to convert.
+* @param colNames the specified colNames.
+* @param colTypes the specified colTypes. This variable is used only 
when the
+* DataSet is produced by a function and Flink cannot 
determine
+* automatically what the produced type is.
+* @return the converted Table.
+*/
+   public 

[GitHub] [flink] flinkbot edited a comment on issue #10704: [FLINK-15411][table-planner-blink] Fix prune partition on DATE/TIME/TIMESTAMP columns

2019-12-30 Thread GitBox
flinkbot edited a comment on issue #10704: [FLINK-15411][table-planner-blink] 
Fix prune partition on DATE/TIME/TIMESTAMP columns
URL: https://github.com/apache/flink/pull/10704#issuecomment-569239989
 
 
   
   ## CI report:
   
   * de210eacfb754ef4d169bbfb50877d3e03e8c792 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142444279) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3954)
 
   * 749a0addc128db847dcc13b4494148474b50bee2 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142609801) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3970)
 
   * 30cee33e9aa2601b3266871a7f30dda41f8dc0a4 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142720734) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4000)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot commented on issue #10730: [FLINK-14802][orc][hive] Multi vectorized read version support for hive orc read

2019-12-30 Thread GitBox
flinkbot commented on issue #10730: [FLINK-14802][orc][hive] Multi vectorized 
read version support for hive orc read
URL: https://github.com/apache/flink/pull/10730#issuecomment-569873920
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 753a9d8bd5705954a67133f2780617ac936a8737 (Tue Dec 31 
06:34:14 UTC 2019)
   
   **Warnings:**
* **3 pom.xml files were touched**: Check for build and licensing issues.
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-14802) Multi vectorized read version support for hive orc read

2019-12-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-14802:
---
Labels: pull-request-available  (was: )

> Multi vectorized read version support for hive orc read
> ---
>
> Key: FLINK-14802
> URL: https://issues.apache.org/jira/browse/FLINK-14802
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive, Connectors / ORC
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> The vectorized read API of Hive 1.x is totally different from Hive 2+.
> So supporting vectorized reads for Hive 1.x requires extra effort.





[GitHub] [flink] JingsongLi opened a new pull request #10730: [FLINK-14802][orc][hive] Multi vectorized read version support for hive orc read

2019-12-30 Thread GitBox
JingsongLi opened a new pull request #10730: [FLINK-14802][orc][hive] Multi 
vectorized read version support for hive orc read
URL: https://github.com/apache/flink/pull/10730
 
 
   
   ## What is the purpose of the change
   
   The vectorized read API of Hive 1.x is totally different from Hive 2+,
   so supporting vectorized reads for Hive 1.x requires extra effort.
   
   ## Brief change log
   
   - Introduce flink-orc-nohive module
   - Introduce orc vectors
   - Introduce NoHiveOrcShim (a sketch of the shim idea follows this list)
   - Introduce nohive orc OrcSplitReaderUtil
   - Integrate hive to flink-orc-nohive
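
   For orientation, here is a hypothetical sketch of the shim idea referenced 
above (all names are illustrative, not the classes introduced by this PR): 
version-specific vectorized-read APIs are hidden behind one interface, and the 
implementation is picked from the Hive version string.

```java
interface OrcShimSketch {
	/** Reads the next vectorized batch; returns false at end of input. */
	boolean nextBatch();
}

final class NoHiveShimSketch implements OrcShimSketch {
	@Override
	public boolean nextBatch() { return false; } // would use ORC's own vector classes
}

final class HiveShimSketch implements OrcShimSketch {
	@Override
	public boolean nextBatch() { return false; } // would use Hive's vector classes
}

final class OrcShimsSketch {
	static OrcShimSketch create(String hiveVersion) {
		// Hive 1.x exposes a different vectorized row-batch API than Hive 2+,
		// so each major version line needs its own implementation.
		return hiveVersion.startsWith("1.")
				? new NoHiveShimSketch()
				: new HiveShimSketch();
	}
}
```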
   
   ## Verifying this change
   
   This change is already covered by existing tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): yes
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): yes
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? yes
 - If yes, how is the feature documented? JavaDocs




[jira] [Resolved] (FLINK-15421) GroupAggsHandler throws java.time.LocalDateTime cannot be cast to java.sql.Timestamp

2019-12-30 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu resolved FLINK-15421.
-
Resolution: Fixed

1.11.0: ba4433540561ef942062c70eb6bce64c02d8a54a
1.10.0: f58a2ecf2c6a60c0c81f9ece13d58797407232fa 
1.9.2: ecd4e42d4980928655ec3ba2f1517d12c29a1d94

> GroupAggsHandler throws java.time.LocalDateTime cannot be cast to 
> java.sql.Timestamp
> 
>
> Key: FLINK-15421
> URL: https://issues.apache.org/jira/browse/FLINK-15421
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.1, 1.10.0
>Reporter: Benchao Li
>Assignee: Zhenghua Gao
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.9.2, 1.10.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> `TimestampType` has two physical representations: `Timestamp` and 
> `LocalDateTime`. When we use the following SQL, the two conflict with each other:
> {code:java}
> SELECT 
>   SUM(cnt) as s, 
>   MAX(ts)
> FROM (
>   SELECT 
> `string`,
> `int`,
> COUNT(*) AS cnt,
> MAX(rowtime) as ts
>   FROM T1
>   GROUP BY `string`, `int`, TUMBLE(rowtime, INTERVAL '10' SECOND)
> )
> GROUP BY `string`
> {code}
> with 'table.exec.emit.early-fire.enabled' = true.
> The exception is below:
> {quote}Caused by: java.lang.ClassCastException: java.time.LocalDateTime 
> cannot be cast to java.sql.Timestamp
>  at GroupAggsHandler$83.getValue(GroupAggsHandler$83.java:529)
>  at 
> org.apache.flink.table.runtime.operators.aggregate.GroupAggFunction.processElement(GroupAggFunction.java:164)
>  at 
> org.apache.flink.table.runtime.operators.aggregate.GroupAggFunction.processElement(GroupAggFunction.java:43)
>  at 
> org.apache.flink.streaming.api.operators.KeyedProcessOperator.processElement(KeyedProcessOperator.java:85)
>  at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:173)
>  at 
> org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.processElement(StreamTaskNetworkInput.java:151)
>  at 
> org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.emitNext(StreamTaskNetworkInput.java:128)
>  at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:69)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:311)
>  at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:187)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:488)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:470)
>  at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:702)
>  at org.apache.flink.runtime.taskmanager.Task.run(Task.java:527)
>  at java.lang.Thread.run(Thread.java:748)
> {quote}
> I also created a UT to quickly reproduce this bug in `WindowAggregateITCase`:
> {code:java}
> @Test
> def testEarlyFireWithTumblingWindow(): Unit = {
>   val stream = failingDataSource(data)
> .assignTimestampsAndWatermarks(
>   new TimestampAndWatermarkWithOffset
> [(Long, Int, Double, Float, BigDecimal, String, String)](10L))
>   val table = stream.toTable(tEnv,
> 'rowtime.rowtime, 'int, 'double, 'float, 'bigdec, 'string, 'name)
>   tEnv.registerTable("T1", table)
>   
> tEnv.getConfig.getConfiguration.setBoolean("table.exec.emit.early-fire.enabled",
>  true)
>   
> tEnv.getConfig.getConfiguration.setString("table.exec.emit.early-fire.delay", 
> "1000 ms")
>   val sql =
> """
>   |SELECT
>   |  SUM(cnt) as s,
>   |  MAX(ts)
>   |FROM
>   |  (SELECT
>   |`string`,
>   |`int`,
>   |COUNT(*) AS cnt,
>   |MAX(rowtime) as ts
>   |  FROM T1
>   |  GROUP BY `string`, `int`, TUMBLE(rowtime, INTERVAL '10' SECOND))
>   |GROUP BY `string`
>   |""".stripMargin
>   tEnv.sqlQuery(sql).toRetractStream[Row].print()
>   env.execute()
> }
> {code}
>  
>  
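
The cast failure quoted above can be reproduced in isolation with plain Java; 
this sketch only mimics the shape of the generated code, not Flink itself:

{code:java}
import java.sql.Timestamp;
import java.time.LocalDateTime;

public class TimestampCastSketch {
	public static void main(String[] args) {
		// A TIMESTAMP value may surface as java.sql.Timestamp or
		// java.time.LocalDateTime depending on the conversion in use; code
		// that hard-casts to one representation fails on the other.
		Object value = LocalDateTime.of(2019, 12, 20, 12, 22);
		Timestamp ts = (Timestamp) value; // ClassCastException, as in the report
		System.out.println(ts);
	}
}
{code}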





[jira] [Updated] (FLINK-14802) Multi vectorized read version support for hive orc read

2019-12-30 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-14802:
-
Fix Version/s: 1.11.0

> Multi vectorized read version support for hive orc read
> ---
>
> Key: FLINK-14802
> URL: https://issues.apache.org/jira/browse/FLINK-14802
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive, Connectors / ORC
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
> Fix For: 1.11.0
>
>
> The vectorized read API of Hive 1.x is totally different from Hive 2+.
> So supporting vectorized reads for Hive 1.x requires extra effort.





[GitHub] [flink] wuchong merged pull request #10723: [FLINK-15421][table-planner-blink] Fix TimestampMaxAggFunction/TimestampMinAggFunction to accept LocalDateTime values

2019-12-30 Thread GitBox
wuchong merged pull request #10723: [FLINK-15421][table-planner-blink] Fix 
TimestampMaxAggFunction/TimestampMinAggFunction to accept LocalDateTime values
URL: https://github.com/apache/flink/pull/10723
 
 
   




[GitHub] [flink] wuchong merged pull request #10722: [FLINK-15421][table-planner-blink] Fix TimestampMaxAggFunction/Timest…

2019-12-30 Thread GitBox
wuchong merged pull request #10722: [FLINK-15421][table-planner-blink] Fix 
TimestampMaxAggFunction/Timest…
URL: https://github.com/apache/flink/pull/10722
 
 
   




[jira] [Assigned] (FLINK-15442) Harden the Avro Confluent Schema Registry nightly end-to-end test

2019-12-30 Thread zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhijiang reassigned FLINK-15442:


Assignee: Yangze Guo

> Harden the Avro Confluent Schema Registry nightly end-to-end test
> -
>
> Key: FLINK-15442
> URL: https://issues.apache.org/jira/browse/FLINK-15442
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Reporter: Yangze Guo
>Assignee: Yangze Guo
>Priority: Critical
> Fix For: 1.10.0
>
>
> We have already hardened the Avro Confluent Schema Registry test in 
> [FLINK-13567|https://issues.apache.org/jira/browse/FLINK-13567]. However, 
> there are still some defects in the current mechanism.
> * The loop variable _i_ is not safe; it could be modified by the *command*.
> * The process of downloading Kafka 0.10 is not included in the scope of 
> retry_times. I think we need to include it to tolerate transient network 
> issues.
> We need to fix those issues to harden the Avro Confluent Schema Registry 
> nightly end-to-end test.
> cc: [~trohrmann] [~chesnay]





[jira] [Commented] (FLINK-15379) JDBC connector return wrong value if defined dataType contains precision

2019-12-30 Thread Zhenghua Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17005947#comment-17005947
 ] 

Zhenghua Gao commented on FLINK-15379:
--

For c.timestamp6, the default conversion class of TimestampType is 
java.time.LocalDateTime, so the returned value for this column is correct.

For c.time6, the blink planner only supports TIME(0) for now, and the default 
conversion class of TimeType is java.time.LocalTime. When the seconds field is 
zero, the printed output omits the seconds.

For the c.gdp column, I have no idea right now. Could you share the code with 
me, so I can reproduce it locally?
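
A quick plain-JDK check of the java.time printing behavior described above 
(this only illustrates the toString() formatting, not the Flink code path):

{code:java}
import java.time.LocalDateTime;
import java.time.LocalTime;

public class JavaTimeToStringCheck {
	public static void main(String[] args) {
		// toString() omits trailing zero-valued fields, which explains the
		// "2019-12-20T17:23" and "12:22" values in the issue output.
		System.out.println(LocalDateTime.of(2019, 12, 20, 17, 23, 0)); // 2019-12-20T17:23
		System.out.println(LocalTime.of(12, 22, 0));                   // 12:22
		// Non-zero sub-second fields are kept:
		System.out.println(LocalTime.of(12, 22, 0, 23_456_000));       // 12:22:00.023456
	}
}
{code}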

> JDBC connector return wrong value if defined dataType contains precision
> 
>
> Key: FLINK-15379
> URL: https://issues.apache.org/jira/browse/FLINK-15379
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Leonard Xu
>Priority: Major
> Fix For: 1.10.0
>
>
> A mysql table like:
>  
> {code:java}
> // CREATE TABLE `currency` (
>   `currency_id` bigint(20) NOT NULL,
>   `currency_name` varchar(200) DEFAULT NULL,
>   `rate` double DEFAULT NULL,
>   `currency_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
>   `country` varchar(100) DEFAULT NULL,
>   `timestamp6` timestamp(6) NULL DEFAULT NULL,
>   `time6` time(6) DEFAULT NULL,
>   `gdp` decimal(10,4) DEFAULT NULL,
>   PRIMARY KEY (`currency_id`)
> ) ENGINE=InnoDB DEFAULT CHARSET=utf8
> +-+---+--+-+-++-+--+
> | currency_id | currency_name | rate | currency_time   | country | 
> timestamp6 | time6   | gdp  |
> +-+---+--+-+-++-+--+
> |   1 | US Dollar | 1020 | 2019-12-20 17:23:00 | America | 
> 2019-12-20 12:22:00.023456 | 12:22:00.023456 | 100.0112 |
> |   2 | Euro  |  114 | 2019-12-20 12:22:00 | Germany | 
> 2019-12-20 12:22:00.023456 | 12:22:00.023456 | 100.0112 |
> |   3 | RMB   |   16 | 2019-12-20 12:22:00 | China   | 
> 2019-12-20 12:22:00.023456 | 12:22:00.023456 | 100.0112 |
> |   4 | Yen   |1 | 2019-12-20 12:22:00 | Japan   | 
> 2019-12-20 12:22:00.123456 | 12:22:00.123456 | 100.4112 |
> +-+---+--+-+-++-+--+{code}
>  
> If a user defines a JDBC table as a dimension table like:
>  
> {code:java}
> // 
> public static final String mysqlCurrencyDDL = "CREATE TABLE currency (\n" +
> "  currency_id BIGINT,\n" +
> "  currency_name STRING,\n" +
> "  rate DOUBLE,\n" +
> "  currency_time TIMESTAMP(3),\n" +
> "  country STRING,\n" +
> "  timestamp6 TIMESTAMP(6),\n" +
> "  time6 TIME(6),\n" +
> "  gdp DECIMAL(10, 4)\n" +
> ") WITH (\n" +
> "   'connector.type' = 'jdbc',\n" +
> "   'connector.url' = 'jdbc:mysql://localhost:3306/test',\n" +
> "   'connector.username' = 'root'," +
> "   'connector.table' = 'currency',\n" +
> "   'connector.driver' = 'com.mysql.jdbc.Driver',\n" +
> "   'connector.lookup.cache.max-rows' = '500', \n" +
> "   'connector.lookup.cache.ttl' = '10s',\n" +
> "   'connector.lookup.max-retries' = '3'" +
> ")";
> {code}
>  
> The user will get wrong values in the columns `timestamp6`, `time6`, `gdp`:
> {code:java}
> // c.currency_id, c.currency_name, c.rate, c.currency_time, c.country, 
> c.timestamp6, c.time6, c.gdp 
> 1,US 
> Dollar,1020.0,2019-12-20T17:23,America,2019-12-20T12:22:00.023456,12:22,-0.0001
> 2,Euro,114.0,2019-12-20T12:22,Germany,2019-12-20T12:22:00.023456,12:22,-0.0001
> 4,Yen,1.0,2019-12-20T12:22,Japan,2019-12-20T12:22:00.123456,12:22,-0.0001{code}
>  





[jira] [Reopened] (FLINK-14386) Support computed column for create table statement

2019-12-30 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reopened FLINK-14386:
--

> Support computed column for create table statement
> --
>
> Key: FLINK-14386
> URL: https://issues.apache.org/jira/browse/FLINK-14386
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.10.0
>Reporter: Danny Chen
>Assignee: Danny Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Support syntax like:
> {code:sql}
> create table t(
>   a int,
>   b as a + 1,
>   c as my_udf(a)
> ) with (
>   ...
> )
> {code}
> The columns b and c are both computed (virtual) columns of table t.
> More details: 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-70%3A+Flink+SQL+Computed+Column+Design]





[jira] [Closed] (FLINK-14386) Support computed column for create table statement

2019-12-30 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-14386.

Resolution: Implemented

> Support computed column for create table statement
> --
>
> Key: FLINK-14386
> URL: https://issues.apache.org/jira/browse/FLINK-14386
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.10.0
>Reporter: Danny Chen
>Assignee: Danny Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Support syntax like:
> {code:sql}
> create table t(
>   a int,
>   b as a + 1,
>   c as my_udf(a)
> ) with (
>   ...
> )
> {code}
> The columns b and c are both computed (virtual) columns of table t.
> More details: 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-70%3A+Flink+SQL+Computed+Column+Design]





[jira] [Updated] (FLINK-15379) JDBC connector return wrong value if defined dataType contains precision

2019-12-30 Thread Zhenghua Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhenghua Gao updated FLINK-15379:
-
Description: 
A mysql table like:

 
{code:java}
// CREATE TABLE `currency` (
  `currency_id` bigint(20) NOT NULL,
  `currency_name` varchar(200) DEFAULT NULL,
  `rate` double DEFAULT NULL,
  `currency_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
  `country` varchar(100) DEFAULT NULL,
  `timestamp6` timestamp(6) NULL DEFAULT NULL,
  `time6` time(6) DEFAULT NULL,
  `gdp` decimal(10,4) DEFAULT NULL,
  PRIMARY KEY (`currency_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
+-+---+--+-+-++-+--+
| currency_id | currency_name | rate | currency_time   | country | 
timestamp6 | time6   | gdp  |
+-+---+--+-+-++-+--+
|   1 | US Dollar | 1020 | 2019-12-20 17:23:00 | America | 
2019-12-20 12:22:00.023456 | 12:22:00.023456 | 100.0112 |
|   2 | Euro  |  114 | 2019-12-20 12:22:00 | Germany | 
2019-12-20 12:22:00.023456 | 12:22:00.023456 | 100.0112 |
|   3 | RMB   |   16 | 2019-12-20 12:22:00 | China   | 
2019-12-20 12:22:00.023456 | 12:22:00.023456 | 100.0112 |
|   4 | Yen   |1 | 2019-12-20 12:22:00 | Japan   | 
2019-12-20 12:22:00.123456 | 12:22:00.123456 | 100.4112 |
+-+---+--+-+-++-+--+{code}
 

If a user defines a JDBC table as a dimension table like:

 
{code:java}
// 
public static final String mysqlCurrencyDDL = "CREATE TABLE currency (\n" +
"  currency_id BIGINT,\n" +
"  currency_name STRING,\n" +
"  rate DOUBLE,\n" +
"  currency_time TIMESTAMP(3),\n" +
"  country STRING,\n" +
"  timestamp6 TIMESTAMP(6),\n" +
"  time6 TIME(6),\n" +
"  gdp DECIMAL(10, 4)\n" +
") WITH (\n" +
"   'connector.type' = 'jdbc',\n" +
"   'connector.url' = 'jdbc:mysql://localhost:3306/test',\n" +
"   'connector.username' = 'root'," +
"   'connector.table' = 'currency',\n" +
"   'connector.driver' = 'com.mysql.jdbc.Driver',\n" +
"   'connector.lookup.cache.max-rows' = '500', \n" +
"   'connector.lookup.cache.ttl' = '10s',\n" +
"   'connector.lookup.max-retries' = '3'" +
")";
{code}
 

The user will get wrong values in the columns `timestamp6`, `time6`, `gdp`:
{code:java}
// c.currency_id, c.currency_name, c.rate, c.currency_time, c.country, 
c.timestamp6, c.time6, c.gdp 

1,US 
Dollar,1020.0,2019-12-20T17:23,America,2019-12-20T12:22:00.023456,12:22,-0.0001
2,Euro,114.0,2019-12-20T12:22,Germany,2019-12-20T12:22:00.023456,12:22,-0.0001
4,Yen,1.0,2019-12-20T12:22,Japan,2019-12-20T12:22:00.123456,12:22,-0.0001{code}
 

  was:
A mysql table like:

 
{code:java}
// CREATE TABLE `currency` (
  `currency_id` bigint(20) NOT NULL,
  `currency_name` varchar(200) DEFAULT NULL,
  `rate` double DEFAULT NULL,
  `currency_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
  `country` varchar(100) DEFAULT NULL,
  `timestamp6` timestamp(6) NULL DEFAULT NULL,
  `time6` time(6) DEFAULT NULL,
  `gdp` decimal(10,4) DEFAULT NULL,
  PRIMARY KEY (`currency_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
+-+---+--+-+-++-+--+
| currency_id | currency_name | rate | currency_time   | country | 
timestamp6 | time6   | gdp  |
+-+---+--+-+-++-+--+
|   1 | US Dollar | 1020 | 2019-12-20 17:23:00 | America | 
2019-12-20 12:22:00.023456 | 12:22:00.023456 | 100.0112 |
|   2 | Euro  |  114 | 2019-12-20 12:22:00 | Germany | 
2019-12-20 12:22:00.023456 | 12:22:00.023456 | 100.0112 |
|   3 | RMB   |   16 | 2019-12-20 12:22:00 | China   | 
2019-12-20 12:22:00.023456 | 12:22:00.023456 | 100.0112 |
|   4 | Yen   |1 | 2019-12-20 12:22:00 | Japan   | 
2019-12-20 12:22:00.123456 | 12:22:00.123456 | 100.4112 |
+-+---+--+-+-++-+--+{code}
 

If user defined a jdbc table as  dimension table like:

 
{code:java}
// 
public static final String mysqlCurrencyDDL = "CREATE TABLE currency (\n" +
"  currency_id BIGINT,\n" +
"  currency_name STRING,\n" +
"  rate DOUBLE,\n" +
"  currency_time TIMESTAMP(3),\n" +
"  country STRING,\n" +
"  timestamp6 TIMESTAMP(6),\n" +
"  time6 TIME(6),\n" +
"  gdp 

[GitHub] [flink] flinkbot commented on issue #10729: [hotfix][runtime] Cleanup some checkpoint related codes

2019-12-30 Thread GitBox
flinkbot commented on issue #10729: [hotfix][runtime] Cleanup some checkpoint 
related codes
URL: https://github.com/apache/flink/pull/10729#issuecomment-569872279
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit e0673933a498a537f2144e268eaa44d5c98c7f19 (Tue Dec 31 
06:19:16 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] zhijiangW opened a new pull request #10729: [hotfix][runtime] Cleanup some checkpoint related codes

2019-12-30 Thread GitBox
zhijiangW opened a new pull request #10729: [hotfix][runtime] Cleanup some 
checkpoint related codes
URL: https://github.com/apache/flink/pull/10729
 
 
   ## What is the purpose of the change
   
   *Clean up some checkpoint-related code*
   
   ## Brief change log
   
 - *Remove invalid comment from CheckpointedInputGate*
 - *Remove redundant close method from BufferStorage interface*
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   




[jira] [Updated] (FLINK-15442) Harden the Avro Confluent Schema Registry nightly end-to-end test

2019-12-30 Thread Yangze Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yangze Guo updated FLINK-15442:
---
Issue Type: Bug  (was: Test)

> Harden the Avro Confluent Schema Registry nightly end-to-end test
> -
>
> Key: FLINK-15442
> URL: https://issues.apache.org/jira/browse/FLINK-15442
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Reporter: Yangze Guo
>Priority: Critical
> Fix For: 1.10.0
>
>
> We have already hardened the Avro Confluent Schema Registry test in 
> [FLINK-13567|https://issues.apache.org/jira/browse/FLINK-13567]. However, 
> there are still some defects in the current mechanism.
> * The loop variable _i_ is not safe; it could be modified by the *command*.
> * The process of downloading Kafka 0.10 is not included in the scope of 
> retry_times. I think we need to include it to tolerate transient network 
> issues.
> We need to fix those issues to harden the Avro Confluent Schema Registry 
> nightly end-to-end test.
> cc: [~trohrmann] [~chesnay]





[jira] [Commented] (FLINK-15247) Closing (Testing)MiniCluster may cause ConcurrentModificationException

2019-12-30 Thread Congxian Qiu(klion26) (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17005943#comment-17005943
 ] 

Congxian Qiu(klion26) commented on FLINK-15247:
---

another instance [https://travis-ci.com/flink-ci/flink/jobs/271335452]

> Closing (Testing)MiniCluster may cause ConcurrentModificationException
> --
>
> Key: FLINK-15247
> URL: https://issues.apache.org/jira/browse/FLINK-15247
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.10.0
>Reporter: Gary Yao
>Assignee: Andrey Zagrebin
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {noformat}
> Test 
> operatorsBecomeBackPressured(org.apache.flink.test.streaming.runtime.BackPressureITCase)
>  failed with:
> org.apache.flink.util.FlinkException: Could not close resource.
> at 
> org.apache.flink.util.AutoCloseableAsync.close(AutoCloseableAsync.java:42)
> at 
> org.apache.flink.test.streaming.runtime.BackPressureITCase.tearDown(BackPressureITCase.java:165)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at org.junit.runners.Suite.runChild(Suite.java:128)
> at org.junit.runners.Suite.runChild(Suite.java:27)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at 
> org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
> at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
> at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:107)
> at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:83)
> at 
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
> at 
> org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> Caused by: org.apache.flink.util.FlinkException: Error while shutting the 
> TaskExecutor down.
> at 
> org.apache.flink.runtime.taskexecutor.TaskExecutor.handleOnStopException(TaskExecutor.java:397)
> at 
> org.apache.flink.runtime.taskexecutor.TaskExecutor.lambda$onStop$0(TaskExecutor.java:382)
> at 
> java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822)
> at 
> 

[GitHub] [flink] flinkbot commented on issue #10728: [FLINK-15437][yarn] Apply dynamic properties early on client side.

2019-12-30 Thread GitBox
flinkbot commented on issue #10728: [FLINK-15437][yarn] Apply dynamic 
properties early on client side.
URL: https://github.com/apache/flink/pull/10728#issuecomment-569870948
 
 
   
   ## CI report:
   
   * 4bae53183e1268380abdb6d6ad1f9c8b48b32d83 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10727: [FLINK-15420][table-planner-blink] Cast string to timestamp will loos…

2019-12-30 Thread GitBox
flinkbot edited a comment on issue #10727: [FLINK-15420][table-planner-blink] 
Cast string to timestamp will loos…
URL: https://github.com/apache/flink/pull/10727#issuecomment-569861765
 
 
   
   ## CI report:
   
   * 9bbb2830a6e6e185ae6a9d4a8d3e2b99c7648d9c Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142717413) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3998)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10720: [FLINK-15428][e2e] Fix the error command for stopping kafka cluster and exclude kafka 1.10 related test under JDK profile

2019-12-30 Thread GitBox
flinkbot edited a comment on issue #10720: [FLINK-15428][e2e] Fix the error 
command for stopping kafka cluster and exclude kafka 1.10 related test under 
JDK profile
URL: https://github.com/apache/flink/pull/10720#issuecomment-569611106
 
 
   
   ## CI report:
   
   * 4c2fa97968a87d2db180d51eac1d6169cb851137 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142622684) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3978)
 
   * a526d5c351382445b26ac28e5fad85cf12697cbc Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142624580) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3979)
 
   * bbb3dc2587b5aae17bc50588d19ca74d7def3e1f Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142641831) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3984)
 
   * a91b3928e4491925348045a37b90b6497141003d Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142719517) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3999)
 
   * 4fbb807550fa9836246bee589d1ef371f554028c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10704: [FLINK-15411][table-planner-blink] Fix prune partition on DATE/TIME/TIMESTAMP columns

2019-12-30 Thread GitBox
flinkbot edited a comment on issue #10704: [FLINK-15411][table-planner-blink] 
Fix prune partition on DATE/TIME/TIMESTAMP columns
URL: https://github.com/apache/flink/pull/10704#issuecomment-569239989
 
 
   
   ## CI report:
   
   * de210eacfb754ef4d169bbfb50877d3e03e8c792 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142444279) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3954)
 
   * 749a0addc128db847dcc13b4494148474b50bee2 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142609801) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3970)
 
   * 30cee33e9aa2601b3266871a7f30dda41f8dc0a4 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10693: [FLINK-15334][table sql / api] Fix physical schema mapping in TableFormatFactoryBase to support define orderless computed column

2019-12-30 Thread GitBox
flinkbot edited a comment on issue #10693: [FLINK-15334][table sql / api] Fix 
physical schema mapping in TableFormatFactoryBase to support define orderless 
computed column
URL: https://github.com/apache/flink/pull/10693#issuecomment-568967236
 
 
   
   ## CI report:
   
   * a6b006a4d5fd8d8398d65f170d89e3fcda2f2105 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142348347) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3923)
 
   * 57edd55c4b44f33ebdda3082ed36d1fd62c2d2ae Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142717407) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3997)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] klion26 commented on issue #10726: [FLINK-15427][Statebackend][test] Check TTL test in test_stream_statettl.sh and skip the exception check

2019-12-30 Thread GitBox
klion26 commented on issue #10726: [FLINK-15427][Statebackend][test] Check TTL 
test in test_stream_statettl.sh and skip the exception check
URL: https://github.com/apache/flink/pull/10726#issuecomment-569870438
 
 
   @flinkbot run travis
   @flinkbot run azure




[GitHub] [flink] KarmaGYZ commented on issue #10720: [FLINK-15428][e2e] Fix the error command for stopping kafka cluster and exclude kafka 1.10 related test under JDK profile

2019-12-30 Thread GitBox
KarmaGYZ commented on issue #10720: [FLINK-15428][e2e] Fix the error command 
for stopping kafka cluster and exclude kafka 1.10 related test under JDK profile
URL: https://github.com/apache/flink/pull/10720#issuecomment-569868952
 
 
   Travis gives the green light to the relevant tests.
   https://travis-ci.org/KarmaGYZ/flink/builds/631191277




[GitHub] [flink] wuchong commented on a change in pull request #10704: [FLINK-15411][table-planner-blink] Fix prune partition on DATE/TIME/TIMESTAMP columns

2019-12-30 Thread GitBox
wuchong commented on a change in pull request #10704: 
[FLINK-15411][table-planner-blink] Fix prune partition on DATE/TIME/TIMESTAMP 
columns
URL: https://github.com/apache/flink/pull/10704#discussion_r362147386
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/plan/utils/RexNodeExtractorTest.scala
 ##
 @@ -820,8 +820,8 @@ class RexNodeExtractorTest extends RexNodeTestBase {
 rexBuilder,
 Array("date")
   )
-assertTrue(partitionPredicate1.isAlwaysTrue)
-assertEquals(c3, nonPartitionPredicate1)
+assertEquals(c2, partitionPredicate1)
+assertEquals(c1, nonPartitionPredicate1)
 
 // date is not supported
 
 Review comment:
   Remove this comment?




[GitHub] [flink] wuchong commented on a change in pull request #10704: [FLINK-15411][table-planner-blink] Fix prune partition on DATE/TIME/TIMESTAMP columns

2019-12-30 Thread GitBox
wuchong commented on a change in pull request #10704: 
[FLINK-15411][table-planner-blink] Fix prune partition on DATE/TIME/TIMESTAMP 
columns
URL: https://github.com/apache/flink/pull/10704#discussion_r362152117
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/expressions/TemporalTypesTest.scala
 ##
 @@ -783,6 +783,10 @@ class TemporalTypesTest extends ExpressionTestBase {
 testSqlApi(timestampTz("2018-03-14 19:01:02.123"), "2018-03-14 
19:01:02.123")
 testSqlApi(timestampTz("2018-03-14 19:00:00.010"), "2018-03-14 
19:00:00.01")
 
+testSqlApi(
+  timestampTz("2018-03-14 19:00:00.010") + " > " + timestampTz("2018-03-14 
19:00:00.012"),
 
 Review comment:
   It seems that this test doesn't reproduce the bug. Could you replace one of 
the constants with a field reference? 




[GitHub] [flink] flinkbot edited a comment on issue #10720: [FLINK-15428][e2e] Fix the error command for stopping kafka cluster and exclude kafka 1.10 related test under JDK profile

2019-12-30 Thread GitBox
flinkbot edited a comment on issue #10720: [FLINK-15428][e2e] Fix the error 
command for stopping kafka cluster and exclude kafka 1.10 related test under 
JDK profile
URL: https://github.com/apache/flink/pull/10720#issuecomment-569611106
 
 
   
   ## CI report:
   
   * 4c2fa97968a87d2db180d51eac1d6169cb851137 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142622684) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3978)
 
   * a526d5c351382445b26ac28e5fad85cf12697cbc Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142624580) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3979)
 
   * bbb3dc2587b5aae17bc50588d19ca74d7def3e1f Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142641831) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3984)
 
   * a91b3928e4491925348045a37b90b6497141003d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10727: [FLINK-15420][table-planner-blink] Cast string to timestamp will loos…

2019-12-30 Thread GitBox
flinkbot edited a comment on issue #10727: [FLINK-15420][table-planner-blink] 
Cast string to timestamp will loos…
URL: https://github.com/apache/flink/pull/10727#issuecomment-569861765
 
 
   
   ## CI report:
   
   * 9bbb2830a6e6e185ae6a9d4a8d3e2b99c7648d9c Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142717413) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3998)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10722: [FLINK-15421][table-planner-blink] Fix TimestampMaxAggFunction/Timest…

2019-12-30 Thread GitBox
flinkbot edited a comment on issue #10722: [FLINK-15421][table-planner-blink] 
Fix TimestampMaxAggFunction/Timest…
URL: https://github.com/apache/flink/pull/10722#issuecomment-569636753
 
 
   
   ## CI report:
   
   * d940052615fe001bb881b0e4ba4fb6e6423ef7ec Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142631823) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3981)
 
   * 897dc8ace2189c22dc4dc2f312d552c83b724626 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142716227) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3996)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Updated] (FLINK-15442) Harden the Avro Confluent Schema Registry nightly end-to-end test

2019-12-30 Thread Yangze Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yangze Guo updated FLINK-15442:
---
Description: 
We have already hardened the Avro Confluent Schema Registry test in 
[FLINK-13567|https://issues.apache.org/jira/browse/FLINK-13567]. However, there 
are still some defects in the current mechanism.
* The loop variable _i_ is not safe; it could be modified by the *command*.
* The process of downloading Kafka 0.10 is not included in the scope of 
retry_times. I think we need to include it to tolerate transient network issues.

We need to fix those issues to harden the Avro Confluent Schema Registry nightly 
end-to-end test.

cc: [~trohrmann] [~chesnay]

  was:
We have already hardened the Avro Confluent Schema Registry test in 
[FLINK-13567|https://issues.apache.org/jira/browse/FLINK-13567]. However, there 
are still some defects in the current mechanism.
* The .sh suffix is missing at the end of ./bin/kafka-server-stop. This causes 
the cleanup command to fail and produces the error log in 
[FLINK-15428|https://issues.apache.org/jira/browse/FLINK-15428].
* The loop variable _i_ is not safe; it could be modified by the *command*.
* The process of downloading Kafka 0.10 is not included in the scope of 
retry_times. I think we need to include it to tolerate transient network issues.

We need to fix those issues to harden the Avro Confluent Schema Registry nightly 
end-to-end test.

cc: [~trohrmann] [~chesnay]


> Harden the Avro Confluent Schema Registry nightly end-to-end test
> -
>
> Key: FLINK-15442
> URL: https://issues.apache.org/jira/browse/FLINK-15442
> Project: Flink
>  Issue Type: Test
>  Components: Tests
>Reporter: Yangze Guo
>Priority: Critical
> Fix For: 1.10.0
>
>
> We have already hardened the Avro Confluent Schema Registry test in 
> [FLINK-13567|https://issues.apache.org/jira/browse/FLINK-13567]. However, 
> there are still some defects in the current mechanism.
> * The loop variable _i_ is not safe; it could be modified by the *command*.
> * The process of downloading Kafka 0.10 is not included in the scope of 
> retry_times. I think we need to include it to tolerate transient network 
> issues.
> We need to fix those issues to harden the Avro Confluent Schema Registry 
> nightly end-to-end test.
> cc: [~trohrmann] [~chesnay]





[jira] [Assigned] (FLINK-14802) Multi vectorized read version support for hive orc read

2019-12-30 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-14802:
---

Assignee: Jingsong Lee

> Multi vectorized read version support for hive orc read
> ---
>
> Key: FLINK-14802
> URL: https://issues.apache.org/jira/browse/FLINK-14802
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive, Connectors / ORC
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>
> The vectorization API of Hive 1.x is totally different from Hive 2+,
> so we need to invest more effort to support vectorized reads for Hive 1.x.





[GitHub] [flink] xintongsong commented on issue #10728: [FLINK-15437][yarn] Apply dynamic properties early on client side.

2019-12-30 Thread GitBox
xintongsong commented on issue #10728: [FLINK-15437][yarn] Apply dynamic 
properties early on client side.
URL: https://github.com/apache/flink/pull/10728#issuecomment-569865874
 
 
   cc @wangyang0918 @TisonKun @kl0u 




[GitHub] [flink] flinkbot commented on issue #10728: [FLINK-15437][yarn] Apply dynamic properties early on client side.

2019-12-30 Thread GitBox
flinkbot commented on issue #10728: [FLINK-15437][yarn] Apply dynamic 
properties early on client side.
URL: https://github.com/apache/flink/pull/10728#issuecomment-569865897
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 322a65907fc3ee936a1ceca930e5427b235e6dc4 (Tue Dec 31 
05:18:13 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-15437) Start session with property of "-Dtaskmanager.memory.process.size" not work

2019-12-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15437:
---
Labels: pull-request-available  (was: )

> Start session with property of "-Dtaskmanager.memory.process.size" not work
> ---
>
> Key: FLINK-15437
> URL: https://issues.apache.org/jira/browse/FLINK-15437
> Project: Flink
>  Issue Type: Bug
>  Components: Command Line Client, Deployment / YARN
>Affects Versions: 1.10.0
>Reporter: xiaojin.wy
>Assignee: Xintong Song
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>
> *The environment:*
> The Yarn session cmd is as below, and the flink-conf.yaml does not have the 
> property "taskmanager.memory.process.size":
> export HADOOP_CLASSPATH=`hadoop classpath`;export 
> HADOOP_CONF_DIR=/dump/1/jenkins/workspace/Stream-Spark-3.4/env/hadoop_conf_dirs/blinktest2;
>  export BLINK_HOME=/dump/1/jenkins/workspace/test/blink_daily; 
> $BLINK_HOME/bin/yarn-session.sh -d -qu root.default -nm 'Session Cluster of 
> daily_regression_stream_spark_1.10' -jm 1024 -n 20 -s 10 
> -Dtaskmanager.memory.process.size=1024m
> *After executing the cmd above, there is an exception like this:*
> 2019-12-30 17:54:57,992 INFO  org.apache.hadoop.yarn.client.RMProxy   
>   - Connecting to ResourceManager at 
> z05c07224.sqa.zth.tbsite.net/11.163.188.36:8050
> 2019-12-30 17:54:58,182 ERROR org.apache.flink.yarn.cli.FlinkYarnSessionCli   
>   - Error while running the Flink session.
> org.apache.flink.configuration.IllegalConfigurationException: Either Task 
> Heap Memory size (taskmanager.memory.task.heap.size) and Managed Memory size 
> (taskmanager.memory.managed.size), or Total Flink Memory size 
> (taskmanager.memory.flink.size), or Total Process Memory size 
> (taskmanager.memory.process.size) need to be configured explicitly.
>   at 
> org.apache.flink.runtime.clusterframework.TaskExecutorResourceUtils.resourceSpecFromConfig(TaskExecutorResourceUtils.java:145)
>   at 
> org.apache.flink.client.deployment.AbstractClusterClientFactory.getClusterSpecification(AbstractClusterClientFactory.java:44)
>   at 
> org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:557)
>   at 
> org.apache.flink.yarn.cli.FlinkYarnSessionCli.lambda$main$5(FlinkYarnSessionCli.java:803)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1804)
>   at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at 
> org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:803)
> 
>  The program finished with the following exception:
> org.apache.flink.configuration.IllegalConfigurationException: Either Task 
> Heap Memory size (taskmanager.memory.task.heap.size) and Managed Memory size 
> (taskmanager.memory.managed.size), or Total Flink Memory size 
> (taskmanager.memory.flink.size), or Total Process Memory size 
> (taskmanager.memory.process.size) need to be configured explicitly.
>   at 
> org.apache.flink.runtime.clusterframework.TaskExecutorResourceUtils.resourceSpecFromConfig(TaskExecutorResourceUtils.java:145)
>   at 
> org.apache.flink.client.deployment.AbstractClusterClientFactory.getClusterSpecification(AbstractClusterClientFactory.java:44)
>   at 
> org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:557)
>   at 
> org.apache.flink.yarn.cli.FlinkYarnSessionCli.lambda$main$5(FlinkYarnSessionCli.java:803)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1804)
>   at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at 
> org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:803)
> *The flink-conf.yaml is :*
> jobmanager.rpc.address: localhost
> jobmanager.rpc.port: 6123
> jobmanager.heap.size: 1024m
> taskmanager.memory.total-process.size: 1024m
> taskmanager.numberOfTaskSlots: 1
> parallelism.default: 1
> jobmanager.execution.failover-strategy: region
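
For context, the check the exception describes can be sketched in a few lines. 
The class and method names below are hypothetical, not the actual 
TaskExecutorResourceUtils code; note that the quoted flink-conf.yaml uses the 
key `taskmanager.memory.total-process.size`, which is not one of the three 
recognized options, so none of them counts as explicitly configured:

```java
import org.apache.flink.configuration.Configuration;

final class MemoryConfigCheckSketch {

    // True if at least one of the three accepted memory settings is configured
    // explicitly, mirroring the requirement stated in the exception above.
    static boolean hasExplicitMemoryConfig(Configuration conf) {
        boolean taskHeapAndManaged =
                conf.containsKey("taskmanager.memory.task.heap.size")
                        && conf.containsKey("taskmanager.memory.managed.size");
        boolean totalFlink = conf.containsKey("taskmanager.memory.flink.size");
        boolean totalProcess = conf.containsKey("taskmanager.memory.process.size");
        return taskHeapAndManaged || totalFlink || totalProcess;
    }
}
```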





[GitHub] [flink] xintongsong opened a new pull request #10728: [FLINK-15437][yarn] Apply dynamic properties early on client side.

2019-12-30 Thread GitBox
xintongsong opened a new pull request #10728: [FLINK-15437][yarn] Apply dynamic 
properties early on client side.
URL: https://github.com/apache/flink/pull/10728
 
 
   ## What is the purpose of the change
   
   This PR applies dynamic properties early on the client side, to make sure 
the client uses the correct configuration values set via dynamic properties.
   
   ## Brief change log
   
   - 475b6421a1d94a5073e177d217311d9f195307cd: Write yarn properties file 
without `YarnClusterDescriptor`.
   - 322a65907fc3ee936a1ceca930e5427b235e6dc4: Apply dynamic properties early 
on client side.
   
   ## Verifying this change
   
   - Updated `FlinkYarnSessionCliTest#testDynamicProperties`
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
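   
   To illustrate the intent, a minimal sketch of overlaying parsed `-D` dynamic 
properties onto the client-side configuration before the cluster specification 
is derived could look as follows. The class and method names are made up for 
illustration and are not the PR's actual code:
   
```java
import java.util.Properties;

import org.apache.flink.configuration.Configuration;

final class DynamicPropertiesSketch {

    // Overlays parsed -Dkey=value pairs onto the loaded configuration so that
    // all later client-side steps (e.g. deriving the cluster specification)
    // already see the user-supplied values.
    static Configuration applyDynamicProperties(Configuration loaded, Properties dynamicProperties) {
        Configuration effective = new Configuration(loaded);
        for (String key : dynamicProperties.stringPropertyNames()) {
            effective.setString(key, dynamicProperties.getProperty(key));
        }
        return effective;
    }
}
```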
   




[jira] [Commented] (FLINK-15428) Avro Confluent Schema Registry nightly end-to-end test fails on travis

2019-12-30 Thread Yangze Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17005932#comment-17005932
 ] 

Yangze Guo commented on FLINK-15428:


To avoid blocking the release progress, I opened 
[FLINK-15442|https://issues.apache.org/jira/browse/FLINK-15442] to address some 
lower-level issues related to this test.

I'll only fix the root cause of the failure under this ticket.

> Avro Confluent Schema Registry nightly end-to-end test fails on travis
> --
>
> Key: FLINK-15428
> URL: https://issues.apache.org/jira/browse/FLINK-15428
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.10.0
>Reporter: Yu Li
>Assignee: Yangze Guo
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Avro Confluent Schema Registry nightly end-to-end test fails with below error:
> {code}
> Could not start confluent schema registry
> /home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/kafka-common.sh:
>  line 78: ./bin/kafka-server-stop: No such file or directory
> No zookeeper server to stop
> Tried to kill 1549 but never saw it die
> [FAIL] Test script contains errors.
> {code}
> https://api.travis-ci.org/v3/job/629699437/log.txt





[jira] [Commented] (FLINK-15442) Harden the Avro Confluent Schema Registry nightly end-to-end test

2019-12-30 Thread Yangze Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17005931#comment-17005931
 ] 

Yangze Guo commented on FLINK-15442:


Could someone kindly assign this to me?

> Harden the Avro Confluent Schema Registry nightly end-to-end test
> -
>
> Key: FLINK-15442
> URL: https://issues.apache.org/jira/browse/FLINK-15442
> Project: Flink
>  Issue Type: Test
>  Components: Tests
>Reporter: Yangze Guo
>Priority: Critical
> Fix For: 1.10.0
>
>
> We have already hardened the Avro Confluent Schema Registry test in 
> [FLINK-13567|https://issues.apache.org/jira/browse/FLINK-13567]. However, 
> there are still some defects in the current mechanism.
> * The .sh suffix is missing at the end of ./bin/kafka-server-stop. This 
> causes the cleanup command to fail and produces the error log in 
> [FLINK-15428|https://issues.apache.org/jira/browse/FLINK-15428].
> * The loop variable _i_ is not safe; it could be modified by the *command*.
> * The process of downloading Kafka 0.10 is not included in the scope of 
> retry_times. I think we need to include it to tolerate transient network 
> issues.
> We need to fix those issues to harden the Avro Confluent Schema Registry 
> nightly end-to-end test.
> cc: [~trohrmann] [~chesnay]





[GitHub] [flink] flinkbot edited a comment on issue #10722: [FLINK-15421][table-planner-blink] Fix TimestampMaxAggFunction/Timest…

2019-12-30 Thread GitBox
flinkbot edited a comment on issue #10722: [FLINK-15421][table-planner-blink] 
Fix TimestampMaxAggFunction/Timest…
URL: https://github.com/apache/flink/pull/10722#issuecomment-569636753
 
 
   
   ## CI report:
   
   * d940052615fe001bb881b0e4ba4fb6e6423ef7ec Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142631823) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3981)
 
   * 897dc8ace2189c22dc4dc2f312d552c83b724626 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142716227) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3996)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Created] (FLINK-15442) Harden the Avro Confluent Schema Registry nightly end-to-end test

2019-12-30 Thread Yangze Guo (Jira)
Yangze Guo created FLINK-15442:
--

 Summary: Harden the Avro Confluent Schema Registry nightly 
end-to-end test
 Key: FLINK-15442
 URL: https://issues.apache.org/jira/browse/FLINK-15442
 Project: Flink
  Issue Type: Test
  Components: Tests
Reporter: Yangze Guo
 Fix For: 1.10.0


We have already hardened the Avro Confluent Schema Registry test in 
[FLINK-13567|https://issues.apache.org/jira/browse/FLINK-13567]. However, there 
are still some defects in the current mechanism.
* The .sh suffix is missing at the end of ./bin/kafka-server-stop. This causes 
the cleanup command to fail and produces the error log in 
[FLINK-15428|https://issues.apache.org/jira/browse/FLINK-15428].
* The loop variable _i_ is not safe; it could be modified by the *command*.
* The process of downloading Kafka 0.10 is not included in the scope of 
retry_times. I think we need to include it to tolerate transient network issues.

We need to fix those issues to harden the Avro Confluent Schema Registry nightly 
end-to-end test.

cc: [~trohrmann] [~chesnay]





[GitHub] [flink] flinkbot edited a comment on issue #10727: [FLINK-15420][table-planner-blink] Cast string to timestamp will loos…

2019-12-30 Thread GitBox
flinkbot edited a comment on issue #10727: [FLINK-15420][table-planner-blink] 
Cast string to timestamp will loos…
URL: https://github.com/apache/flink/pull/10727#issuecomment-569861765
 
 
   
   ## CI report:
   
   * 9bbb2830a6e6e185ae6a9d4a8d3e2b99c7648d9c Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142717413) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3998)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] xuefuz commented on a change in pull request #10721: [FLINK-15429][hive] HiveObjectConversion implementations need to hand…

2019-12-30 Thread GitBox
xuefuz commented on a change in pull request #10721: [FLINK-15429][hive] 
HiveObjectConversion implementations need to hand…
URL: https://github.com/apache/flink/pull/10721#discussion_r362148281
 
 

 ##
 File path: 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/client/HiveShimV100.java
 ##
 @@ -354,6 +354,9 @@ public CatalogColumnStatisticsDataDate 
toFlinkDateColStats(ColumnStatisticsData
 
@Override
public Object toHiveTimestamp(Object flinkTimestamp) {
+   if (flinkTimestamp == null) {
 
 Review comment:
   Sounds reasonable to me to do the check here.
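   
   For readers following along, the null-safe pattern under discussion can be 
sketched in a self-contained form. The helper below is illustrative only (the 
real shim works on `Object` and may validate the input type), assuming the 
Flink side hands over a `LocalDateTime`:
   
```java
import java.sql.Timestamp;
import java.time.LocalDateTime;

final class TimestampConversionSketch {

    // Converts a Flink LocalDateTime to the java.sql.Timestamp Hive expects,
    // passing nulls through instead of failing on them.
    static Timestamp toHiveTimestamp(LocalDateTime flinkTimestamp) {
        if (flinkTimestamp == null) {
            return null; // a NULL column value must stay NULL after conversion
        }
        return Timestamp.valueOf(flinkTimestamp);
    }
}
```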




[GitHub] [flink] flinkbot edited a comment on issue #10693: [FLINK-15334][table sql / api] Fix physical schema mapping in TableFormatFactoryBase to support define orderless computed column

2019-12-30 Thread GitBox
flinkbot edited a comment on issue #10693: [FLINK-15334][table sql / api] Fix 
physical schema mapping in TableFormatFactoryBase to support define orderless 
computed column
URL: https://github.com/apache/flink/pull/10693#issuecomment-568967236
 
 
   
   ## CI report:
   
   * a6b006a4d5fd8d8398d65f170d89e3fcda2f2105 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142348347) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3923)
 
   * 57edd55c4b44f33ebdda3082ed36d1fd62c2d2ae Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142717407) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3997)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] xuefuz commented on a change in pull request #10625: [FLINK-15259][hive] HiveInspector.toInspectors() should convert Flink…

2019-12-30 Thread GitBox
xuefuz commented on a change in pull request #10625: [FLINK-15259][hive] 
HiveInspector.toInspectors() should convert Flink…
URL: https://github.com/apache/flink/pull/10625#discussion_r362147570
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/GenerateUtils.scala
 ##
 @@ -375,7 +375,7 @@ object GenerateUtils {
  |  $SQL_TIMESTAMP.fromEpochMillis(${ts.getMillisecond}L, 
${ts.getNanoOfMillisecond});
""".stripMargin
 ctx.addReusableMember(fieldTimestamp)
-generateNonNullLiteral(literalType, fieldTerm, literalType)
+generateNonNullLiteral(literalType, fieldTerm, ts)
 
 Review comment:
   What's this change for? 




[GitHub] [flink] xuefuz commented on a change in pull request #10625: [FLINK-15259][hive] HiveInspector.toInspectors() should convert Flink…

2019-12-30 Thread GitBox
xuefuz commented on a change in pull request #10625: [FLINK-15259][hive] 
HiveInspector.toInspectors() should convert Flink…
URL: https://github.com/apache/flink/pull/10625#discussion_r362146373
 
 

 ##
 File path: 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/client/HiveShim.java
 ##
 @@ -232,4 +233,10 @@ SimpleGenericUDAFParameterInfo 
createUDAFParameterInfo(ObjectInspector[] params,
 * Converts a hive date instance to LocalDate which is expected by 
DataFormatConverter.
 */
LocalDate toFlinkDate(Object hiveDate);
+
+   /**
+* Converts a Hive primitive java object to corresponding Writable 
object. Throws CatalogException if the conversion
+* is not supported.
+*/
+   Writable hivePrimitiveToWritable(Object value) throws CatalogException;
 
 Review comment:
   Throwing a catalog ex doesn't seem very intuitive, even though CatalogEx is 
also a runtime exception. Maybe just remove CatalogEx from the signature and 
throw a runtime exception when a problem occurs.




[GitHub] [flink] flinkbot edited a comment on issue #10722: [FLINK-15421][table-planner-blink] Fix TimestampMaxAggFunction/Timest…

2019-12-30 Thread GitBox
flinkbot edited a comment on issue #10722: [FLINK-15421][table-planner-blink] 
Fix TimestampMaxAggFunction/Timest…
URL: https://github.com/apache/flink/pull/10722#issuecomment-569636753
 
 
   
   ## CI report:
   
   * d940052615fe001bb881b0e4ba4fb6e6423ef7ec Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142631823) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3981)
 
   * 897dc8ace2189c22dc4dc2f312d552c83b724626 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142716227) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3996)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot commented on issue #10727: [FLINK-15420][table-planner-blink] Cast string to timestamp will loos…

2019-12-30 Thread GitBox
flinkbot commented on issue #10727: [FLINK-15420][table-planner-blink] Cast 
string to timestamp will loos…
URL: https://github.com/apache/flink/pull/10727#issuecomment-569861765
 
 
   
   ## CI report:
   
   * 9bbb2830a6e6e185ae6a9d4a8d3e2b99c7648d9c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10713: [FLINK-15431][CEP] Add numLateRecordsDropped/lateRecordsDroppedRate/watermarkLatency in CepOperator

2019-12-30 Thread GitBox
flinkbot edited a comment on issue #10713: [FLINK-15431][CEP] Add 
numLateRecordsDropped/lateRecordsDroppedRate/watermarkLatency in CepOperator
URL: https://github.com/apache/flink/pull/10713#issuecomment-569414138
 
 
   
   ## CI report:
   
   * 51c9f2e1d9ceb65ac6c2ea8ade81e98647d69068 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142523058) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3966)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10693: [FLINK-15334][table sql / api] Fix physical schema mapping in TableFormatFactoryBase to support define orderless computed column

2019-12-30 Thread GitBox
flinkbot edited a comment on issue #10693: [FLINK-15334][table sql / api] Fix 
physical schema mapping in TableFormatFactoryBase to support define orderless 
computed column
URL: https://github.com/apache/flink/pull/10693#issuecomment-568967236
 
 
   
   ## CI report:
   
   * a6b006a4d5fd8d8398d65f170d89e3fcda2f2105 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/142348347) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3923)
 
   * 57edd55c4b44f33ebdda3082ed36d1fd62c2d2ae UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] wuchong commented on a change in pull request #10693: [FLINK-15334][table sql / api] Fix physical schema mapping in TableFormatFactoryBase to support define orderless computed column

2019-12-30 Thread GitBox
wuchong commented on a change in pull request #10693: [FLINK-15334][table sql / 
api] Fix physical schema mapping in TableFormatFactoryBase to support define 
orderless computed column
URL: https://github.com/apache/flink/pull/10693#discussion_r362146236
 
 

 ##
 File path: 
flink-table/flink-table-api-java-bridge/src/main/java/org/apache/flink/table/descriptors/SchemaValidator.java
 ##
 @@ -212,12 +212,16 @@ else if (proctimeFound) {
@Deprecated
public static TableSchema deriveTableSinkSchema(DescriptorProperties 
properties) {
TableSchema.Builder builder = TableSchema.builder();
-
-   TableSchema schema = 
TableSchemaUtils.getPhysicalSchema(properties.getTableSchema(SCHEMA));
-
-   for (int i = 0; i < schema.getFieldCount(); i++) {
-   TypeInformation t = schema.getFieldTypes()[i];
-   String n = schema.getFieldNames()[i];
+   TableSchema tableSchema = properties.getTableSchema(SCHEMA);
+   for (int i = 0; i < tableSchema.getFieldCount(); i++) {
+   TypeInformation t = tableSchema.getFieldTypes()[i];
+   String n = tableSchema.getFieldNames()[i];
+   Optional tableColumn = 
tableSchema.getTableColumn(n);
+   boolean isGeneratedColumn = tableColumn.isPresent() && 
tableColumn.get().isGenerated();
 
 Review comment:
   Improve the code a bit:
   
   ```java
   final TableColumn tableColumn = tableSchema.getTableColumns().get(i);
   final String fieldName = tableColumn.getName();
   final DataType fieldType = tableColumn.getType();
   final boolean isGeneratedColumn = tableColumn.isGenerated();
   ```
   We do know it's safe to call `tableSchema.getTableColumns().get(i)`.




[jira] [Updated] (FLINK-15437) Start session with property of "-Dtaskmanager.memory.process.size" not work

2019-12-30 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-15437:
-
Component/s: (was: API / Core)
 Deployment / YARN
 Command Line Client

> Start session with property of "-Dtaskmanager.memory.process.size" not work
> ---
>
> Key: FLINK-15437
> URL: https://issues.apache.org/jira/browse/FLINK-15437
> Project: Flink
>  Issue Type: Bug
>  Components: Command Line Client, Deployment / YARN
>Affects Versions: 1.10.0
>Reporter: xiaojin.wy
>Assignee: Xintong Song
>Priority: Critical
> Fix For: 1.10.0
>
>
> *The environment:*
> The Yarn session cmd is as below, and the flink-conf.yaml does not have the 
> property "taskmanager.memory.process.size":
> export HADOOP_CLASSPATH=`hadoop classpath`;export 
> HADOOP_CONF_DIR=/dump/1/jenkins/workspace/Stream-Spark-3.4/env/hadoop_conf_dirs/blinktest2;
>  export BLINK_HOME=/dump/1/jenkins/workspace/test/blink_daily; 
> $BLINK_HOME/bin/yarn-session.sh -d -qu root.default -nm 'Session Cluster of 
> daily_regression_stream_spark_1.10' -jm 1024 -n 20 -s 10 
> -Dtaskmanager.memory.process.size=1024m
> *After executing the cmd above, there is an exception like this:*
> 2019-12-30 17:54:57,992 INFO  org.apache.hadoop.yarn.client.RMProxy   
>   - Connecting to ResourceManager at 
> z05c07224.sqa.zth.tbsite.net/11.163.188.36:8050
> 2019-12-30 17:54:58,182 ERROR org.apache.flink.yarn.cli.FlinkYarnSessionCli   
>   - Error while running the Flink session.
> org.apache.flink.configuration.IllegalConfigurationException: Either Task 
> Heap Memory size (taskmanager.memory.task.heap.size) and Managed Memory size 
> (taskmanager.memory.managed.size), or Total Flink Memory size 
> (taskmanager.memory.flink.size), or Total Process Memory size 
> (taskmanager.memory.process.size) need to be configured explicitly.
>   at 
> org.apache.flink.runtime.clusterframework.TaskExecutorResourceUtils.resourceSpecFromConfig(TaskExecutorResourceUtils.java:145)
>   at 
> org.apache.flink.client.deployment.AbstractClusterClientFactory.getClusterSpecification(AbstractClusterClientFactory.java:44)
>   at 
> org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:557)
>   at 
> org.apache.flink.yarn.cli.FlinkYarnSessionCli.lambda$main$5(FlinkYarnSessionCli.java:803)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1804)
>   at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at 
> org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:803)
> 
>  The program finished with the following exception:
> org.apache.flink.configuration.IllegalConfigurationException: Either Task 
> Heap Memory size (taskmanager.memory.task.heap.size) and Managed Memory size 
> (taskmanager.memory.managed.size), or Total Flink Memory size 
> (taskmanager.memory.flink.size), or Total Process Memory size 
> (taskmanager.memory.process.size) need to be configured explicitly.
>   at 
> org.apache.flink.runtime.clusterframework.TaskExecutorResourceUtils.resourceSpecFromConfig(TaskExecutorResourceUtils.java:145)
>   at 
> org.apache.flink.client.deployment.AbstractClusterClientFactory.getClusterSpecification(AbstractClusterClientFactory.java:44)
>   at 
> org.apache.flink.yarn.cli.FlinkYarnSessionCli.run(FlinkYarnSessionCli.java:557)
>   at 
> org.apache.flink.yarn.cli.FlinkYarnSessionCli.lambda$main$5(FlinkYarnSessionCli.java:803)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1804)
>   at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at 
> org.apache.flink.yarn.cli.FlinkYarnSessionCli.main(FlinkYarnSessionCli.java:803)
> *The flink-conf.yaml is :*
> jobmanager.rpc.address: localhost
> jobmanager.rpc.port: 6123
> jobmanager.heap.size: 1024m
> taskmanager.memory.total-process.size: 1024m
> taskmanager.numberOfTaskSlots: 1
> parallelism.default: 1
> jobmanager.execution.failover-strategy: region





[GitHub] [flink] wuchong commented on a change in pull request #10714: [FLINK-15409]Add semicolon after WindowJoinUtil#generateJoinFunction '$collectorTerm.collect($joinedRow)' statement

2019-12-30 Thread GitBox
wuchong commented on a change in pull request #10714: [FLINK-15409]Add 
semicolon after WindowJoinUtil#generateJoinFunction 
'$collectorTerm.collect($joinedRow)' statement
URL: https://github.com/apache/flink/pull/10714#discussion_r362145291
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/utils/WindowJoinUtil.scala
 ##
 @@ -127,7 +127,7 @@ object WindowJoinUtil {
   case None =>
 s"""
|$buildJoinedRow
-   |$collectorTerm.collect($joinedRow)
+   |$collectorTerm.collect($joinedRow);
 
 Review comment:
   I looked into the code and found that we will never enter this branch under 
the current implementation, because `otherCondition` is never null (it always 
contains the join keys). That means we can't reproduce this problem currently. 
If we want to reproduce this problem, we have to refactor the window join (esp. 
`otherCondition`). However, that is a major piece of work.
   
   My suggestion would be: simply add the semicolon and do not add tests, and 
refactor the window join in follow-up issues. I just created FLINK-15441 to 
track this.
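   
   To see why the single character matters: the planner concatenates these 
string fragments into one generated Java method body and compiles it with 
Janino, so every fragment must be a complete statement. A tiny self-contained 
illustration (the variable names are placeholders, not the planner's real 
terms):
   
```java
final class MissingSemicolonSketch {
    public static void main(String[] args) {
        // Each code-gen fragment is concatenated into one generated method body.
        String buildJoinedRow = "joinedRow.replace(left, right);";
        String collect = "collector.collect(joinedRow)"; // missing ';' -> invalid Java
        String generatedBody = buildJoinedRow + "\n" + collect + "\n";
        // Compiling generatedBody would fail with a syntax error because the
        // second fragment is not a terminated statement; appending ';' fixes it.
        System.out.println(generatedBody);
    }
}
```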




[GitHub] [flink] wuchong commented on a change in pull request #10714: [FLINK-15409]Add semicolon after WindowJoinUtil#generateJoinFunction '$collectorTerm.collect($joinedRow)' statement

2019-12-30 Thread GitBox
wuchong commented on a change in pull request #10714: [FLINK-15409]Add 
semicolon after WindowJoinUtil#generateJoinFunction 
'$collectorTerm.collect($joinedRow)' statement
URL: https://github.com/apache/flink/pull/10714#discussion_r362144456
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/plan/stream/sql/join/WindowJoinTest.scala
 ##
 @@ -418,6 +421,20 @@ class WindowJoinTest extends TableTestBase {
   ">($2, $6)")
   }
 
+  @Test
+  def testJoinFunctionGenerate(): Unit ={
 
 Review comment:
   Please remove this test. `WindowJoinTest` is used to verify plans. Tests for 
code generation should not be put in this class. And IMO, this test doesn't 
reproduce the problem.




[jira] [Created] (FLINK-15441) Refactor StreamExecWindowJoin to extends CommonPhysicalJoin

2019-12-30 Thread Jark Wu (Jira)
Jark Wu created FLINK-15441:
---

 Summary: Refactor StreamExecWindowJoin to extends 
CommonPhysicalJoin
 Key: FLINK-15441
 URL: https://issues.apache.org/jira/browse/FLINK-15441
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Reporter: Jark Wu


Currently, {{StreamExecWindowJoin}} puts the join keys in the 
{{remainCondition}}. This generates redundant conditions for the join keys and 
means that [this code 
branch|https://github.com/apache/flink/blob/master/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/utils/WindowJoinUtil.scala#L128]
 is never reached.

A better design is to have a new {{joinCondition}} which excludes the window 
bounds condition, generate the condition code for the non-equi predicates from 
{{joinCondition}}, and make {{StreamExecWindowJoin}} extend 
{{CommonPhysicalJoin}}. We should also take filtering of null join keys into 
account.





[jira] [Updated] (FLINK-15231) Wrong HeapVector in AbstractHeapVector.createHeapColumn

2019-12-30 Thread Kurt Young (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young updated FLINK-15231:
---
Component/s: (was: Table SQL / Planner)
 Table SQL / Runtime

> Wrong HeapVector in AbstractHeapVector.createHeapColumn
> ---
>
> Key: FLINK-15231
> URL: https://issues.apache.org/jira/browse/FLINK-15231
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.10.0
>Reporter: Zhenghua Gao
>Assignee: Zhenghua Gao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> For TIMESTAMP WITHOUT TIME ZONE/TIMESTAMP WITH LOCAL TIME ZONE/DECIMAL types, 
> AbstractHeapVector.createHeapColumn generates wrong HeapVectors.





[jira] [Closed] (FLINK-15231) Wrong HeapVector in AbstractHeapVector.createHeapColumn

2019-12-30 Thread Kurt Young (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young closed FLINK-15231.
--
Fix Version/s: (was: 1.11.0)
   1.10.0
   Resolution: Fixed

master: 9dc252849966dd21279572afff55dcbdd3f77f35

1.10.0: 62f0303f07120c51317aff171f105ecb0c65a2be

> Wrong HeapVector in AbstractHeapVector.createHeapColumn
> ---
>
> Key: FLINK-15231
> URL: https://issues.apache.org/jira/browse/FLINK-15231
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Zhenghua Gao
>Assignee: Zhenghua Gao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> For TIMESTAMP WITHOUT TIME ZONE/TIMESTAMP WITH LOCAL TIME ZONE/DECIMAL types, 
> AbstractHeapVector.createHeapColumn generates wrong HeapVectors.
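
For illustration, this bug class is a type-to-vector mapping that falls through 
to the wrong vector for some logical types. The sketch below is purely 
illustrative: the vector classes are hypothetical stand-ins, not Flink's actual 
heap vector types, though `LogicalType#getTypeRoot` is the real API:

```java
import org.apache.flink.table.types.logical.LogicalType;

// Stand-in vector types, for illustration only.
interface ColumnVectorSketch {}
final class HeapLongVectorSketch implements ColumnVectorSketch {}
final class HeapTimestampVectorSketch implements ColumnVectorSketch {}
final class HeapDecimalVectorSketch implements ColumnVectorSketch {}

final class HeapColumnFactorySketch {
    // Each logical type must map to its dedicated heap vector; defaulting
    // TIMESTAMP or DECIMAL to a generic vector is the kind of bug fixed here.
    static ColumnVectorSketch createHeapColumn(LogicalType type) {
        switch (type.getTypeRoot()) {
            case TIMESTAMP_WITHOUT_TIME_ZONE:
            case TIMESTAMP_WITH_LOCAL_TIME_ZONE:
                return new HeapTimestampVectorSketch();
            case DECIMAL:
                return new HeapDecimalVectorSketch();
            case BIGINT:
                return new HeapLongVectorSketch();
            default:
                throw new UnsupportedOperationException(type.toString());
        }
    }
}
```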





[GitHub] [flink] flinkbot edited a comment on issue #10726: [BLINK-15427][Statebackend][test] Check TTL test in test_stream_statettl.sh and skip the exception check

2019-12-30 Thread GitBox
flinkbot edited a comment on issue #10726: [BLINK-15427][Statebackend][test] 
Check TTL test in test_stream_statettl.sh and skip the exception check
URL: https://github.com/apache/flink/pull/10726#issuecomment-569852183
 
 
   
   ## CI report:
   
   * 461a27735c3956818ea691074ee7a80bc8c5351b Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142713534) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3995)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10722: [FLINK-15421][table-planner-blink] Fix TimestampMaxAggFunction/Timest…

2019-12-30 Thread GitBox
flinkbot edited a comment on issue #10722: [FLINK-15421][table-planner-blink] 
Fix TimestampMaxAggFunction/Timest…
URL: https://github.com/apache/flink/pull/10722#issuecomment-569636753
 
 
   
   ## CI report:
   
   * d940052615fe001bb881b0e4ba4fb6e6423ef7ec Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/142631823) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3981)
 
   * 897dc8ace2189c22dc4dc2f312d552c83b724626 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #10713: [FLINK-15431][CEP] Add numLateRecordsDropped/lateRecordsDroppedRate/watermarkLatency in CepOperator

2019-12-30 Thread GitBox
flinkbot edited a comment on issue #10713: [FLINK-15431][CEP] Add 
numLateRecordsDropped/lateRecordsDroppedRate/watermarkLatency in CepOperator
URL: https://github.com/apache/flink/pull/10713#issuecomment-569414138
 
 
   
   ## CI report:
   
   * 51c9f2e1d9ceb65ac6c2ea8ade81e98647d69068 Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/142523058) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=3966)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



