[jira] [Created] (FLINK-16823) The function TIMESTAMPDIFF doesn't produce the expected result

2020-03-26 Thread Adam N D DENG (Jira)
Adam N D DENG created FLINK-16823:
-

 Summary: The function TIMESTAMPDIFF doesn't produce the expected 
result
 Key: FLINK-16823
 URL: https://issues.apache.org/jira/browse/FLINK-16823
 Project: Flink
  Issue Type: Bug
Reporter: Adam N D DENG
 Attachments: image-2020-03-27-13-50-51-955.png

For example,

in MySQL the SQL below returns 6, but in Flink the output is 5:

SELECT TIMESTAMPDIFF(MONTH, TIMESTAMP '2019-09-01 00:00:00', TIMESTAMP 
'2020-03-01 00:00:00')
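
Both operands fall on the first of the month at midnight, so a calendar-month 
difference should count 12*(2020-2019) + (3-9) = 6 complete months. A minimal 
java.time sketch (not Flink's implementation) that reproduces the expected value:

{code:java}
import java.time.LocalDateTime;
import java.time.temporal.ChronoUnit;

public class MonthDiffSketch {
	public static void main(String[] args) {
		LocalDateTime from = LocalDateTime.of(2019, 9, 1, 0, 0);
		LocalDateTime to = LocalDateTime.of(2020, 3, 1, 0, 0);
		// ChronoUnit.MONTHS counts complete calendar months between the two
		// timestamps; this prints 6, matching MySQL's TIMESTAMPDIFF.
		System.out.println(ChronoUnit.MONTHS.between(from, to));
	}
}
{code}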

 

!image-2020-03-27-13-50-51-955.png!

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #11538: [FLINK-16813][jdbc] JDBCInputFormat doesn't correctly map Short

2020-03-26 Thread GitBox
flinkbot edited a comment on issue #11538: [FLINK-16813][jdbc]  JDBCInputFormat 
doesn't correctly map Short
URL: https://github.com/apache/flink/pull/11538#issuecomment-604708338
 
 
   
   ## CI report:
   
   * 53dd72eeafb1a1ce4c70e9ba8b886f751e515c22 UNKNOWN
   * 5b06155ad2d2011098e960c66f9b10f42329d4fa Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/155738771) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6719)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #11544: [FLINK-16822] [sql-client] `table.xx` property set from CLI should also be set into SessionState's TableConfig

2020-03-26 Thread GitBox
flinkbot commented on issue #11544: [FLINK-16822] [sql-client] `table.xx` 
property set from CLI should also be set into SessionState's TableConfig
URL: https://github.com/apache/flink/pull/11544#issuecomment-604828941
 
 
   
   ## CI report:
   
   * 961e5d7c690f82967b92b01df5eee5fa75292c77 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11541: [FLINK-15416][network] add task manager netty client retry mechanism

2020-03-26 Thread GitBox
flinkbot edited a comment on issue #11541: [FLINK-15416][network] add task 
manager netty client retry mechanism
URL: https://github.com/apache/flink/pull/11541#issuecomment-604812212
 
 
   
   ## CI report:
   
   * 3bc439b7c92f5e764033133845ac1ff2a2b14b91 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/155738776) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6720)
 
   * 0e228f36cdfc4610efd4da91d5964ffa05202c79 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11504: [FLINK-16767][hive] Failed to read Hive table with RegexSerDe

2020-03-26 Thread GitBox
flinkbot edited a comment on issue #11504: [FLINK-16767][hive] Failed to read 
Hive table with RegexSerDe
URL: https://github.com/apache/flink/pull/11504#issuecomment-603693689
 
 
   
   ## CI report:
   
   * e670931736b229bf5477d19fe2905f458ae236a6 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/155735228) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6714)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #11543: [FLINK-16672][python] Support Counter, Gauge, Meter, Distribution metric type for Python UDF

2020-03-26 Thread GitBox
flinkbot commented on issue #11543: [FLINK-16672][python] Support Counter, 
Gauge, Meter, Distribution metric type for Python UDF
URL: https://github.com/apache/flink/pull/11543#issuecomment-604828903
 
 
   
   ## CI report:
   
   * fd01ad835fabf33de968810666e9b97b494e7c60 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11436: [FLINK-11404][web] add load more feature in exception page

2020-03-26 Thread GitBox
flinkbot edited a comment on issue #11436: [FLINK-11404][web] add load more 
feature in exception page
URL: https://github.com/apache/flink/pull/11436#issuecomment-600498156
 
 
   
   ## CI report:
   
   * 968f55dbffb11b37b7a1a12147c9dcf6374b015b Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/155738739) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6718)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #11544: [FLINK-16822] [sql-client] `table.xx` property set from CLI should also be set into SessionState's TableConfig

2020-03-26 Thread GitBox
flinkbot commented on issue #11544: [FLINK-16822] [sql-client] `table.xx` 
property set from CLI should also be set into SessionState's TableConfig
URL: https://github.com/apache/flink/pull/11544#issuecomment-604826518
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 961e5d7c690f82967b92b01df5eee5fa75292c77 (Fri Mar 27 
05:45:48 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-16822).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] godfreyhe opened a new pull request #11544: [FLINK-16822] [sql-client] `table.xx` property set from CLI should also be set into SessionState's TableConfig

2020-03-26 Thread GitBox
godfreyhe opened a new pull request #11544: [FLINK-16822] [sql-client] 
`table.xx` property set from CLI should also be set into SessionState's 
TableConfig
URL: https://github.com/apache/flink/pull/11544
 
 
   
   
   ## What is the purpose of the change
   
   *The config set by the SET command does not work. The reason is that a 
`table.xx` property set from the CLI is not set into the SessionState's 
TableConfig.*
   
   
   ## Brief change log
   
 - *Update the table config of the SessionState when the setSessionProperty 
method is called* (a minimal sketch of the idea follows below)
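
   A hedged sketch of the idea, not the actual patch: `SessionState` here is a 
stand-in holding only the Configuration that backs the TableConfig, and the 
field names are assumptions for illustration.

   ```java
   import java.util.HashMap;
   import java.util.Map;
   import org.apache.flink.configuration.Configuration;

   class SessionPropertySketch {
       private final Map<String, String> properties = new HashMap<>();
       // stand-in for the Configuration behind the SessionState's TableConfig
       private final Configuration tableConfiguration = new Configuration();

       void setSessionProperty(String key, String value) {
           // existing behavior: the property is only stored with the session
           properties.put(key, value);
           // the fix: also propagate `table.*` properties so the planner sees them
           if (key.startsWith("table.")) {
               tableConfiguration.setString(key, value);
           }
       }
   }
   ```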
   
   
   ## Verifying this change
   
   
   This change added tests and can be verified as follows:
   
 - *Added testSetSessionProperties in LocalExecutorITCase to verify the bug*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / 
don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ **not documented**)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #11543: [FLINK-16672][python] Support Counter, Gauge, Meter, Distribution metric type for Python UDF

2020-03-26 Thread GitBox
flinkbot commented on issue #11543: [FLINK-16672][python] Support Counter, 
Gauge, Meter, Distribution metric type for Python UDF
URL: https://github.com/apache/flink/pull/11543#issuecomment-604826066
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit fd01ad835fabf33de968810666e9b97b494e7c60 (Fri Mar 27 
05:43:58 UTC 2020)
   
   **Warnings:**
* **1 pom.xml file was touched**: Check for build and licensing issues.
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-16822) The config set by SET command does not work

2020-03-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-16822:
---
Labels: pull-request-available  (was: )

> The config set by SET command does not work
> ---
>
> Key: FLINK-16822
> URL: https://issues.apache.org/jira/browse/FLINK-16822
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: godfrey he
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Users can add or change the properties for execution behavior through the SET 
> command in the SQL client CLI, e.g. {{SET execution.parallelism=10}}, {{SET 
> table.optimizer.join-reorder-enabled=true}}. But the {{table.xx}} config 
> can't change the TableEnvironment behavior, because the property set from the 
> CLI is not set into the TableEnvironment's table config.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16672) Support Counter, Gauge, Meter, Distribution metric type for Python UDF

2020-03-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-16672:
---
Labels: pull-request-available  (was: )

> Support Counter, Gauge, Meter, Distribution metric type for Python UDF
> --
>
> Key: FLINK-16672
> URL: https://issues.apache.org/jira/browse/FLINK-16672
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python
>Reporter: Hequn Cheng
>Assignee: Hequn Cheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Support Counter, Gauge, Meter, Distribution metric type for Python UDF



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] hequn8128 opened a new pull request #11543: [FLINK-16672][python] Support Counter, Gauge, Meter, Distribution metric type for Python UDF

2020-03-26 Thread GitBox
hequn8128 opened a new pull request #11543: [FLINK-16672][python] Support 
Counter, Gauge, Meter, Distribution metric type for Python UDF
URL: https://github.com/apache/flink/pull/11543
 
 
   
   ## What is the purpose of the change
   
   This pull request adds support for the Counter, Gauge, Meter and 
Distribution metric types for Python UDFs.
   
   ## Brief change log
   
 - Adds the Counter, Gauge, Meter and Distribution metric interfaces for 
Python.
 - Registers metrics from Python to Java (a Java-side sketch follows below).
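
   A hedged sketch of how these metric types look on Flink's Java metrics API 
(`org.apache.flink.metrics.*`), which the new Python interfaces mirror; the 
registration below is illustrative only and the metric names are placeholders.

   ```java
   import org.apache.flink.metrics.Counter;
   import org.apache.flink.metrics.Gauge;
   import org.apache.flink.metrics.Meter;
   import org.apache.flink.metrics.MeterView;
   import org.apache.flink.metrics.MetricGroup;

   class MetricsSketch {
       void register(MetricGroup group) {
           // Counter: a monotonically increasing count of events
           Counter counter = group.counter("myCounter");
           counter.inc();
           // Gauge: a point-in-time value computed on demand
           group.gauge("myGauge", (Gauge<Integer>) () -> 42);
           // Meter: a rate of events, here derived from the counter
           Meter meter = group.meter("myMeter", new MeterView(counter));
           meter.markEvent();
           // Distribution has no direct Flink counterpart; per the change log
           // it is registered from Python and mapped on the Java side.
       }
   }
   ```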
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
 - Add tests in test_metric.py to verify the metric for Python.
 - Add `FlinkMetricContainerTest` to verify metric parsing and registration 
for Java.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes)
 - If yes, how is the feature documented? (PythonDocs)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11538: [FLINK-16813][jdbc] JDBCInputFormat doesn't correctly map Short

2020-03-26 Thread GitBox
flinkbot edited a comment on issue #11538: [FLINK-16813][jdbc]  JDBCInputFormat 
doesn't correctly map Short
URL: https://github.com/apache/flink/pull/11538#issuecomment-604708338
 
 
   
   ## CI report:
   
   * da455c908da1c388b2f8bbfb82ce378dbbfe959d Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/155726075) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6706)
 
   * 53dd72eeafb1a1ce4c70e9ba8b886f751e515c22 UNKNOWN
   * 5b06155ad2d2011098e960c66f9b10f42329d4fa Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/155738771) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6719)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11524: [FLINK-16803][hive] Need to make sure partition inherit table spec wh…

2020-03-26 Thread GitBox
flinkbot edited a comment on issue #11524: [FLINK-16803][hive] Need to make 
sure partition inherit table spec wh…
URL: https://github.com/apache/flink/pull/11524#issuecomment-604351593
 
 
   
   ## CI report:
   
   * 0430a4524f511b4d070cd79666f0fb58d26eb981 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/155737886) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6717)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11498: [FLINK-16741][WEB] add tm log list & tm log detail page

2020-03-26 Thread GitBox
flinkbot edited a comment on issue #11498: [FLINK-16741][WEB] add tm log list & 
tm log detail page
URL: https://github.com/apache/flink/pull/11498#issuecomment-603142183
 
 
   
   ## CI report:
   
   * e9da7e73bf36159aafa48bd229827bdd5d52b944 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/155737879) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6716)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-16822) The config set by SET command does not work

2020-03-26 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he updated FLINK-16822:
---
Description: Users can add or change the properties for execution behavior 
through the SET command in the SQL client CLI, e.g. {{SET 
execution.parallelism=10}}, {{SET table.optimizer.join-reorder-enabled=true}}. 
But the {{table.xx}} config can't change the TableEnvironment behavior, because 
the property set from the CLI is not set into the TableEnvironment's table 
config.  (was: Users can add or change the properties for execution behavior 
through the SET command in the SQL client, e.g. {{SET 
execution.parallelism=10}}, {{SET table.optimizer.join-reorder-enabled=true}}. 
But the {{table.xx}} config can't change the TableEnvironment behavior, because 
the property set from the CLI is not set into the TableEnvironment's table 
config.)

> The config set by SET command does not work
> ---
>
> Key: FLINK-16822
> URL: https://issues.apache.org/jira/browse/FLINK-16822
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: godfrey he
>Priority: Major
> Fix For: 1.11.0
>
>
> Users can add or change the properties for execution behavior through the SET 
> command in the SQL client CLI, e.g. {{SET execution.parallelism=10}}, {{SET 
> table.optimizer.join-reorder-enabled=true}}. But the {{table.xx}} config 
> can't change the TableEnvironment behavior, because the property set from the 
> CLI is not set into the TableEnvironment's table config.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16822) The config set by SET command does not work

2020-03-26 Thread godfrey he (Jira)
godfrey he created FLINK-16822:
--

 Summary: The config set by SET command does not work
 Key: FLINK-16822
 URL: https://issues.apache.org/jira/browse/FLINK-16822
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Client
Affects Versions: 1.10.0
Reporter: godfrey he
 Fix For: 1.11.0


Users can add or change the properties for execution behavior through the SET 
command in the SQL client, e.g. {{SET execution.parallelism=10}}, {{SET 
table.optimizer.join-reorder-enabled=true}}. But the {{table.xx}} config can't 
change the TableEnvironment behavior, because the property set from the CLI is 
not set into the TableEnvironment's table config.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16819) Got KryoException while using UDAF in flink1.9

2020-03-26 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17068282#comment-17068282
 ] 

Jark Wu commented on FLINK-16819:
-

Hi [~neighborhood], this should be caused by FLINK-13702 and is fixed in 1.9.2. 
Did you try 1.9.2 or 1.10?

> Got KryoException while using UDAF in flink1.9
> --
>
> Key: FLINK-16819
> URL: https://issues.apache.org/jira/browse/FLINK-16819
> Project: Flink
>  Issue Type: Bug
>  Components: API / Type Serialization System, Table SQL / Planner
>Affects Versions: 1.9.1
> Environment: Flink1.9.1
> Apache hadoop 2.7.2
>Reporter: Xingxing Di
>Priority: Major
>
> Recently, we have been trying to upgrade online *sql jobs* from flink1.7 to 
> flink1.9. Most jobs work fine, but some jobs got KryoExceptions. 
> We found that UDAFs trigger this exception; btw, we are using the blink 
> planner.
> *Here is the full stack traces:*
>  2020-03-27 11:46:55
>  com.esotericsoftware.kryo.KryoException: 
> java.lang.IndexOutOfBoundsException: Index: 104, Size: 2
>  Serialization trace:
>  seed (java.util.Random)
>  gen (com.tdunning.math.stats.AVLTreeDigest)
>  at 
> com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125)
>  at 
> com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:528)
>  at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:679)
>  at 
> com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
>  at 
> com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:528)
>  at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:761)
>  at 
> org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.deserialize(KryoSerializer.java:346)
>  at 
> org.apache.flink.util.InstantiationUtil.deserializeFromByteArray(InstantiationUtil.java:536)
>  at 
> org.apache.flink.table.dataformat.BinaryGeneric.getJavaObjectFromBinaryGeneric(BinaryGeneric.java:86)
>  at 
> org.apache.flink.table.dataformat.DataFormatConverters$GenericConverter.toExternalImpl(DataFormatConverters.java:628)
>  at 
> org.apache.flink.table.dataformat.DataFormatConverters$GenericConverter.toExternalImpl(DataFormatConverters.java:633)
>  at 
> org.apache.flink.table.dataformat.DataFormatConverters$DataFormatConverter.toExternal(DataFormatConverters.java:320)
>  at 
> org.apache.flink.table.dataformat.DataFormatConverters$PojoConverter.toExternalImpl(DataFormatConverters.java:1293)
>  at 
> org.apache.flink.table.dataformat.DataFormatConverters$PojoConverter.toExternalImpl(DataFormatConverters.java:1257)
>  at 
> org.apache.flink.table.dataformat.DataFormatConverters$DataFormatConverter.toExternal(DataFormatConverters.java:302)
>  at GroupAggsHandler$71.setAccumulators(Unknown Source)
>  at 
> org.apache.flink.table.runtime.operators.aggregate.GroupAggFunction.processElement(GroupAggFunction.java:151)
>  at 
> org.apache.flink.table.runtime.operators.aggregate.GroupAggFunction.processElement(GroupAggFunction.java:43)
>  at 
> org.apache.flink.streaming.api.operators.KeyedProcessOperator.processElement(KeyedProcessOperator.java:85)
>  at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processElement(StreamOneInputProcessor.java:164)
>  at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:143)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:279)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.run(StreamTask.java:301)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:406)
>  at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705)
>  at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530)
>  at java.lang.Thread.run(Thread.java:748)
>  Caused by: java.lang.IndexOutOfBoundsException: Index: 104, Size: 2
>  at java.util.ArrayList.rangeCheck(ArrayList.java:657)
>  at java.util.ArrayList.get(ArrayList.java:433)
>  at 
> com.esotericsoftware.kryo.util.MapReferenceResolver.getReadObject(MapReferenceResolver.java:42)
>  at com.esotericsoftware.kryo.Kryo.readReferenceOrNull(Kryo.java:805)
>  at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:677)
>  at 
> com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
>  ... 26 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zhijiangW commented on a change in pull request #11507: [FLINK-16587] Add basic CheckpointBarrierHandler for unaligned checkpoint

2020-03-26 Thread GitBox
zhijiangW commented on a change in pull request #11507: [FLINK-16587] Add basic 
CheckpointBarrierHandler for unaligned checkpoint
URL: https://github.com/apache/flink/pull/11507#discussion_r399043562
 
 

 ##
 File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/io/CheckpointBarrierUnaligner.java
 ##
 @@ -0,0 +1,242 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.runtime.io;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.runtime.checkpoint.channel.ChannelStateWriter;
+import org.apache.flink.runtime.checkpoint.channel.InputChannelInfo;
+import org.apache.flink.runtime.io.network.api.CancelCheckpointMarker;
+import org.apache.flink.runtime.io.network.api.CheckpointBarrier;
+import org.apache.flink.runtime.io.network.buffer.Buffer;
+import org.apache.flink.runtime.jobgraph.tasks.AbstractInvokable;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import javax.annotation.Nullable;
+
+import java.util.Arrays;
+import java.util.concurrent.CompletableFuture;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * {@link CheckpointBarrierUnaligner} is used for triggering checkpoint while 
reading the first barrier
+ * and keeping track of the number of received barriers and consumed barriers.
+ */
+@Internal
+public class CheckpointBarrierUnaligner extends CheckpointBarrierHandler {
 
 Review comment:
   I'd slightly prefer `CheckpointBarrierUnaligner` -> 
`UnalignedCheckpointBarrierHandler` :)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10114) Support Orc for StreamingFileSink

2020-03-26 Thread Sivaprasanna Sethuraman (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-10114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17068281#comment-17068281
 ] 

Sivaprasanna Sethuraman commented on FLINK-10114:
-

[~kkl0u] [~gaoyunhaii] I'd appreciate it if you could take a look at the attached 
document and/or the PR. Thanks.

> Support Orc for StreamingFileSink
> -
>
> Key: FLINK-10114
> URL: https://issues.apache.org/jira/browse/FLINK-10114
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / FileSystem
>Reporter: zhangminglei
>Assignee: Sivaprasanna Sethuraman
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #11538: [FLINK-16813][jdbc] JDBCInputFormat doesn't correctly map Short

2020-03-26 Thread GitBox
flinkbot edited a comment on issue #11538: [FLINK-16813][jdbc]  JDBCInputFormat 
doesn't correctly map Short
URL: https://github.com/apache/flink/pull/11538#issuecomment-604708338
 
 
   
   ## CI report:
   
   * da455c908da1c388b2f8bbfb82ce378dbbfe959d Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/155726075) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6706)
 
   * 53dd72eeafb1a1ce4c70e9ba8b886f751e515c22 UNKNOWN
   * 5b06155ad2d2011098e960c66f9b10f42329d4fa UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11539: [FLINK-16800][table-common] Deal with nested types in TypeMappingUtils#checkIfCompatible

2020-03-26 Thread GitBox
flinkbot edited a comment on issue #11539: [FLINK-16800][table-common] Deal 
with nested types in TypeMappingUtils#checkIfCompatible
URL: https://github.com/apache/flink/pull/11539#issuecomment-604788215
 
 
   
   ## CI report:
   
   * bfaf6ddb417806aba9b586dee0b3197d3fb50e7b Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/155731846) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6711)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11540: [FLINK-16099] Translate "HiveCatalog" page of "Hive Integration" into…

2020-03-26 Thread GitBox
flinkbot edited a comment on issue #11540: [FLINK-16099] Translate 
"HiveCatalog" page of "Hive Integration" into…
URL: https://github.com/apache/flink/pull/11540#issuecomment-604804466
 
 
   
   ## CI report:
   
   * 31c00a13b2bb7f15b4fa594b5c9148f971769e86 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/155736840) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6715)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11542: [FLINK-16303][rest] Enable retrieval of custom JobManager log files

2020-03-26 Thread GitBox
flinkbot edited a comment on issue #11542: [FLINK-16303][rest] Enable retrieval 
of custom JobManager log files
URL: https://github.com/apache/flink/pull/11542#issuecomment-604812411
 
 
   
   ## CI report:
   
   * 6c590310abbd4e8bedb23538a98e3db87adb82be Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/155738829) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6721)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11541: [FLINK-15416][network] add task manager netty client retry mechanism

2020-03-26 Thread GitBox
flinkbot edited a comment on issue #11541: [FLINK-15416][network] add task 
manager netty client retry mechanism
URL: https://github.com/apache/flink/pull/11541#issuecomment-604812212
 
 
   
   ## CI report:
   
   * 3bc439b7c92f5e764033133845ac1ff2a2b14b91 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/155738776) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6720)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11504: [FLINK-16767][hive] Failed to read Hive table with RegexSerDe

2020-03-26 Thread GitBox
flinkbot edited a comment on issue #11504: [FLINK-16767][hive] Failed to read 
Hive table with RegexSerDe
URL: https://github.com/apache/flink/pull/11504#issuecomment-603693689
 
 
   
   ## CI report:
   
   * e670931736b229bf5477d19fe2905f458ae236a6 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/155735228) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6714)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11436: [FLINK-11404][web] add load more feature in exception page

2020-03-26 Thread GitBox
flinkbot edited a comment on issue #11436: [FLINK-11404][web] add load more 
feature in exception page
URL: https://github.com/apache/flink/pull/11436#issuecomment-600498156
 
 
   
   ## CI report:
   
   * 8d4fa0649cb023f5ca7cd088f6b2d91bbda9dbef Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/153874953) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6354)
 
   * 968f55dbffb11b37b7a1a12147c9dcf6374b015b Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/155738739) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6718)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] melotlee commented on a change in pull request #11359: [FLINK-16095] [docs-zh] Translate "Modules" page of "Table API & SQL" into Chinese

2020-03-26 Thread GitBox
melotlee commented on a change in pull request #11359: [FLINK-16095] [docs-zh] 
Translate "Modules" page of "Table API & SQL" into Chinese
URL: https://github.com/apache/flink/pull/11359#discussion_r399041119
 
 

 ##
 File path: docs/dev/table/modules.zh.md
 ##
 @@ -22,51 +22,41 @@ KIND, either express or implied.  See the License for the
 specific language governing permissions and limitations
 under the License.
 -->
+模块让用户能够对 Flink 内置对象进行扩展。例如,通过自定义函数扩展内置函数,这些自定义函数和内置函数没有区别。模块是可插拔的,Flink 
已经提供了一些预构建的模块,用户还可以实现自己的模块。
 
 Review comment:
   OK, free translation is better than literal translation.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zhijiangW commented on a change in pull request #11507: [FLINK-16587] Add basic CheckpointBarrierHandler for unaligned checkpoint

2020-03-26 Thread GitBox
zhijiangW commented on a change in pull request #11507: [FLINK-16587] Add basic 
CheckpointBarrierHandler for unaligned checkpoint
URL: https://github.com/apache/flink/pull/11507#discussion_r399038839
 
 

 ##
 File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/io/StreamTaskNetworkInput.java
 ##
 @@ -197,6 +205,29 @@ public int getInputIndex() {
return checkpointedInputGate.getAvailableFuture();
}
 
+   @Override
+   public CompletableFuture prepareSnapshot(long checkpointId) throws IOException {
+       // Note that if considering recovery in future, we should guarantee that the spilled buffers in one channel
+       // should be close together because one record might span multiple buffers.
+       for (int channelIndex = 0; channelIndex < recordDeserializers.length; channelIndex++) {
+           final InputChannel channel = checkpointedInputGate.getChannel(channelIndex);
+
+           recordDeserializers[channelIndex].getUnconsumedBuffer().ifPresent(buffer ->
 
 Review comment:
   The partial buffer in `RecordDeserializer` should be treated the same as the 
in-flight buffers in the `RemoteInputChannel` queue. When the given checkpoint 
is triggered, this partial buffer is actually also overtaken. If there are 
multiple concurrent checkpoints, this partial buffer might belong to multiple 
checkpoint states, which raises the concern of duplicated persistence, as with 
incremental checkpoints.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #11542: [FLINK-16303][rest] Enable retrieval of custom JobManager log files

2020-03-26 Thread GitBox
flinkbot commented on issue #11542: [FLINK-16303][rest] Enable retrieval of 
custom JobManager log files
URL: https://github.com/apache/flink/pull/11542#issuecomment-604812411
 
 
   
   ## CI report:
   
   * 6c590310abbd4e8bedb23538a98e3db87adb82be UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #11541: [FLINK-15416][network] add task manager netty client retry mechanism

2020-03-26 Thread GitBox
flinkbot commented on issue #11541: [FLINK-15416][network] add task manager 
netty client retry mechanism
URL: https://github.com/apache/flink/pull/11541#issuecomment-604812212
 
 
   
   ## CI report:
   
   * 3bc439b7c92f5e764033133845ac1ff2a2b14b91 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11498: [FLINK-16741][WEB] add tm log list & tm log detail page

2020-03-26 Thread GitBox
flinkbot edited a comment on issue #11498: [FLINK-16741][WEB] add tm log list & 
tm log detail page
URL: https://github.com/apache/flink/pull/11498#issuecomment-603142183
 
 
   
   ## CI report:
   
   * 14e40cd9acef9f1328644eab91f8cf63a3e73775 Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6572)
 
   * e9da7e73bf36159aafa48bd229827bdd5d52b944 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/155737879) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6716)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11436: [FLINK-11404][web] add load more feature in exception page

2020-03-26 Thread GitBox
flinkbot edited a comment on issue #11436: [FLINK-11404][web] add load more 
feature in exception page
URL: https://github.com/apache/flink/pull/11436#issuecomment-600498156
 
 
   
   ## CI report:
   
   * 8d4fa0649cb023f5ca7cd088f6b2d91bbda9dbef Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/153874953) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6354)
 
   * 968f55dbffb11b37b7a1a12147c9dcf6374b015b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11538: [FLINK-16813][jdbc] JDBCInputFormat doesn't correctly map Short

2020-03-26 Thread GitBox
flinkbot edited a comment on issue #11538: [FLINK-16813][jdbc]  JDBCInputFormat 
doesn't correctly map Short
URL: https://github.com/apache/flink/pull/11538#issuecomment-604708338
 
 
   
   ## CI report:
   
   * da455c908da1c388b2f8bbfb82ce378dbbfe959d Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/155726075) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6706)
 
   * 53dd72eeafb1a1ce4c70e9ba8b886f751e515c22 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11524: [FLINK-16803][hive] Need to make sure partition inherit table spec wh…

2020-03-26 Thread GitBox
flinkbot edited a comment on issue #11524: [FLINK-16803][hive] Need to make 
sure partition inherit table spec wh…
URL: https://github.com/apache/flink/pull/11524#issuecomment-604351593
 
 
   
   ## CI report:
   
   * 985fa46da9bba98a0ddd34788edc34a404be321f Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/155454445) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6672)
 
   * 0430a4524f511b4d070cd79666f0fb58d26eb981 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/155737886) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6717)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-16819) Got KryoException while using UDAF in flink1.9

2020-03-26 Thread Lsw_aka_laplace (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17068264#comment-17068264
 ] 

Lsw_aka_laplace commented on FLINK-16819:
-

also I can make sure that it happend when the job tries to do checkpointing

> Got KryoException while using UDAF in flink1.9
> --
>
> Key: FLINK-16819
> URL: https://issues.apache.org/jira/browse/FLINK-16819
> Project: Flink
>  Issue Type: Bug
>  Components: API / Type Serialization System, Table SQL / Planner
>Affects Versions: 1.9.1
> Environment: Flink1.9.1
> Apache hadoop 2.7.2
>Reporter: Xingxing Di
>Priority: Major
>
> Recently, we have been trying to upgrade online *sql jobs* from flink1.7 to 
> flink1.9. Most jobs work fine, but some jobs got KryoExceptions. 
> We found that UDAFs trigger this exception; btw, we are using the blink 
> planner.
> *Here is the full stack traces:*
>  2020-03-27 11:46:55
>  com.esotericsoftware.kryo.KryoException: 
> java.lang.IndexOutOfBoundsException: Index: 104, Size: 2
>  Serialization trace:
>  seed (java.util.Random)
>  gen (com.tdunning.math.stats.AVLTreeDigest)
>  at 
> com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125)
>  at 
> com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:528)
>  at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:679)
>  at 
> com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
>  at 
> com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:528)
>  at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:761)
>  at 
> org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.deserialize(KryoSerializer.java:346)
>  at 
> org.apache.flink.util.InstantiationUtil.deserializeFromByteArray(InstantiationUtil.java:536)
>  at 
> org.apache.flink.table.dataformat.BinaryGeneric.getJavaObjectFromBinaryGeneric(BinaryGeneric.java:86)
>  at 
> org.apache.flink.table.dataformat.DataFormatConverters$GenericConverter.toExternalImpl(DataFormatConverters.java:628)
>  at 
> org.apache.flink.table.dataformat.DataFormatConverters$GenericConverter.toExternalImpl(DataFormatConverters.java:633)
>  at 
> org.apache.flink.table.dataformat.DataFormatConverters$DataFormatConverter.toExternal(DataFormatConverters.java:320)
>  at 
> org.apache.flink.table.dataformat.DataFormatConverters$PojoConverter.toExternalImpl(DataFormatConverters.java:1293)
>  at 
> org.apache.flink.table.dataformat.DataFormatConverters$PojoConverter.toExternalImpl(DataFormatConverters.java:1257)
>  at 
> org.apache.flink.table.dataformat.DataFormatConverters$DataFormatConverter.toExternal(DataFormatConverters.java:302)
>  at GroupAggsHandler$71.setAccumulators(Unknown Source)
>  at 
> org.apache.flink.table.runtime.operators.aggregate.GroupAggFunction.processElement(GroupAggFunction.java:151)
>  at 
> org.apache.flink.table.runtime.operators.aggregate.GroupAggFunction.processElement(GroupAggFunction.java:43)
>  at 
> org.apache.flink.streaming.api.operators.KeyedProcessOperator.processElement(KeyedProcessOperator.java:85)
>  at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processElement(StreamOneInputProcessor.java:164)
>  at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:143)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:279)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.run(StreamTask.java:301)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:406)
>  at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705)
>  at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530)
>  at java.lang.Thread.run(Thread.java:748)
>  Caused by: java.lang.IndexOutOfBoundsException: Index: 104, Size: 2
>  at java.util.ArrayList.rangeCheck(ArrayList.java:657)
>  at java.util.ArrayList.get(ArrayList.java:433)
>  at 
> com.esotericsoftware.kryo.util.MapReferenceResolver.getReadObject(MapReferenceResolver.java:42)
>  at com.esotericsoftware.kryo.Kryo.readReferenceOrNull(Kryo.java:805)
>  at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:677)
>  at 
> com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
>  ... 26 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-16819) Got KryoException while using UDAF in flink1.9

2020-03-26 Thread Lsw_aka_laplace (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17068264#comment-17068264
 ] 

Lsw_aka_laplace edited comment on FLINK-16819 at 3/27/20, 4:42 AM:
---

also I can make sure that it happens when the job tries to do checkpointing


was (Author: neighborhood):
also I can make sure that it happend when the job tries to do checkpointing

> Got KryoException while using UDAF in flink1.9
> --
>
> Key: FLINK-16819
> URL: https://issues.apache.org/jira/browse/FLINK-16819
> Project: Flink
>  Issue Type: Bug
>  Components: API / Type Serialization System, Table SQL / Planner
>Affects Versions: 1.9.1
> Environment: Flink1.9.1
> Apache hadoop 2.7.2
>Reporter: Xingxing Di
>Priority: Major
>
> Recently, we have been trying to upgrade online *sql jobs* from flink1.7 to 
> flink1.9. Most jobs work fine, but some jobs got KryoExceptions. 
> We found that UDAFs trigger this exception; btw, we are using the blink 
> planner.
> *Here is the full stack traces:*
>  2020-03-27 11:46:55
>  com.esotericsoftware.kryo.KryoException: 
> java.lang.IndexOutOfBoundsException: Index: 104, Size: 2
>  Serialization trace:
>  seed (java.util.Random)
>  gen (com.tdunning.math.stats.AVLTreeDigest)
>  at 
> com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125)
>  at 
> com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:528)
>  at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:679)
>  at 
> com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
>  at 
> com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:528)
>  at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:761)
>  at 
> org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.deserialize(KryoSerializer.java:346)
>  at 
> org.apache.flink.util.InstantiationUtil.deserializeFromByteArray(InstantiationUtil.java:536)
>  at 
> org.apache.flink.table.dataformat.BinaryGeneric.getJavaObjectFromBinaryGeneric(BinaryGeneric.java:86)
>  at 
> org.apache.flink.table.dataformat.DataFormatConverters$GenericConverter.toExternalImpl(DataFormatConverters.java:628)
>  at 
> org.apache.flink.table.dataformat.DataFormatConverters$GenericConverter.toExternalImpl(DataFormatConverters.java:633)
>  at 
> org.apache.flink.table.dataformat.DataFormatConverters$DataFormatConverter.toExternal(DataFormatConverters.java:320)
>  at 
> org.apache.flink.table.dataformat.DataFormatConverters$PojoConverter.toExternalImpl(DataFormatConverters.java:1293)
>  at 
> org.apache.flink.table.dataformat.DataFormatConverters$PojoConverter.toExternalImpl(DataFormatConverters.java:1257)
>  at 
> org.apache.flink.table.dataformat.DataFormatConverters$DataFormatConverter.toExternal(DataFormatConverters.java:302)
>  at GroupAggsHandler$71.setAccumulators(Unknown Source)
>  at 
> org.apache.flink.table.runtime.operators.aggregate.GroupAggFunction.processElement(GroupAggFunction.java:151)
>  at 
> org.apache.flink.table.runtime.operators.aggregate.GroupAggFunction.processElement(GroupAggFunction.java:43)
>  at 
> org.apache.flink.streaming.api.operators.KeyedProcessOperator.processElement(KeyedProcessOperator.java:85)
>  at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processElement(StreamOneInputProcessor.java:164)
>  at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:143)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:279)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.run(StreamTask.java:301)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:406)
>  at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705)
>  at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530)
>  at java.lang.Thread.run(Thread.java:748)
>  Caused by: java.lang.IndexOutOfBoundsException: Index: 104, Size: 2
>  at java.util.ArrayList.rangeCheck(ArrayList.java:657)
>  at java.util.ArrayList.get(ArrayList.java:433)
>  at 
> com.esotericsoftware.kryo.util.MapReferenceResolver.getReadObject(MapReferenceResolver.java:42)
>  at com.esotericsoftware.kryo.Kryo.readReferenceOrNull(Kryo.java:805)
>  at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:677)
>  at 
> com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
>  ... 26 more



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16819) Got KryoException while using UDAF in flink1.9

2020-03-26 Thread Lsw_aka_laplace (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17068261#comment-17068261
 ] 

Lsw_aka_laplace commented on FLINK-16819:
-

[~jark] hi Jark, this is the bug (or not) I mentioned before. Actually, we 
found several UDAFs hitting a Kryo serialization exception every time we try 
to migrate from Flink 1.7.2 to Flink 1.9.1. We also tried to register our own 
specialized serializer for these UDAFs, but unfortunately it didn't work. We 
look forward to a solution or tips for fixing this kind of problem, because 
our users cannot run their jobs on Flink 1.9 due to it. Any opinion is 
welcome.

Thanks
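
For anyone following along, below is a minimal sketch of how such a Kryo 
serializer registration is usually wired up (assuming the accumulator holds a 
com.tdunning.math.stats.AVLTreeDigest as in the stack trace; TDigestSerializer 
is an illustrative name, and whether such a registration takes effect for 
blink-planner accumulators is exactly what is in question in this thread):

{code:java}
import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.Serializer;
import com.esotericsoftware.kryo.io.Input;
import com.esotericsoftware.kryo.io.Output;
import com.tdunning.math.stats.AVLTreeDigest;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class TDigestSerializer extends Serializer<AVLTreeDigest> {

    @Override
    public void write(Kryo kryo, Output output, AVLTreeDigest digest) {
        // Use the t-digest library's own binary encoding instead of
        // Kryo's field-by-field serialization.
        java.nio.ByteBuffer buf = java.nio.ByteBuffer.allocate(digest.byteSize());
        digest.asBytes(buf);
        output.writeInt(buf.position());
        output.writeBytes(buf.array(), 0, buf.position());
    }

    @Override
    public AVLTreeDigest read(Kryo kryo, Input input, Class<AVLTreeDigest> type) {
        byte[] bytes = input.readBytes(input.readInt());
        return AVLTreeDigest.fromBytes(java.nio.ByteBuffer.wrap(bytes));
    }

    public static void main(String[] args) {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        // Tell Flink to use the custom serializer whenever Kryo meets this type.
        env.getConfig().registerTypeWithKryoSerializer(
                AVLTreeDigest.class, TDigestSerializer.class);
    }
}
{code}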

> Got KryoException while using UDAF in flink1.9
> --
>
> Key: FLINK-16819
> URL: https://issues.apache.org/jira/browse/FLINK-16819
> Project: Flink
>  Issue Type: Bug
>  Components: API / Type Serialization System, Table SQL / Planner
>Affects Versions: 1.9.1
> Environment: Flink1.9.1
> Apache hadoop 2.7.2
>Reporter: Xingxing Di
>Priority: Major
>
> Recently, we have been trying to upgrade online *sql jobs* from flink1.7 to 
> flink1.9. Most jobs work fine, but some jobs get KryoExceptions. 
> We found that UDAFs trigger this exception; btw, we are using the blink 
> planner.
> *Here is the full stack trace:*
>  2020-03-27 11:46:55
>  com.esotericsoftware.kryo.KryoException: 
> java.lang.IndexOutOfBoundsException: Index: 104, Size: 2
>  Serialization trace:
>  seed (java.util.Random)
>  gen (com.tdunning.math.stats.AVLTreeDigest)
>  at 
> com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125)
>  at 
> com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:528)
>  at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:679)
>  at 
> com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
>  at 
> com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:528)
>  at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:761)
>  at 
> org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.deserialize(KryoSerializer.java:346)
>  at 
> org.apache.flink.util.InstantiationUtil.deserializeFromByteArray(InstantiationUtil.java:536)
>  at 
> org.apache.flink.table.dataformat.BinaryGeneric.getJavaObjectFromBinaryGeneric(BinaryGeneric.java:86)
>  at 
> org.apache.flink.table.dataformat.DataFormatConverters$GenericConverter.toExternalImpl(DataFormatConverters.java:628)
>  at 
> org.apache.flink.table.dataformat.DataFormatConverters$GenericConverter.toExternalImpl(DataFormatConverters.java:633)
>  at 
> org.apache.flink.table.dataformat.DataFormatConverters$DataFormatConverter.toExternal(DataFormatConverters.java:320)
>  at 
> org.apache.flink.table.dataformat.DataFormatConverters$PojoConverter.toExternalImpl(DataFormatConverters.java:1293)
>  at 
> org.apache.flink.table.dataformat.DataFormatConverters$PojoConverter.toExternalImpl(DataFormatConverters.java:1257)
>  at 
> org.apache.flink.table.dataformat.DataFormatConverters$DataFormatConverter.toExternal(DataFormatConverters.java:302)
>  at GroupAggsHandler$71.setAccumulators(Unknown Source)
>  at 
> org.apache.flink.table.runtime.operators.aggregate.GroupAggFunction.processElement(GroupAggFunction.java:151)
>  at 
> org.apache.flink.table.runtime.operators.aggregate.GroupAggFunction.processElement(GroupAggFunction.java:43)
>  at 
> org.apache.flink.streaming.api.operators.KeyedProcessOperator.processElement(KeyedProcessOperator.java:85)
>  at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processElement(StreamOneInputProcessor.java:164)
>  at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:143)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:279)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.run(StreamTask.java:301)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:406)
>  at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705)
>  at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530)
>  at java.lang.Thread.run(Thread.java:748)
>  Caused by: java.lang.IndexOutOfBoundsException: Index: 104, Size: 2
>  at java.util.ArrayList.rangeCheck(ArrayList.java:657)
>  at java.util.ArrayList.get(ArrayList.java:433)
>  at 
> com.esotericsoftware.kryo.util.MapReferenceResolver.getReadObject(MapReferenceResolver.java:42)
>  at com.esotericsoftware.kryo.Kryo.readReferenceOrNull(Kryo.java:805)
>  at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:677)
>  at 
> com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
>  ... 26 more

[jira] [Updated] (FLINK-16005) Support yarn and hadoop config override

2020-03-26 Thread Zhenqiu Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhenqiu Huang updated FLINK-16005:
--
Summary: Support yarn and hadoop config override   (was: Propagate 
yarn.application.classpath from client to TaskManager Classpath)

> Support yarn and hadoop config override 
> 
>
> Key: FLINK-16005
> URL: https://issues.apache.org/jira/browse/FLINK-16005
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Reporter: Zhenqiu Huang
>Priority: Major
>
> When Flink users want to override the Hadoop YARN container classpath, they 
> should only need to specify yarn.application.classpath in yarn-site.xml on 
> the client side. But currently, the classpath setting is only applied to the 
> Flink application master; the classpath of the TMs is still determined by the 
> setting on the YARN host.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] yjshen commented on issue #10455: [FLINK-15089][connectors] Pulsar catalog

2020-03-26 Thread GitBox
yjshen commented on issue #10455: [FLINK-15089][connectors] Pulsar catalog
URL: https://github.com/apache/flink/pull/10455#issuecomment-604809588
 
 
   @bowenli86 Hi Bowen, I've updated the PR with tests. Could you please help 
review it? Thanks!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-16821) Run Kubernetes test failed with invalid named "minikube"

2020-03-26 Thread Zhijiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17068258#comment-17068258
 ] 

Zhijiang commented on FLINK-16821:
--

Another instance 
[https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6709=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5]

> Run Kubernetes test failed with invalid named "minikube"
> 
>
> Key: FLINK-16821
> URL: https://issues.apache.org/jira/browse/FLINK-16821
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes, Tests
>Reporter: Zhijiang
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.11.0
>
>
> This is the test run 
> [https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6702=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5]
> Log output
> {code:java}
> 2020-03-27T00:07:38.9666021Z Running 'Run Kubernetes test'
> 2020-03-27T00:07:38.956Z 
> ==
> 2020-03-27T00:07:38.9677101Z TEST_DATA_DIR: 
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-38967103614
> 2020-03-27T00:07:41.7529865Z Flink dist directory: 
> /home/vsts/work/1/s/flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT
> 2020-03-27T00:07:41.7721475Z Flink dist directory: 
> /home/vsts/work/1/s/flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT
> 2020-03-27T00:07:41.8208394Z Docker version 19.03.8, build afacb8b7f0
> 2020-03-27T00:07:42.4793914Z docker-compose version 1.25.4, build 8d51620a
> 2020-03-27T00:07:42.5359301Z Installing minikube ...
> 2020-03-27T00:07:42.5494076Z   % Total% Received % Xferd  Average Speed   
> TimeTime Time  Current
> 2020-03-27T00:07:42.5494729Z  Dload  Upload   
> Total   SpentLeft  Speed
> 2020-03-27T00:07:42.5498136Z 
> 2020-03-27T00:07:42.6214887Z   0 00 00 0  0  0 
> --:--:-- --:--:-- --:--:-- 0
> 2020-03-27T00:07:43.3467750Z   0 00 00 0  0  0 
> --:--:-- --:--:-- --:--:-- 0
> 2020-03-27T00:07:43.3469636Z 100 52.0M  100 52.0M0 0  65.2M  0 
> --:--:-- --:--:-- --:--:-- 65.2M
> 2020-03-27T00:07:43.4262625Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:43.4264438Z   - To fix this, run: minikube start
> 2020-03-27T00:07:43.4282404Z Starting minikube ...
> 2020-03-27T00:07:43.7749694Z * minikube v1.9.0 on Ubuntu 16.04
> 2020-03-27T00:07:43.7761742Z * Using the none driver based on user 
> configuration
> 2020-03-27T00:07:43.7762229Z X The none driver requires conntrack to be 
> installed for kubernetes version 1.18.0
> 2020-03-27T00:07:43.8202161Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:43.8203353Z   - To fix this, run: minikube start
> 2020-03-27T00:07:43.8568899Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:43.8570685Z   - To fix this, run: minikube start
> 2020-03-27T00:07:43.8583793Z Command: start_kubernetes_if_not_running failed. 
> Retrying...
> 2020-03-27T00:07:48.9017252Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:48.9019347Z   - To fix this, run: minikube start
> 2020-03-27T00:07:48.9031515Z Starting minikube ...
> 2020-03-27T00:07:49.0612601Z * minikube v1.9.0 on Ubuntu 16.04
> 2020-03-27T00:07:49.0616688Z * Using the none driver based on user 
> configuration
> 2020-03-27T00:07:49.0620173Z X The none driver requires conntrack to be 
> installed for kubernetes version 1.18.0
> 2020-03-27T00:07:49.1040676Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:49.1042353Z   - To fix this, run: minikube start
> 2020-03-27T00:07:49.1453522Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:49.1454594Z   - To fix this, run: minikube start
> 2020-03-27T00:07:49.1468436Z Command: start_kubernetes_if_not_running failed. 
> Retrying...
> 2020-03-27T00:07:54.1907713Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:54.1909876Z   - To fix this, run: minikube start
> 2020-03-27T00:07:54.1921479Z Starting minikube ...
> 2020-03-27T00:07:54.3388738Z * minikube v1.9.0 on Ubuntu 16.04
> 2020-03-27T00:07:54.3395499Z * Using the none driver based on user 
> configuration
> 2020-03-27T00:07:54.3396443Z X The none driver requires conntrack to be 
> installed for kubernetes version 1.18.0
> 2020-03-27T00:07:54.3824399Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:54.3837652Z   - To fix this, run: minikube start
> 2020-03-27T00:07:54.4203902Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:54.4204895Z   - To fix this, run: minikube start
> 2020-03-27T00:07:54.4217866Z Command: start_kubernetes_if_not_running failed. 
> 

[jira] [Updated] (FLINK-16821) Run Kubernetes test failed with invalid named "minikube"

2020-03-26 Thread Zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijiang updated FLINK-16821:
-
Priority: Blocker  (was: Critical)

> Run Kubernetes test failed with invalid named "minikube"
> 
>
> Key: FLINK-16821
> URL: https://issues.apache.org/jira/browse/FLINK-16821
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes, Tests
>Reporter: Zhijiang
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.11.0
>
>
> This is the test run 
> [https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6702=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5]
> Log output
> {code:java}
> 2020-03-27T00:07:38.9666021Z Running 'Run Kubernetes test'
> 2020-03-27T00:07:38.956Z 
> ==
> 2020-03-27T00:07:38.9677101Z TEST_DATA_DIR: 
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-38967103614
> 2020-03-27T00:07:41.7529865Z Flink dist directory: 
> /home/vsts/work/1/s/flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT
> 2020-03-27T00:07:41.7721475Z Flink dist directory: 
> /home/vsts/work/1/s/flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT
> 2020-03-27T00:07:41.8208394Z Docker version 19.03.8, build afacb8b7f0
> 2020-03-27T00:07:42.4793914Z docker-compose version 1.25.4, build 8d51620a
> 2020-03-27T00:07:42.5359301Z Installing minikube ...
> 2020-03-27T00:07:42.5494076Z   % Total% Received % Xferd  Average Speed   
> TimeTime Time  Current
> 2020-03-27T00:07:42.5494729Z  Dload  Upload   
> Total   SpentLeft  Speed
> 2020-03-27T00:07:42.5498136Z 
> 2020-03-27T00:07:42.6214887Z   0 00 00 0  0  0 
> --:--:-- --:--:-- --:--:-- 0
> 2020-03-27T00:07:43.3467750Z   0 00 00 0  0  0 
> --:--:-- --:--:-- --:--:-- 0
> 2020-03-27T00:07:43.3469636Z 100 52.0M  100 52.0M0 0  65.2M  0 
> --:--:-- --:--:-- --:--:-- 65.2M
> 2020-03-27T00:07:43.4262625Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:43.4264438Z   - To fix this, run: minikube start
> 2020-03-27T00:07:43.4282404Z Starting minikube ...
> 2020-03-27T00:07:43.7749694Z * minikube v1.9.0 on Ubuntu 16.04
> 2020-03-27T00:07:43.7761742Z * Using the none driver based on user 
> configuration
> 2020-03-27T00:07:43.7762229Z X The none driver requires conntrack to be 
> installed for kubernetes version 1.18.0
> 2020-03-27T00:07:43.8202161Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:43.8203353Z   - To fix this, run: minikube start
> 2020-03-27T00:07:43.8568899Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:43.8570685Z   - To fix this, run: minikube start
> 2020-03-27T00:07:43.8583793Z Command: start_kubernetes_if_not_running failed. 
> Retrying...
> 2020-03-27T00:07:48.9017252Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:48.9019347Z   - To fix this, run: minikube start
> 2020-03-27T00:07:48.9031515Z Starting minikube ...
> 2020-03-27T00:07:49.0612601Z * minikube v1.9.0 on Ubuntu 16.04
> 2020-03-27T00:07:49.0616688Z * Using the none driver based on user 
> configuration
> 2020-03-27T00:07:49.0620173Z X The none driver requires conntrack to be 
> installed for kubernetes version 1.18.0
> 2020-03-27T00:07:49.1040676Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:49.1042353Z   - To fix this, run: minikube start
> 2020-03-27T00:07:49.1453522Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:49.1454594Z   - To fix this, run: minikube start
> 2020-03-27T00:07:49.1468436Z Command: start_kubernetes_if_not_running failed. 
> Retrying...
> 2020-03-27T00:07:54.1907713Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:54.1909876Z   - To fix this, run: minikube start
> 2020-03-27T00:07:54.1921479Z Starting minikube ...
> 2020-03-27T00:07:54.3388738Z * minikube v1.9.0 on Ubuntu 16.04
> 2020-03-27T00:07:54.3395499Z * Using the none driver based on user 
> configuration
> 2020-03-27T00:07:54.3396443Z X The none driver requires conntrack to be 
> installed for kubernetes version 1.18.0
> 2020-03-27T00:07:54.3824399Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:54.3837652Z   - To fix this, run: minikube start
> 2020-03-27T00:07:54.4203902Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:54.4204895Z   - To fix this, run: minikube start
> 2020-03-27T00:07:54.4217866Z Command: start_kubernetes_if_not_running failed. 
> Retrying...
> 2020-03-27T00:07:59.4235917Z Command: start_kubernetes_if_not_running failed 
> 3 times.
> 2020-03-27T00:07:59.4236459Z Could not start minikube. Aborting...
> 

[GitHub] [flink] shangwen commented on issue #11496: [FLINK-16743][table] Introduce datagen, print, blackhole connectors

2020-03-26 Thread GitBox
shangwen commented on issue #11496: [FLINK-16743][table] Introduce datagen, 
print, blackhole connectors
URL: https://github.com/apache/flink/pull/11496#issuecomment-604809306
 
 
   hi @JingsongLi , Could you please review this PR for me?  


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] bowenli86 commented on a change in pull request #11538: [FLINK-16813][jdbc] JDBCInputFormat doesn't correctly map Short

2020-03-26 Thread GitBox
bowenli86 commented on a change in pull request #11538: [FLINK-16813][jdbc]  
JDBCInputFormat doesn't correctly map Short
URL: https://github.com/apache/flink/pull/11538#discussion_r399029525
 
 

 ##
 File path: 
flink-connectors/flink-jdbc/src/main/java/org/apache/flink/api/java/io/jdbc/JDBCInputFormat.java
 ##
 @@ -110,8 +116,9 @@
private String queryTemplate;
private int resultSetType;
private int resultSetConcurrency;
-   private RowTypeInfo rowTypeInfo;
+   private RowType rowType;
 
 Review comment:
   I haven't seen arguments against using it in connectors. E.g., 
HiveTableInputFormat uses DataType. And it's much nicer for a connector to 
just deal with the user-facing type system. What are your concerns?
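   
   As a small illustration of the user-facing type system mentioned above, a 
connector could describe its produced row roughly like this (a sketch only; 
the field names are made up):
   
   ```java
   import org.apache.flink.table.api.DataTypes;
   import org.apache.flink.table.types.DataType;

   public class ProducedTypeSketch {
       public static void main(String[] args) {
           // Hypothetical produced type: a SMALLINT column and an ARRAY column.
           DataType produced = DataTypes.ROW(
                   DataTypes.FIELD("id", DataTypes.SMALLINT()),
                   DataTypes.FIELD("tags", DataTypes.ARRAY(DataTypes.STRING())));
           // Prints the logical row type, e.g. ROW<`id` SMALLINT, ...>.
           System.out.println(produced);
       }
   }
   ```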


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] shangwen removed a comment on issue #11496: [FLINK-16743][table] Introduce datagen, print, blackhole connectors

2020-03-26 Thread GitBox
shangwen removed a comment on issue #11496: [FLINK-16743][table] Introduce 
datagen, print, blackhole connectors
URL: https://github.com/apache/flink/pull/11496#issuecomment-604809306
 
 
   hi @JingsongLi , Could you please review this PR for me?  


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] bowenli86 commented on a change in pull request #11538: [FLINK-16813][jdbc] JDBCInputFormat doesn't correctly map Short

2020-03-26 Thread GitBox
bowenli86 commented on a change in pull request #11538: [FLINK-16813][jdbc]  
JDBCInputFormat doesn't correctly map Short
URL: https://github.com/apache/flink/pull/11538#discussion_r399029128
 
 

 ##
 File path: 
flink-connectors/flink-jdbc/src/main/java/org/apache/flink/api/java/io/jdbc/dialect/JDBCDialects.java
 ##
 @@ -403,5 +413,38 @@ public int minTimestampPrecision() {
);
 
}
+
+   @Override
+   public void setRow(ResultSet resultSet, RowType rowType, Row 
reuse) throws SQLException {
+   for (int pos = 0; pos < rowType.getFieldCount(); pos++) 
{
+   LogicalType type = rowType.getTypeAt(pos);
+   Object v = resultSet.getObject(pos + 1);
+
+   if (type instanceof SmallIntType) {
+   reuse.setField(pos, ((Integer) 
v).shortValue());
+   } else if (type instanceof ArrayType) {
 
 Review comment:
   I don't know yet. My goal is to make Postgres data types fully work, so I 
didn't touch the other DBs. It should be easy for other DBs to follow if they 
support arrays.
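   
   For reference, a minimal sketch of what the ArrayType branch could look 
like (assuming the driver, e.g. Postgres, hands back a java.sql.Array whose 
getArray() result can be treated as Object[]; this is not the actual PR code):
   
   ```java
   import java.sql.ResultSet;
   import java.sql.SQLException;
   import org.apache.flink.types.Row;

   public final class ArrayFieldReader {
       static void setArrayField(ResultSet resultSet, int pos, Row reuse)
               throws SQLException {
           // JDBC columns are 1-indexed, Flink Row fields are 0-indexed.
           java.sql.Array sqlArray = resultSet.getArray(pos + 1);
           // getArray() materializes the JDBC array as a Java array, which
           // the Row field can hold directly.
           reuse.setField(pos,
                   sqlArray == null ? null : (Object[]) sqlArray.getArray());
       }
   }
   ```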


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11524: [FLINK-16803][hive] Need to make sure partition inherit table spec wh…

2020-03-26 Thread GitBox
flinkbot edited a comment on issue #11524: [FLINK-16803][hive] Need to make 
sure partition inherit table spec wh…
URL: https://github.com/apache/flink/pull/11524#issuecomment-604351593
 
 
   
   ## CI report:
   
   * 985fa46da9bba98a0ddd34788edc34a404be321f Travis: 
[FAILURE](https://travis-ci.com/github/flink-ci/flink/builds/155454445) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6672)
 
   * 0430a4524f511b4d070cd79666f0fb58d26eb981 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11498: [FLINK-16741][WEB] add tm log list & tm log detail page

2020-03-26 Thread GitBox
flinkbot edited a comment on issue #11498: [FLINK-16741][WEB] add tm log list & 
tm log detail page
URL: https://github.com/apache/flink/pull/11498#issuecomment-603142183
 
 
   
   ## CI report:
   
   * 14e40cd9acef9f1328644eab91f8cf63a3e73775 Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6572)
 
   * e9da7e73bf36159aafa48bd229827bdd5d52b944 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] bowenli86 commented on a change in pull request #11538: [FLINK-16813][jdbc] JDBCInputFormat doesn't correctly map Short

2020-03-26 Thread GitBox
bowenli86 commented on a change in pull request #11538: [FLINK-16813][jdbc]  
JDBCInputFormat doesn't correctly map Short
URL: https://github.com/apache/flink/pull/11538#discussion_r399028594
 
 

 ##
 File path: 
flink-connectors/flink-jdbc/src/main/java/org/apache/flink/api/java/io/jdbc/JDBCInputFormat.java
 ##
 @@ -127,7 +134,7 @@ public JDBCInputFormat() {
 
@Override
public RowTypeInfo getProducedType() {
-   return rowTypeInfo;
+   return (RowTypeInfo) 
fromDataTypeToLegacyInfo(fromLogicalToDataType(rowType));
 
 Review comment:
   I changed it because it's easier to check data types when converting a JDBC 
row to a Flink row.
   
   Would you recommend reverting it, given that I've changed 
JdbcTypeUtil.normalizeTableSchema() in the latest commit I pushed?
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11540: [FLINK-16099] Translate "HiveCatalog" page of "Hive Integration" into…

2020-03-26 Thread GitBox
flinkbot edited a comment on issue #11540: [FLINK-16099] Translate 
"HiveCatalog" page of "Hive Integration" into…
URL: https://github.com/apache/flink/pull/11540#issuecomment-604804466
 
 
   
   ## CI report:
   
   * 31c00a13b2bb7f15b4fa594b5c9148f971769e86 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/155736840) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6715)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-16821) Run Kubernetes test failed with invalid named "minikube"

2020-03-26 Thread Zhijiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17068257#comment-17068257
 ] 

Zhijiang commented on FLINK-16821:
--

Another instance 
[https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6708=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5]

> Run Kubernetes test failed with invalid named "minikube"
> 
>
> Key: FLINK-16821
> URL: https://issues.apache.org/jira/browse/FLINK-16821
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes, Tests
>Reporter: Zhijiang
>Priority: Major
>  Labels: test-stability
>
> This is the test run 
> [https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6702=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5]
> Log output
> {code:java}
> 2020-03-27T00:07:38.9666021Z Running 'Run Kubernetes test'
> 2020-03-27T00:07:38.956Z 
> ==
> 2020-03-27T00:07:38.9677101Z TEST_DATA_DIR: 
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-38967103614
> 2020-03-27T00:07:41.7529865Z Flink dist directory: 
> /home/vsts/work/1/s/flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT
> 2020-03-27T00:07:41.7721475Z Flink dist directory: 
> /home/vsts/work/1/s/flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT
> 2020-03-27T00:07:41.8208394Z Docker version 19.03.8, build afacb8b7f0
> 2020-03-27T00:07:42.4793914Z docker-compose version 1.25.4, build 8d51620a
> 2020-03-27T00:07:42.5359301Z Installing minikube ...
> 2020-03-27T00:07:42.5494076Z   % Total% Received % Xferd  Average Speed   
> TimeTime Time  Current
> 2020-03-27T00:07:42.5494729Z  Dload  Upload   
> Total   SpentLeft  Speed
> 2020-03-27T00:07:42.5498136Z 
> 2020-03-27T00:07:42.6214887Z   0 00 00 0  0  0 
> --:--:-- --:--:-- --:--:-- 0
> 2020-03-27T00:07:43.3467750Z   0 00 00 0  0  0 
> --:--:-- --:--:-- --:--:-- 0
> 2020-03-27T00:07:43.3469636Z 100 52.0M  100 52.0M0 0  65.2M  0 
> --:--:-- --:--:-- --:--:-- 65.2M
> 2020-03-27T00:07:43.4262625Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:43.4264438Z   - To fix this, run: minikube start
> 2020-03-27T00:07:43.4282404Z Starting minikube ...
> 2020-03-27T00:07:43.7749694Z * minikube v1.9.0 on Ubuntu 16.04
> 2020-03-27T00:07:43.7761742Z * Using the none driver based on user 
> configuration
> 2020-03-27T00:07:43.7762229Z X The none driver requires conntrack to be 
> installed for kubernetes version 1.18.0
> 2020-03-27T00:07:43.8202161Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:43.8203353Z   - To fix this, run: minikube start
> 2020-03-27T00:07:43.8568899Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:43.8570685Z   - To fix this, run: minikube start
> 2020-03-27T00:07:43.8583793Z Command: start_kubernetes_if_not_running failed. 
> Retrying...
> 2020-03-27T00:07:48.9017252Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:48.9019347Z   - To fix this, run: minikube start
> 2020-03-27T00:07:48.9031515Z Starting minikube ...
> 2020-03-27T00:07:49.0612601Z * minikube v1.9.0 on Ubuntu 16.04
> 2020-03-27T00:07:49.0616688Z * Using the none driver based on user 
> configuration
> 2020-03-27T00:07:49.0620173Z X The none driver requires conntrack to be 
> installed for kubernetes version 1.18.0
> 2020-03-27T00:07:49.1040676Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:49.1042353Z   - To fix this, run: minikube start
> 2020-03-27T00:07:49.1453522Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:49.1454594Z   - To fix this, run: minikube start
> 2020-03-27T00:07:49.1468436Z Command: start_kubernetes_if_not_running failed. 
> Retrying...
> 2020-03-27T00:07:54.1907713Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:54.1909876Z   - To fix this, run: minikube start
> 2020-03-27T00:07:54.1921479Z Starting minikube ...
> 2020-03-27T00:07:54.3388738Z * minikube v1.9.0 on Ubuntu 16.04
> 2020-03-27T00:07:54.3395499Z * Using the none driver based on user 
> configuration
> 2020-03-27T00:07:54.3396443Z X The none driver requires conntrack to be 
> installed for kubernetes version 1.18.0
> 2020-03-27T00:07:54.3824399Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:54.3837652Z   - To fix this, run: minikube start
> 2020-03-27T00:07:54.4203902Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:54.4204895Z   - To fix this, run: minikube start
> 2020-03-27T00:07:54.4217866Z Command: start_kubernetes_if_not_running failed. 
> Retrying...
> 2020-03-27T00:07:59.4235917Z 

[GitHub] [flink] bowenli86 commented on a change in pull request #11538: [FLINK-16813][jdbc] JDBCInputFormat doesn't correctly map Short

2020-03-26 Thread GitBox
bowenli86 commented on a change in pull request #11538: [FLINK-16813][jdbc]  
JDBCInputFormat doesn't correctly map Short
URL: https://github.com/apache/flink/pull/11538#discussion_r399028768
 
 

 ##
 File path: 
flink-connectors/flink-jdbc/src/main/java/org/apache/flink/api/java/io/jdbc/dialect/JDBCDialect.java
 ##
 @@ -139,4 +145,23 @@ default String getSelectFromStatement(String tableName, 
String[] selectFields, S
return "SELECT " + selectExpressions + " FROM " +
quoteIdentifier(tableName) + 
(conditionFields.length > 0 ? " WHERE " + fieldExpressions : "");
}
+
+   /**
+* Set {@link Row} with data retrieved from {@link ResultSet} according 
to {@link RowType}.
+*
+* @param resultSet ResultSet from JDBC
+* @param rowType RowType of the row
+* @param reuse The row to set
+*/
+   default void setRow(ResultSet resultSet, RowType rowType, Row reuse) 
throws SQLException {
 
 Review comment:
   I've renamed that JIRA; I think a new API in JDBCDialect would be enough 
for the job.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-16821) Run Kubernetes test failed with invalid named "minikube"

2020-03-26 Thread Zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijiang updated FLINK-16821:
-
Fix Version/s: 1.11.0

> Run Kubernetes test failed with invalid named "minikube"
> 
>
> Key: FLINK-16821
> URL: https://issues.apache.org/jira/browse/FLINK-16821
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes, Tests
>Reporter: Zhijiang
>Priority: Major
>  Labels: test-stability
> Fix For: 1.11.0
>
>
> This is the test run 
> [https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6702=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5]
> Log output
> {code:java}
> 2020-03-27T00:07:38.9666021Z Running 'Run Kubernetes test'
> 2020-03-27T00:07:38.956Z 
> ==
> 2020-03-27T00:07:38.9677101Z TEST_DATA_DIR: 
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-38967103614
> 2020-03-27T00:07:41.7529865Z Flink dist directory: 
> /home/vsts/work/1/s/flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT
> 2020-03-27T00:07:41.7721475Z Flink dist directory: 
> /home/vsts/work/1/s/flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT
> 2020-03-27T00:07:41.8208394Z Docker version 19.03.8, build afacb8b7f0
> 2020-03-27T00:07:42.4793914Z docker-compose version 1.25.4, build 8d51620a
> 2020-03-27T00:07:42.5359301Z Installing minikube ...
> 2020-03-27T00:07:42.5494076Z   % Total% Received % Xferd  Average Speed   
> TimeTime Time  Current
> 2020-03-27T00:07:42.5494729Z  Dload  Upload   
> Total   SpentLeft  Speed
> 2020-03-27T00:07:42.5498136Z 
> 2020-03-27T00:07:42.6214887Z   0 00 00 0  0  0 
> --:--:-- --:--:-- --:--:-- 0
> 2020-03-27T00:07:43.3467750Z   0 00 00 0  0  0 
> --:--:-- --:--:-- --:--:-- 0
> 2020-03-27T00:07:43.3469636Z 100 52.0M  100 52.0M0 0  65.2M  0 
> --:--:-- --:--:-- --:--:-- 65.2M
> 2020-03-27T00:07:43.4262625Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:43.4264438Z   - To fix this, run: minikube start
> 2020-03-27T00:07:43.4282404Z Starting minikube ...
> 2020-03-27T00:07:43.7749694Z * minikube v1.9.0 on Ubuntu 16.04
> 2020-03-27T00:07:43.7761742Z * Using the none driver based on user 
> configuration
> 2020-03-27T00:07:43.7762229Z X The none driver requires conntrack to be 
> installed for kubernetes version 1.18.0
> 2020-03-27T00:07:43.8202161Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:43.8203353Z   - To fix this, run: minikube start
> 2020-03-27T00:07:43.8568899Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:43.8570685Z   - To fix this, run: minikube start
> 2020-03-27T00:07:43.8583793Z Command: start_kubernetes_if_not_running failed. 
> Retrying...
> 2020-03-27T00:07:48.9017252Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:48.9019347Z   - To fix this, run: minikube start
> 2020-03-27T00:07:48.9031515Z Starting minikube ...
> 2020-03-27T00:07:49.0612601Z * minikube v1.9.0 on Ubuntu 16.04
> 2020-03-27T00:07:49.0616688Z * Using the none driver based on user 
> configuration
> 2020-03-27T00:07:49.0620173Z X The none driver requires conntrack to be 
> installed for kubernetes version 1.18.0
> 2020-03-27T00:07:49.1040676Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:49.1042353Z   - To fix this, run: minikube start
> 2020-03-27T00:07:49.1453522Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:49.1454594Z   - To fix this, run: minikube start
> 2020-03-27T00:07:49.1468436Z Command: start_kubernetes_if_not_running failed. 
> Retrying...
> 2020-03-27T00:07:54.1907713Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:54.1909876Z   - To fix this, run: minikube start
> 2020-03-27T00:07:54.1921479Z Starting minikube ...
> 2020-03-27T00:07:54.3388738Z * minikube v1.9.0 on Ubuntu 16.04
> 2020-03-27T00:07:54.3395499Z * Using the none driver based on user 
> configuration
> 2020-03-27T00:07:54.3396443Z X The none driver requires conntrack to be 
> installed for kubernetes version 1.18.0
> 2020-03-27T00:07:54.3824399Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:54.3837652Z   - To fix this, run: minikube start
> 2020-03-27T00:07:54.4203902Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:54.4204895Z   - To fix this, run: minikube start
> 2020-03-27T00:07:54.4217866Z Command: start_kubernetes_if_not_running failed. 
> Retrying...
> 2020-03-27T00:07:59.4235917Z Command: start_kubernetes_if_not_running failed 
> 3 times.
> 2020-03-27T00:07:59.4236459Z Could not start minikube. Aborting...
> 

[jira] [Updated] (FLINK-16821) Run Kubernetes test failed with invalid named "minikube"

2020-03-26 Thread Zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijiang updated FLINK-16821:
-
Priority: Critical  (was: Major)

> Run Kubernetes test failed with invalid named "minikube"
> 
>
> Key: FLINK-16821
> URL: https://issues.apache.org/jira/browse/FLINK-16821
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes, Tests
>Reporter: Zhijiang
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.11.0
>
>
> This is the test run 
> [https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6702=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5]
> Log output
> {code:java}
> 2020-03-27T00:07:38.9666021Z Running 'Run Kubernetes test'
> 2020-03-27T00:07:38.956Z 
> ==
> 2020-03-27T00:07:38.9677101Z TEST_DATA_DIR: 
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-38967103614
> 2020-03-27T00:07:41.7529865Z Flink dist directory: 
> /home/vsts/work/1/s/flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT
> 2020-03-27T00:07:41.7721475Z Flink dist directory: 
> /home/vsts/work/1/s/flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT
> 2020-03-27T00:07:41.8208394Z Docker version 19.03.8, build afacb8b7f0
> 2020-03-27T00:07:42.4793914Z docker-compose version 1.25.4, build 8d51620a
> 2020-03-27T00:07:42.5359301Z Installing minikube ...
> 2020-03-27T00:07:42.5494076Z   % Total% Received % Xferd  Average Speed   
> TimeTime Time  Current
> 2020-03-27T00:07:42.5494729Z  Dload  Upload   
> Total   SpentLeft  Speed
> 2020-03-27T00:07:42.5498136Z 
> 2020-03-27T00:07:42.6214887Z   0 00 00 0  0  0 
> --:--:-- --:--:-- --:--:-- 0
> 2020-03-27T00:07:43.3467750Z   0 00 00 0  0  0 
> --:--:-- --:--:-- --:--:-- 0
> 2020-03-27T00:07:43.3469636Z 100 52.0M  100 52.0M0 0  65.2M  0 
> --:--:-- --:--:-- --:--:-- 65.2M
> 2020-03-27T00:07:43.4262625Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:43.4264438Z   - To fix this, run: minikube start
> 2020-03-27T00:07:43.4282404Z Starting minikube ...
> 2020-03-27T00:07:43.7749694Z * minikube v1.9.0 on Ubuntu 16.04
> 2020-03-27T00:07:43.7761742Z * Using the none driver based on user 
> configuration
> 2020-03-27T00:07:43.7762229Z X The none driver requires conntrack to be 
> installed for kubernetes version 1.18.0
> 2020-03-27T00:07:43.8202161Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:43.8203353Z   - To fix this, run: minikube start
> 2020-03-27T00:07:43.8568899Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:43.8570685Z   - To fix this, run: minikube start
> 2020-03-27T00:07:43.8583793Z Command: start_kubernetes_if_not_running failed. 
> Retrying...
> 2020-03-27T00:07:48.9017252Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:48.9019347Z   - To fix this, run: minikube start
> 2020-03-27T00:07:48.9031515Z Starting minikube ...
> 2020-03-27T00:07:49.0612601Z * minikube v1.9.0 on Ubuntu 16.04
> 2020-03-27T00:07:49.0616688Z * Using the none driver based on user 
> configuration
> 2020-03-27T00:07:49.0620173Z X The none driver requires conntrack to be 
> installed for kubernetes version 1.18.0
> 2020-03-27T00:07:49.1040676Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:49.1042353Z   - To fix this, run: minikube start
> 2020-03-27T00:07:49.1453522Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:49.1454594Z   - To fix this, run: minikube start
> 2020-03-27T00:07:49.1468436Z Command: start_kubernetes_if_not_running failed. 
> Retrying...
> 2020-03-27T00:07:54.1907713Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:54.1909876Z   - To fix this, run: minikube start
> 2020-03-27T00:07:54.1921479Z Starting minikube ...
> 2020-03-27T00:07:54.3388738Z * minikube v1.9.0 on Ubuntu 16.04
> 2020-03-27T00:07:54.3395499Z * Using the none driver based on user 
> configuration
> 2020-03-27T00:07:54.3396443Z X The none driver requires conntrack to be 
> installed for kubernetes version 1.18.0
> 2020-03-27T00:07:54.3824399Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:54.3837652Z   - To fix this, run: minikube start
> 2020-03-27T00:07:54.4203902Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:54.4204895Z   - To fix this, run: minikube start
> 2020-03-27T00:07:54.4217866Z Command: start_kubernetes_if_not_running failed. 
> Retrying...
> 2020-03-27T00:07:59.4235917Z Command: start_kubernetes_if_not_running failed 
> 3 times.
> 2020-03-27T00:07:59.4236459Z Could not start minikube. Aborting...
> 

[GitHub] [flink] flinkbot commented on issue #11542: [FLINK-16303][rest] Enable retrieval of custom JobManager log files

2020-03-26 Thread GitBox
flinkbot commented on issue #11542: [FLINK-16303][rest] Enable retrieval of 
custom JobManager log files
URL: https://github.com/apache/flink/pull/11542#issuecomment-604808106
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 6c590310abbd4e8bedb23538a98e3db87adb82be (Fri Mar 27 
04:26:19 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

 Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11425: [FLINK-16125][kafka] Remove unnecessary zookeeper.connect property validation

2020-03-26 Thread GitBox
flinkbot edited a comment on issue #11425: [FLINK-16125][kafka] Remove 
unnecessary zookeeper.connect property validation
URL: https://github.com/apache/flink/pull/11425#issuecomment-599933380
 
 
   
   ## CI report:
   
   * b9f13d82a925af91d07dddf74f9a17cac987681e Travis: 
[CANCELED](https://travis-ci.com/github/flink-ci/flink/builds/155733885) 
   * 12419aadabd038edaf1cf2f705a02a4d1edbc3f7 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-16821) Run Kubernetes test failed with invalid named "minikube"

2020-03-26 Thread Zhijiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17068254#comment-17068254
 ] 

Zhijiang commented on FLINK-16821:
--

Another instance 
[https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6705=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5]

> Run Kubernetes test failed with invalid named "minikube"
> 
>
> Key: FLINK-16821
> URL: https://issues.apache.org/jira/browse/FLINK-16821
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes, Tests
>Reporter: Zhijiang
>Priority: Major
>  Labels: test-stability
>
> This is the test run 
> [https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6702=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5]
> Log output
> {code:java}
> 2020-03-27T00:07:38.9666021Z Running 'Run Kubernetes test'
> 2020-03-27T00:07:38.956Z 
> ==
> 2020-03-27T00:07:38.9677101Z TEST_DATA_DIR: 
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-38967103614
> 2020-03-27T00:07:41.7529865Z Flink dist directory: 
> /home/vsts/work/1/s/flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT
> 2020-03-27T00:07:41.7721475Z Flink dist directory: 
> /home/vsts/work/1/s/flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT
> 2020-03-27T00:07:41.8208394Z Docker version 19.03.8, build afacb8b7f0
> 2020-03-27T00:07:42.4793914Z docker-compose version 1.25.4, build 8d51620a
> 2020-03-27T00:07:42.5359301Z Installing minikube ...
> 2020-03-27T00:07:42.5494076Z   % Total% Received % Xferd  Average Speed   
> TimeTime Time  Current
> 2020-03-27T00:07:42.5494729Z  Dload  Upload   
> Total   SpentLeft  Speed
> 2020-03-27T00:07:42.5498136Z 
> 2020-03-27T00:07:42.6214887Z   0 00 00 0  0  0 
> --:--:-- --:--:-- --:--:-- 0
> 2020-03-27T00:07:43.3467750Z   0 00 00 0  0  0 
> --:--:-- --:--:-- --:--:-- 0
> 2020-03-27T00:07:43.3469636Z 100 52.0M  100 52.0M0 0  65.2M  0 
> --:--:-- --:--:-- --:--:-- 65.2M
> 2020-03-27T00:07:43.4262625Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:43.4264438Z   - To fix this, run: minikube start
> 2020-03-27T00:07:43.4282404Z Starting minikube ...
> 2020-03-27T00:07:43.7749694Z * minikube v1.9.0 on Ubuntu 16.04
> 2020-03-27T00:07:43.7761742Z * Using the none driver based on user 
> configuration
> 2020-03-27T00:07:43.7762229Z X The none driver requires conntrack to be 
> installed for kubernetes version 1.18.0
> 2020-03-27T00:07:43.8202161Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:43.8203353Z   - To fix this, run: minikube start
> 2020-03-27T00:07:43.8568899Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:43.8570685Z   - To fix this, run: minikube start
> 2020-03-27T00:07:43.8583793Z Command: start_kubernetes_if_not_running failed. 
> Retrying...
> 2020-03-27T00:07:48.9017252Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:48.9019347Z   - To fix this, run: minikube start
> 2020-03-27T00:07:48.9031515Z Starting minikube ...
> 2020-03-27T00:07:49.0612601Z * minikube v1.9.0 on Ubuntu 16.04
> 2020-03-27T00:07:49.0616688Z * Using the none driver based on user 
> configuration
> 2020-03-27T00:07:49.0620173Z X The none driver requires conntrack to be 
> installed for kubernetes version 1.18.0
> 2020-03-27T00:07:49.1040676Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:49.1042353Z   - To fix this, run: minikube start
> 2020-03-27T00:07:49.1453522Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:49.1454594Z   - To fix this, run: minikube start
> 2020-03-27T00:07:49.1468436Z Command: start_kubernetes_if_not_running failed. 
> Retrying...
> 2020-03-27T00:07:54.1907713Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:54.1909876Z   - To fix this, run: minikube start
> 2020-03-27T00:07:54.1921479Z Starting minikube ...
> 2020-03-27T00:07:54.3388738Z * minikube v1.9.0 on Ubuntu 16.04
> 2020-03-27T00:07:54.3395499Z * Using the none driver based on user 
> configuration
> 2020-03-27T00:07:54.3396443Z X The none driver requires conntrack to be 
> installed for kubernetes version 1.18.0
> 2020-03-27T00:07:54.3824399Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:54.3837652Z   - To fix this, run: minikube start
> 2020-03-27T00:07:54.4203902Z * There is no local cluster named "minikube"
> 2020-03-27T00:07:54.4204895Z   - To fix this, run: minikube start
> 2020-03-27T00:07:54.4217866Z Command: start_kubernetes_if_not_running failed. 
> Retrying...
> 2020-03-27T00:07:59.4235917Z 

[GitHub] [flink] flinkbot edited a comment on issue #11158: [FLINK-16070] [table-planner-blink] blink stream planner supports remove constant keys from an aggregate

2020-03-26 Thread GitBox
flinkbot edited a comment on issue #11158: [FLINK-16070] [table-planner-blink] 
blink stream planner supports remove constant keys from an aggregate
URL: https://github.com/apache/flink/pull/11158#issuecomment-589009467
 
 
   
   ## CI report:
   
   * 7a4635a9699b7b524265c41ccb0ddedf5dd30e0e Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/155728830) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6710)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-16303) add log list and read log by name for jobmanager

2020-03-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-16303:
---
Labels: pull-request-available  (was: )

> add log list and read log by name for jobmanager
> 
>
> Key: FLINK-16303
> URL: https://issues.apache.org/jira/browse/FLINK-16303
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / REST
>Reporter: lining
>Assignee: lining
>Priority: Major
>  Labels: pull-request-available
>
> * list jobmanager all log file
>  ** /jobmanager/logs
>  ** 
> {code:java}
> {
>   "logs": [
> {
>   "name": "jobmanager.log",
>   "size": 12529
> }
>   ]
> }{code}
>  * read jobmanager log file
>  **  /jobmanager/log/[filename]
>  ** response: same as jobmanager's log



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] jinglining opened a new pull request #11542: [FLINK-16303][rest] Enable retrieval of custom JobManager log files

2020-03-26 Thread GitBox
jinglining opened a new pull request #11542: [FLINK-16303][rest] Enable 
retrieval of custom JobManager log files
URL: https://github.com/apache/flink/pull/11542
 
 
   ## What is the purpose of the change
   
   This pull request extends the REST API so that it can list the JobManager's 
log files and retrieve a log file by a custom name.
   
   
   ## Brief change log
   
   - get log list
   - get the log by name
   
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
   - Added JobManagerLogListHandlerTest, which verifies JobManagerLogListHandler.
   - Added WebFrontendITCase.getCustomLogFiles, which verifies 
JobManagerCustomLogHandler.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): ( no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no )
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes)
 - If yes, how is the feature documented? (docs)
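   
   For reviewers, a small usage sketch of the two endpoints (paths as 
described in FLINK-16303; assumes a JobManager REST endpoint on 
localhost:8081 and Java 11+ for the HTTP client):
   
   ```java
   import java.net.URI;
   import java.net.http.HttpClient;
   import java.net.http.HttpRequest;
   import java.net.http.HttpResponse;

   public class JobManagerLogsExample {
       public static void main(String[] args) throws Exception {
           HttpClient client = HttpClient.newHttpClient();

           // List all JobManager log files.
           HttpResponse<String> logs = client.send(
                   HttpRequest.newBuilder(
                           URI.create("http://localhost:8081/jobmanager/logs")).build(),
                   HttpResponse.BodyHandlers.ofString());
           System.out.println(logs.body());

           // Fetch one log file by name.
           HttpResponse<String> log = client.send(
                   HttpRequest.newBuilder(
                           URI.create("http://localhost:8081/jobmanager/log/jobmanager.log")).build(),
                   HttpResponse.BodyHandlers.ofString());
           System.out.println(log.body());
       }
   }
   ```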


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #11541: [FLINK-15416][network] add task manager netty client retry mechanism

2020-03-26 Thread GitBox
flinkbot commented on issue #11541: [FLINK-15416][network] add task manager 
netty client retry mechanism
URL: https://github.com/apache/flink/pull/11541#issuecomment-604807682
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 3bc439b7c92f5e764033133845ac1ff2a2b14b91 (Fri Mar 27 
04:24:31 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-15416).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

 Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-15416) Add Retry Mechanism for PartitionRequestClientFactory.ConnectingChannel

2020-03-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15416:
---
Labels: pull-request-available  (was: )

> Add Retry Mechanism for PartitionRequestClientFactory.ConnectingChannel
> ---
>
> Key: FLINK-15416
> URL: https://issues.apache.org/jira/browse/FLINK-15416
> Project: Flink
>  Issue Type: Wish
>  Components: Runtime / Network
>Affects Versions: 1.10.0
>Reporter: Zhenqiu Huang
>Priority: Major
>  Labels: pull-request-available
>
> We run a Flink job with 256 TMs in production. The job internally has keyBy 
> logic, so it builds 256 * 256 communication channels. An outage happened 
> when an internal chip link on one of the network switches connecting these 
> machines broke. During the outage, Flink couldn't restart successfully, as 
> there was always an exception like "Connecting the channel failed: 
> Connecting to remote task manager '/10.14.139.6:41300' has failed. This 
> might indicate that the remote task manager has been lost."
> After a deep investigation with the network infrastructure team, we found 
> there are 6 switches connecting these machines. Each switch has 32 physical 
> links, and every socket is round-robin assigned to one of the links for 
> load balancing. Thus, on average 256 * 256 / (6 * 32 * 2) = 170 channels 
> will be assigned to the broken link. The issue lasted for 4 hours until we 
> found the broken link and restarted the problematic switch.
> Given this, we found that retrying channel creation helps to resolve this 
> issue. For our networking topology, we can set the retry count to 2: since 
> 170 / (132 * 132) < 1, after two retries no channel out of the 170 will be 
> assigned to the broken link in the average case.
> I think this is a valuable fix for this kind of partial network partition.
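A quick sanity check of the arithmetic above, as a minimal sketch (an 
illustration only; it assumes the 6 * 32 * 2 denominator counts both 
directions of each physical link in the round-robin):
{code:java}
public class ChannelEstimate {
    public static void main(String[] args) {
        int channels = 256 * 256;   // all-to-all channels between 256 TMs
        int linkSlots = 6 * 32 * 2; // 6 switches x 32 links x 2 directions
        // Average number of channels mapped onto a single broken link:
        System.out.println(channels / linkSlots); // prints 170
    }
}
{code}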



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16821) Run Kubernetes test failed with invalid named "minikube"

2020-03-26 Thread Zhijiang (Jira)
Zhijiang created FLINK-16821:


 Summary: Run Kubernetes test failed with invalid named "minikube"
 Key: FLINK-16821
 URL: https://issues.apache.org/jira/browse/FLINK-16821
 Project: Flink
  Issue Type: Bug
  Components: Deployment / Kubernetes, Tests
Reporter: Zhijiang


This is the test run 
[https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6702=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5]

Log output
{code:java}
2020-03-27T00:07:38.9666021Z Running 'Run Kubernetes test'
2020-03-27T00:07:38.956Z 
==
2020-03-27T00:07:38.9677101Z TEST_DATA_DIR: 
/home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-38967103614
2020-03-27T00:07:41.7529865Z Flink dist directory: 
/home/vsts/work/1/s/flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT
2020-03-27T00:07:41.7721475Z Flink dist directory: 
/home/vsts/work/1/s/flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT
2020-03-27T00:07:41.8208394Z Docker version 19.03.8, build afacb8b7f0
2020-03-27T00:07:42.4793914Z docker-compose version 1.25.4, build 8d51620a
2020-03-27T00:07:42.5359301Z Installing minikube ...
2020-03-27T00:07:42.5494076Z   % Total% Received % Xferd  Average Speed   
TimeTime Time  Current
2020-03-27T00:07:42.5494729Z  Dload  Upload   
Total   SpentLeft  Speed
2020-03-27T00:07:42.5498136Z 
2020-03-27T00:07:42.6214887Z   0 00 00 0  0  0 
--:--:-- --:--:-- --:--:-- 0
2020-03-27T00:07:43.3467750Z   0 00 00 0  0  0 
--:--:-- --:--:-- --:--:-- 0
2020-03-27T00:07:43.3469636Z 100 52.0M  100 52.0M0 0  65.2M  0 
--:--:-- --:--:-- --:--:-- 65.2M
2020-03-27T00:07:43.4262625Z * There is no local cluster named "minikube"
2020-03-27T00:07:43.4264438Z   - To fix this, run: minikube start
2020-03-27T00:07:43.4282404Z Starting minikube ...
2020-03-27T00:07:43.7749694Z * minikube v1.9.0 on Ubuntu 16.04
2020-03-27T00:07:43.7761742Z * Using the none driver based on user configuration
2020-03-27T00:07:43.7762229Z X The none driver requires conntrack to be 
installed for kubernetes version 1.18.0
2020-03-27T00:07:43.8202161Z * There is no local cluster named "minikube"
2020-03-27T00:07:43.8203353Z   - To fix this, run: minikube start
2020-03-27T00:07:43.8568899Z * There is no local cluster named "minikube"
2020-03-27T00:07:43.8570685Z   - To fix this, run: minikube start
2020-03-27T00:07:43.8583793Z Command: start_kubernetes_if_not_running failed. 
Retrying...
2020-03-27T00:07:48.9017252Z * There is no local cluster named "minikube"
2020-03-27T00:07:48.9019347Z   - To fix this, run: minikube start
2020-03-27T00:07:48.9031515Z Starting minikube ...
2020-03-27T00:07:49.0612601Z * minikube v1.9.0 on Ubuntu 16.04
2020-03-27T00:07:49.0616688Z * Using the none driver based on user configuration
2020-03-27T00:07:49.0620173Z X The none driver requires conntrack to be 
installed for kubernetes version 1.18.0
2020-03-27T00:07:49.1040676Z * There is no local cluster named "minikube"
2020-03-27T00:07:49.1042353Z   - To fix this, run: minikube start
2020-03-27T00:07:49.1453522Z * There is no local cluster named "minikube"
2020-03-27T00:07:49.1454594Z   - To fix this, run: minikube start
2020-03-27T00:07:49.1468436Z Command: start_kubernetes_if_not_running failed. 
Retrying...
2020-03-27T00:07:54.1907713Z * There is no local cluster named "minikube"
2020-03-27T00:07:54.1909876Z   - To fix this, run: minikube start
2020-03-27T00:07:54.1921479Z Starting minikube ...
2020-03-27T00:07:54.3388738Z * minikube v1.9.0 on Ubuntu 16.04
2020-03-27T00:07:54.3395499Z * Using the none driver based on user configuration
2020-03-27T00:07:54.3396443Z X The none driver requires conntrack to be 
installed for kubernetes version 1.18.0
2020-03-27T00:07:54.3824399Z * There is no local cluster named "minikube"
2020-03-27T00:07:54.3837652Z   - To fix this, run: minikube start
2020-03-27T00:07:54.4203902Z * There is no local cluster named "minikube"
2020-03-27T00:07:54.4204895Z   - To fix this, run: minikube start
2020-03-27T00:07:54.4217866Z Command: start_kubernetes_if_not_running failed. 
Retrying...
2020-03-27T00:07:59.4235917Z Command: start_kubernetes_if_not_running failed 3 
times.
2020-03-27T00:07:59.4236459Z Could not start minikube. Aborting...
2020-03-27T00:07:59.8439850Z The connection to the server localhost:8080 was 
refused - did you specify the right host or port?
2020-03-27T00:07:59.8939088Z The connection to the server localhost:8080 was 
refused - did you specify the right host or port?
2020-03-27T00:07:59.9515679Z The connection to the server localhost:8080 was 
refused - did you specify the right host or port?
2020-03-27T00:07:59.9528463Z Stopping minikube ...
2020-03-27T00:07:59.9921558Z 
{code}

[GitHub] [flink] HuangZhenQiu opened a new pull request #11541: [FLINK-15416][network] add task manager netty client retry mechanism

2020-03-26 Thread GitBox
HuangZhenQiu opened a new pull request #11541: [FLINK-15416][network] add task 
manager netty client retry mechanism
URL: https://github.com/apache/flink/pull/11541
 
 
   ## What is the purpose of the change
   Add retry logic for netty client creation in PartitionRequestClientFactory. 
It is useful to make the Flink runtime tolerant of physical link issues in the 
switch. 
   
   ## Brief changelog
   
 - Add a netty client retry config
 - Add retry logic in PartitionRequestClientFactory for building new 
channel.
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes)
 - If yes, how is the feature documented? (docs)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] vthinkxie commented on a change in pull request #11436: [FLINK-11404][web] add load more feature in exception page

2020-03-26 Thread GitBox
vthinkxie commented on a change in pull request #11436: [FLINK-11404][web] add 
load more feature in exception page
URL: https://github.com/apache/flink/pull/11436#discussion_r399027098
 
 

 ##
 File path: 
flink-runtime-web/web-dashboard/src/app/pages/job/exceptions/job-exceptions.component.ts
 ##
 @@ -31,28 +31,45 @@ import { JobService } from 'services';
 export class JobExceptionsComponent implements OnInit {
   rootException = '';
   listOfException: JobExceptionItemInterface[] = [];
+  truncated = false;
+  isLoading = false;
+  maxExceptions = 0;
 
   trackExceptionBy(_: number, node: JobExceptionItemInterface) {
 return node.timestamp;
   }
-
-  constructor(private jobService: JobService, private cdr: ChangeDetectorRef) 
{}
-
-  ngOnInit() {
+  loadMore() {
+this.isLoading = true;
+this.maxExceptions += 10;
 this.jobService.jobDetail$
   .pipe(
 distinctUntilChanged((pre, next) => pre.jid === next.jid),
-flatMap(job => this.jobService.loadExceptions(job.jid))
+flatMap(job => this.jobService.loadExceptions(job.jid, 
this.maxExceptions))
   )
-  .subscribe(data => {
-// @ts-ignore
-if (data['root-exception']) {
-  this.rootException = formatDate(data.timestamp, '-MM-dd 
HH:mm:ss', 'en') + '\n' + data['root-exception'];
-} else {
-  this.rootException = 'No Root Exception';
+  .subscribe(
+data => {
+  // @ts-ignore
+  if (data['root-exception']) {
+this.rootException =
+  formatDate(data.timestamp, '-MM-dd HH:mm:ss', 'en') + '\n' + 
data['root-exception'];
+  } else {
+this.rootException = 'No Root Exception';
+  }
+  this.truncated = data.truncated;
+  this.listOfException = data['all-exceptions'];
+  this.isLoading = false;
 
 Review comment:
   thanks for your advice; used `tap` instead of `finalize` since the source 
never completes.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-16820) support reading array of timestamp, date, and time in JDBCTableSource

2020-03-26 Thread Bowen Li (Jira)
Bowen Li created FLINK-16820:


 Summary: support reading array of timestamp, date, and time in 
JDBCTableSource
 Key: FLINK-16820
 URL: https://issues.apache.org/jira/browse/FLINK-16820
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / JDBC
Reporter: Bowen Li
Assignee: Bowen Li
 Fix For: 1.11.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on issue #11540: [FLINK-16099] Translate "HiveCatalog" page of "Hive Integration" into…

2020-03-26 Thread GitBox
flinkbot commented on issue #11540: [FLINK-16099] Translate "HiveCatalog" page 
of "Hive Integration" into…
URL: https://github.com/apache/flink/pull/11540#issuecomment-604804466
 
 
   
   ## CI report:
   
   * 31c00a13b2bb7f15b4fa594b5c9148f971769e86 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] vthinkxie commented on issue #11498: [FLINK-16741][WEB] add tm log list & tm log detail page

2020-03-26 Thread GitBox
vthinkxie commented on issue #11498: [FLINK-16741][WEB] add tm log list & tm 
log detail page
URL: https://github.com/apache/flink/pull/11498#issuecomment-604804589
 
 
   
![image](https://user-images.githubusercontent.com/1506722/77720608-604c4980-7023-11ea-9f62-a15e98e62dd6.png)
   Hi @GJL 
   I added a compact mode to the `flink-refresh-download` component; it would 
fix the style error in the job manager. 
   The UI of the jobmanager's log will be the same as the taskmanager's after 
https://issues.apache.org/jira/browse/FLINK-16303


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11539: [FLINK-16800][table-common] Deal with nested types in TypeMappingUtils#checkIfCompatible

2020-03-26 Thread GitBox
flinkbot edited a comment on issue #11539: [FLINK-16800][table-common] Deal 
with nested types in TypeMappingUtils#checkIfCompatible
URL: https://github.com/apache/flink/pull/11539#issuecomment-604788215
 
 
   
   ## CI report:
   
   * bfaf6ddb417806aba9b586dee0b3197d3fb50e7b Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/155731846) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6711)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-16819) Got KryoException while using UDAF in flink1.9

2020-03-26 Thread Xingxing Di (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingxing Di updated FLINK-16819:

Description: 
Recently, we have been trying to upgrade online *sql jobs* from Flink 1.7 to 
Flink 1.9. Most jobs work fine, but some jobs got KryoExceptions.

We found that UDAFs trigger this exception; BTW, we are using the Blink planner.

 *Here is the full stack trace:*
 2020-03-27 11:46:55
 com.esotericsoftware.kryo.KryoException: java.lang.IndexOutOfBoundsException: 
Index: 104, Size: 2
 Serialization trace:
 seed (java.util.Random)
 gen (com.tdunning.math.stats.AVLTreeDigest)
 at 
com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125)
 at 
com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:528)
 at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:679)
 at 
com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:528)
 at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:761)
 at 
org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.deserialize(KryoSerializer.java:346)
 at 
org.apache.flink.util.InstantiationUtil.deserializeFromByteArray(InstantiationUtil.java:536)
 at 
org.apache.flink.table.dataformat.BinaryGeneric.getJavaObjectFromBinaryGeneric(BinaryGeneric.java:86)
 at 
org.apache.flink.table.dataformat.DataFormatConverters$GenericConverter.toExternalImpl(DataFormatConverters.java:628)
 at 
org.apache.flink.table.dataformat.DataFormatConverters$GenericConverter.toExternalImpl(DataFormatConverters.java:633)
 at 
org.apache.flink.table.dataformat.DataFormatConverters$DataFormatConverter.toExternal(DataFormatConverters.java:320)
 at 
org.apache.flink.table.dataformat.DataFormatConverters$PojoConverter.toExternalImpl(DataFormatConverters.java:1293)
 at 
org.apache.flink.table.dataformat.DataFormatConverters$PojoConverter.toExternalImpl(DataFormatConverters.java:1257)
 at 
org.apache.flink.table.dataformat.DataFormatConverters$DataFormatConverter.toExternal(DataFormatConverters.java:302)
 at GroupAggsHandler$71.setAccumulators(Unknown Source)
 at 
org.apache.flink.table.runtime.operators.aggregate.GroupAggFunction.processElement(GroupAggFunction.java:151)
 at 
org.apache.flink.table.runtime.operators.aggregate.GroupAggFunction.processElement(GroupAggFunction.java:43)
 at 
org.apache.flink.streaming.api.operators.KeyedProcessOperator.processElement(KeyedProcessOperator.java:85)
 at 
org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processElement(StreamOneInputProcessor.java:164)
 at 
org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:143)
 at 
org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:279)
 at 
org.apache.flink.streaming.runtime.tasks.StreamTask.run(StreamTask.java:301)
 at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:406)
 at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705)
 at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530)
 at java.lang.Thread.run(Thread.java:748)
 Caused by: java.lang.IndexOutOfBoundsException: Index: 104, Size: 2
 at java.util.ArrayList.rangeCheck(ArrayList.java:657)
 at java.util.ArrayList.get(ArrayList.java:433)
 at 
com.esotericsoftware.kryo.util.MapReferenceResolver.getReadObject(MapReferenceResolver.java:42)
 at com.esotericsoftware.kryo.Kryo.readReferenceOrNull(Kryo.java:805)
 at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:677)
 at 
com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 ... 26 more

  was:
Recently,  we are trying to upgrade online *sql jobs* from flink1.7 to flink1.9 
, most jobs works fine, but some jobs got  KryoExceptions. 

We found that UDAF will trigger this exception, btw ,we are using blink planner.

Here is the full stack trace:

```
 2020-03-27 11:46:55
 com.esotericsoftware.kryo.KryoException: java.lang.IndexOutOfBoundsException: 
Index: 104, Size: 2
 Serialization trace:
 seed (java.util.Random)
 gen (com.tdunning.math.stats.AVLTreeDigest)
 at 
com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125)
 at 
com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:528)
 at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:679)
 at 
com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:528)
 at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:761)
 at 
org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.deserialize(KryoSerializer.java:346)
 at 

[GitHub] [flink] lirui-apache commented on issue #11524: [FLINK-16803][hive] Need to make sure partition inherit table spec wh…

2020-03-26 Thread GitBox
lirui-apache commented on issue #11524: [FLINK-16803][hive] Need to make sure 
partition inherit table spec wh…
URL: https://github.com/apache/flink/pull/11524#issuecomment-604804335
 
 
   cc @JingsongLi 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11158: [FLINK-16070] [table-planner-blink] blink stream planner supports remove constant keys from an aggregate

2020-03-26 Thread GitBox
flinkbot edited a comment on issue #11158: [FLINK-16070] [table-planner-blink] 
blink stream planner supports remove constant keys from an aggregate
URL: https://github.com/apache/flink/pull/11158#issuecomment-589009467
 
 
   
   ## CI report:
   
   * 7a4635a9699b7b524265c41ccb0ddedf5dd30e0e Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/155728830) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6710)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11504: [FLINK-16767][hive] Failed to read Hive table with RegexSerDe

2020-03-26 Thread GitBox
flinkbot edited a comment on issue #11504: [FLINK-16767][hive] Failed to read 
Hive table with RegexSerDe
URL: https://github.com/apache/flink/pull/11504#issuecomment-603693689
 
 
   
   ## CI report:
   
   * e12b582ecf20820f5a79e71190785788cd99d552 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/155029566) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6610)
 
   * e670931736b229bf5477d19fe2905f458ae236a6 Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/155735228) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6714)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11425: [FLINK-16125][kafka] Remove unnecessary zookeeper.connect property validation

2020-03-26 Thread GitBox
flinkbot edited a comment on issue #11425: [FLINK-16125][kafka] Remove 
unnecessary zookeeper.connect property validation
URL: https://github.com/apache/flink/pull/11425#issuecomment-599933380
 
 
   
   ## CI report:
   
   * 296e62e22eb9cff454f1c6756cbd796ec20fbde2 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/155453985) 
   * b9f13d82a925af91d07dddf74f9a17cac987681e Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/155733885) 
   * 12419aadabd038edaf1cf2f705a02a4d1edbc3f7 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-16819) Got KryoException while using UDAF in flink1.9

2020-03-26 Thread Xingxing Di (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingxing Di updated FLINK-16819:

Description: 
Recently, we have been trying to upgrade online *sql jobs* from Flink 1.7 to 
Flink 1.9. Most jobs work fine, but some jobs got KryoExceptions.

We found that UDAFs trigger this exception; BTW, we are using the Blink planner.

Here is the full stack trace:

```
 2020-03-27 11:46:55
 com.esotericsoftware.kryo.KryoException: java.lang.IndexOutOfBoundsException: 
Index: 104, Size: 2
 Serialization trace:
 seed (java.util.Random)
 gen (com.tdunning.math.stats.AVLTreeDigest)
 at 
com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125)
 at 
com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:528)
 at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:679)
 at 
com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 at 
com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:528)
 at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:761)
 at 
org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.deserialize(KryoSerializer.java:346)
 at 
org.apache.flink.util.InstantiationUtil.deserializeFromByteArray(InstantiationUtil.java:536)
 at 
org.apache.flink.table.dataformat.BinaryGeneric.getJavaObjectFromBinaryGeneric(BinaryGeneric.java:86)
 at 
org.apache.flink.table.dataformat.DataFormatConverters$GenericConverter.toExternalImpl(DataFormatConverters.java:628)
 at 
org.apache.flink.table.dataformat.DataFormatConverters$GenericConverter.toExternalImpl(DataFormatConverters.java:633)
 at 
org.apache.flink.table.dataformat.DataFormatConverters$DataFormatConverter.toExternal(DataFormatConverters.java:320)
 at 
org.apache.flink.table.dataformat.DataFormatConverters$PojoConverter.toExternalImpl(DataFormatConverters.java:1293)
 at 
org.apache.flink.table.dataformat.DataFormatConverters$PojoConverter.toExternalImpl(DataFormatConverters.java:1257)
 at 
org.apache.flink.table.dataformat.DataFormatConverters$DataFormatConverter.toExternal(DataFormatConverters.java:302)
 at GroupAggsHandler$71.setAccumulators(Unknown Source)
 at 
org.apache.flink.table.runtime.operators.aggregate.GroupAggFunction.processElement(GroupAggFunction.java:151)
 at 
org.apache.flink.table.runtime.operators.aggregate.GroupAggFunction.processElement(GroupAggFunction.java:43)
 at 
org.apache.flink.streaming.api.operators.KeyedProcessOperator.processElement(KeyedProcessOperator.java:85)
 at 
org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processElement(StreamOneInputProcessor.java:164)
 at 
org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:143)
 at 
org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:279)
 at 
org.apache.flink.streaming.runtime.tasks.StreamTask.run(StreamTask.java:301)
 at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:406)
 at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705)
 at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530)
 at java.lang.Thread.run(Thread.java:748)
 Caused by: java.lang.IndexOutOfBoundsException: Index: 104, Size: 2
 at java.util.ArrayList.rangeCheck(ArrayList.java:657)
 at java.util.ArrayList.get(ArrayList.java:433)
 at 
com.esotericsoftware.kryo.util.MapReferenceResolver.getReadObject(MapReferenceResolver.java:42)
 at com.esotericsoftware.kryo.Kryo.readReferenceOrNull(Kryo.java:805)
 at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:677)
 at 
com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
 ... 26 more
  ```

  was:
Recently,  we are trying to upgrade online *sql jobs* from flink1.7 to flink1.9 
, most jobs works fine, but some jobs got  KryoExceptions. 

We found that UDAF will trigger this exception, btw ,we are using blink planner.

Here is the full stack trace:
2020-03-27 11:46:55
com.esotericsoftware.kryo.KryoException: java.lang.IndexOutOfBoundsException: 
Index: 104, Size: 2
Serialization trace:
seed (java.util.Random)
gen (com.tdunning.math.stats.AVLTreeDigest)
at 
com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125)
at 
com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:528)
at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:679)
at 
com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
at 
com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:528)
at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:761)
at 
org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.deserialize(KryoSerializer.java:346)
at 

[jira] [Created] (FLINK-16819) Got KryoException while using UDAF in flink1.9

2020-03-26 Thread Xingxing Di (Jira)
Xingxing Di created FLINK-16819:
---

 Summary: Got KryoException while using UDAF in flink1.9
 Key: FLINK-16819
 URL: https://issues.apache.org/jira/browse/FLINK-16819
 Project: Flink
  Issue Type: Bug
  Components: API / Type Serialization System, Table SQL / Planner
Affects Versions: 1.9.1
 Environment: Flink1.9.1

Apache hadoop 2.7.2
Reporter: Xingxing Di


Recently, we have been trying to upgrade online *sql jobs* from Flink 1.7 to 
Flink 1.9. Most jobs work fine, but some jobs got KryoExceptions.

We found that UDAFs trigger this exception; BTW, we are using the Blink planner.

Here is the full stack trace:
2020-03-27 11:46:55
com.esotericsoftware.kryo.KryoException: java.lang.IndexOutOfBoundsException: 
Index: 104, Size: 2
Serialization trace:
seed (java.util.Random)
gen (com.tdunning.math.stats.AVLTreeDigest)
at 
com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125)
at 
com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:528)
at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:679)
at 
com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
at 
com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:528)
at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:761)
at 
org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.deserialize(KryoSerializer.java:346)
at 
org.apache.flink.util.InstantiationUtil.deserializeFromByteArray(InstantiationUtil.java:536)
at 
org.apache.flink.table.dataformat.BinaryGeneric.getJavaObjectFromBinaryGeneric(BinaryGeneric.java:86)
at 
org.apache.flink.table.dataformat.DataFormatConverters$GenericConverter.toExternalImpl(DataFormatConverters.java:628)
at 
org.apache.flink.table.dataformat.DataFormatConverters$GenericConverter.toExternalImpl(DataFormatConverters.java:633)
at 
org.apache.flink.table.dataformat.DataFormatConverters$DataFormatConverter.toExternal(DataFormatConverters.java:320)
at 
org.apache.flink.table.dataformat.DataFormatConverters$PojoConverter.toExternalImpl(DataFormatConverters.java:1293)
at 
org.apache.flink.table.dataformat.DataFormatConverters$PojoConverter.toExternalImpl(DataFormatConverters.java:1257)
at 
org.apache.flink.table.dataformat.DataFormatConverters$DataFormatConverter.toExternal(DataFormatConverters.java:302)
at GroupAggsHandler$71.setAccumulators(Unknown Source)
at 
org.apache.flink.table.runtime.operators.aggregate.GroupAggFunction.processElement(GroupAggFunction.java:151)
at 
org.apache.flink.table.runtime.operators.aggregate.GroupAggFunction.processElement(GroupAggFunction.java:43)
at 
org.apache.flink.streaming.api.operators.KeyedProcessOperator.processElement(KeyedProcessOperator.java:85)
at 
org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processElement(StreamOneInputProcessor.java:164)
at 
org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:143)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:279)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.run(StreamTask.java:301)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:406)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IndexOutOfBoundsException: Index: 104, Size: 2
at java.util.ArrayList.rangeCheck(ArrayList.java:657)
at java.util.ArrayList.get(ArrayList.java:433)
at 
com.esotericsoftware.kryo.util.MapReferenceResolver.getReadObject(MapReferenceResolver.java:42)
at com.esotericsoftware.kryo.Kryo.readReferenceOrNull(Kryo.java:805)
at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:677)
at 
com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
... 26 more
 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16709) add a set command to set job name when submit job on sql client

2020-03-26 Thread Jun Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17068249#comment-17068249
 ] 

Jun Zhang commented on FLINK-16709:
---

Hi [~jark], I'm sorry, I should have discussed this with the community first. In 
fact, in many systems, such as Hive and Impala, many variables in the session 
are shared by the jobs in that session. If we want to set a different 
configuration for different jobs, we need to modify the parameters before 
submitting the job.

[~godfreyhe] thanks for your suggestion.

> add a set command to set job name when submit job on sql client
> ---
>
> Key: FLINK-16709
> URL: https://issues.apache.org/jira/browse/FLINK-16709
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
>Reporter: Jun Zhang
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When we submit a SQL job in the SQL client, the default job name is sessionid 
> + sql, and the job name cannot be specified. When the SQL is very long (for 
> example, with 100 columns), this is unfriendly to display on the web UI, and 
> when there are many jobs it is not easy to find a job. So we add a command 
> 'set execution.job-name = jobname' which can set the job name of the 
> submitted job.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] becketqin commented on a change in pull request #10487: [FLINK-15100][connector/source] Add a base implementation for SourceReader.

2020-03-26 Thread GitBox
becketqin commented on a change in pull request #10487: 
[FLINK-15100][connector/source] Add a base implementation for SourceReader.
URL: https://github.com/apache/flink/pull/10487#discussion_r399021138
 
 

 ##
 File path: 
flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/source/reader/synchronization/FutureNotifier.java
 ##
 @@ -0,0 +1,66 @@
+/*
+ Licensed to the Apache Software Foundation (ASF) under one
+ or more contributor license agreements.  See the NOTICE file
+ distributed with this work for additional information
+ regarding copyright ownership.  The ASF licenses this file
+ to you under the Apache License, Version 2.0 (the
+ "License"); you may not use this file except in compliance
+ with the License.  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+ */
+
+package org.apache.flink.connector.base.source.reader.synchronization;
+
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.atomic.AtomicReference;
+
+/**
+ * A class facilitating the asynchronous communication among threads.
+ */
+public class FutureNotifier {
+   /** A future reference. */
+   private final AtomicReference<CompletableFuture<Void>> futureRef;
+
+   public FutureNotifier() {
+   this.futureRef = new AtomicReference<>(null);
+   }
+
+   /**
+* Get the future out of this notifier. The future will be completed 
when someone invokes
+* {@link #notifyComplete()}. If there is already an uncompleted 
future, that existing
+* future will be returned instead of a new one.
+*
+* @return a future that will be completed when {@link 
#notifyComplete()} is invoked.
+*/
+   public CompletableFuture<Void> future() {
+   CompletableFuture<Void> prevFuture = futureRef.get();
+   if (prevFuture != null) {
+   // Someone has created a future for us, don't create a 
new one.
+   return prevFuture;
+   } else {
+   CompletableFuture<Void> newFuture = new CompletableFuture<>();
+   boolean newFutureSet = futureRef.compareAndSet(null, 
newFuture);
+   // If someone created a future after our previous 
check, use that future.
+   // Otherwise, use the new future.
+   return newFutureSet ? newFuture : prevFuture;
 
 Review comment:
   Good catch! Actually it should return `future()` instead, because even if 
the `compareAndSet()` failed, `futureRef.get()` may still return null. So we 
need to call `future()` again to ensure the correct check is made.
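
   For illustration, a minimal sketch of that fix (an assumption of how it 
could look, not the merged code): on a failed `compareAndSet()` the method 
re-enters `future()` rather than returning the possibly-null `prevFuture`.

```java
public CompletableFuture<Void> future() {
    CompletableFuture<Void> prevFuture = futureRef.get();
    if (prevFuture != null) {
        // An uncompleted future already exists; reuse it.
        return prevFuture;
    }
    CompletableFuture<Void> newFuture = new CompletableFuture<>();
    // If another thread won the race (it may even have completed and cleared
    // its future already), retry the whole check instead of returning the
    // stale, possibly-null prevFuture.
    return futureRef.compareAndSet(null, newFuture) ? newFuture : future();
}
```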


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #11540: [FLINK-16099] Translate "HiveCatalog" page of "Hive Integration" into…

2020-03-26 Thread GitBox
flinkbot commented on issue #11540: [FLINK-16099] Translate "HiveCatalog" page 
of "Hive Integration" into…
URL: https://github.com/apache/flink/pull/11540#issuecomment-604800038
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 31c00a13b2bb7f15b4fa594b5c9148f971769e86 (Fri Mar 27 
03:46:43 UTC 2020)
   
   **Warnings:**
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-16099).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11504: [FLINK-16767][hive] Failed to read Hive table with RegexSerDe

2020-03-26 Thread GitBox
flinkbot edited a comment on issue #11504: [FLINK-16767][hive] Failed to read 
Hive table with RegexSerDe
URL: https://github.com/apache/flink/pull/11504#issuecomment-603693689
 
 
   
   ## CI report:
   
   * e12b582ecf20820f5a79e71190785788cd99d552 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/155029566) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6610)
 
   * e670931736b229bf5477d19fe2905f458ae236a6 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-16099) Translate "HiveCatalog" page of "Hive Integration" into Chinese

2020-03-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-16099:
---
Labels: pull-request-available  (was: )

> Translate "HiveCatalog" page of "Hive Integration" into Chinese 
> 
>
> Key: FLINK-16099
> URL: https://issues.apache.org/jira/browse/FLINK-16099
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Priority: Major
>  Labels: pull-request-available
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/hive/hive_catalog.html
> The markdown file is located in 
> {{flink/docs/dev/table/hive/hive_catalog.zh.md}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #11425: [FLINK-16125][kafka] Remove unnecessary zookeeper.connect property validation

2020-03-26 Thread GitBox
flinkbot edited a comment on issue #11425: [FLINK-16125][kafka] Remove 
unnecessary zookeeper.connect property validation
URL: https://github.com/apache/flink/pull/11425#issuecomment-599933380
 
 
   
   ## CI report:
   
   * 296e62e22eb9cff454f1c6756cbd796ec20fbde2 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/155453985) 
   * b9f13d82a925af91d07dddf74f9a17cac987681e Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/155733885) 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] gaofc opened a new pull request #11540: [FLINK-16099] Translate "HiveCatalog" page of "Hive Integration" into…

2020-03-26 Thread GitBox
gaofc opened a new pull request #11540: [FLINK-16099] Translate "HiveCatalog" 
page of "Hive Integration" into…
URL: https://github.com/apache/flink/pull/11540
 
 
   … Chinese
   
   [FLINK-16099] Translate "HiveCatalog" page of "Hive Integration" into Chinese
   
   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluser with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / no / don't 
know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] docete commented on a change in pull request #11538: [FLINK-16813][jdbc] JDBCInputFormat doesn't correctly map Short

2020-03-26 Thread GitBox
docete commented on a change in pull request #11538: [FLINK-16813][jdbc]  
JDBCInputFormat doesn't correctly map Short
URL: https://github.com/apache/flink/pull/11538#discussion_r399018226
 
 

 ##
 File path: 
flink-connectors/flink-jdbc/src/main/java/org/apache/flink/api/java/io/jdbc/JDBCInputFormat.java
 ##
 @@ -110,8 +116,9 @@
private String queryTemplate;
private int resultSetType;
private int resultSetConcurrency;
-   private RowTypeInfo rowTypeInfo;
+   private RowType rowType;
 
 Review comment:
   InputFormat is a runtime concept; should we introduce the table ecosystem 
concept RowType into it?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #11538: [FLINK-16813][jdbc] JDBCInputFormat doesn't correctly map Short

2020-03-26 Thread GitBox
wuchong commented on a change in pull request #11538: [FLINK-16813][jdbc]  
JDBCInputFormat doesn't correctly map Short
URL: https://github.com/apache/flink/pull/11538#discussion_r398997590
 
 

 ##
 File path: 
flink-connectors/flink-jdbc/src/main/java/org/apache/flink/api/java/io/jdbc/dialect/JDBCDialect.java
 ##
 @@ -139,4 +145,23 @@ default String getSelectFromStatement(String tableName, 
String[] selectFields, S
return "SELECT " + selectExpressions + " FROM " +
quoteIdentifier(tableName) + 
(conditionFields.length > 0 ? " WHERE " + fieldExpressions : "");
}
+
+   /**
+* Set {@link Row} with data retrieved from {@link ResultSet} according 
to {@link RowType}.
+*
+* @param resultSet ResultSet from JDBC
+* @param rowType RowType of the row
+* @param reuse The row to set
+*/
+   default void setRow(ResultSet resultSet, RowType rowType, Row reuse) 
throws SQLException {
+   for (int pos = 0; pos < rowType.getFieldCount(); pos++) {
+   LogicalType type = rowType.getTypeAt(pos);
+   Object v = resultSet.getObject(pos + 1);
+   if (type instanceof SmallIntType) {
 
 Review comment:
   We should use `LogicalTypeRoot` to check types instead of `instanceof` for 
better performance. 
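
   A rough sketch of the suggested dispatch (assuming the `setRow` signature 
from the diff above; the shape is illustrative, not the final code):

```java
default void setRow(ResultSet resultSet, RowType rowType, Row reuse) throws SQLException {
    for (int pos = 0; pos < rowType.getFieldCount(); pos++) {
        Object v = resultSet.getObject(pos + 1);
        // Dispatch on the LogicalTypeRoot enum instead of instanceof checks.
        switch (rowType.getTypeAt(pos).getTypeRoot()) {
            case SMALLINT:
                // Some JDBC drivers hand back Integer for SMALLINT columns.
                reuse.setField(pos, ((Integer) v).shortValue());
                break;
            default:
                reuse.setField(pos, v);
        }
    }
}
```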


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #11538: [FLINK-16813][jdbc] JDBCInputFormat doesn't correctly map Short

2020-03-26 Thread GitBox
wuchong commented on a change in pull request #11538: [FLINK-16813][jdbc]  
JDBCInputFormat doesn't correctly map Short
URL: https://github.com/apache/flink/pull/11538#discussion_r399001234
 
 

 ##
 File path: 
flink-connectors/flink-jdbc/src/main/java/org/apache/flink/api/java/io/jdbc/dialect/JDBCDialect.java
 ##
 @@ -139,4 +145,23 @@ default String getSelectFromStatement(String tableName, 
String[] selectFields, S
return "SELECT " + selectExpressions + " FROM " +
quoteIdentifier(tableName) + 
(conditionFields.length > 0 ? " WHERE " + fieldExpressions : "");
}
+
+   /**
+* Set {@link Row} with data retrieved from {@link ResultSet} according 
to {@link RowType}.
+*
+* @param resultSet ResultSet from JDBC
+* @param rowType RowType of the row
+* @param reuse The row to set
+*/
+   default void setRow(ResultSet resultSet, RowType rowType, Row reuse) 
throws SQLException {
 
 Review comment:
   I saw you created an issue FLINK-16811 to introduce `JDBCRowConverter`. 
Could we introduce it in this PR directly? 
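
   For context, a hypothetical sketch of what such a converter could look like 
(the name comes from FLINK-16811; the exact shape here is an assumption):

```java
import org.apache.flink.types.Row;

import java.io.Serializable;
import java.sql.ResultSet;
import java.sql.SQLException;

/** One converter instance per dialect, built from the table's RowType. */
public interface JDBCRowConverter extends Serializable {
    // Reads the current ResultSet row into the reused Flink Row.
    Row toRow(ResultSet resultSet, Row reuse) throws SQLException;
}
```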


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #11538: [FLINK-16813][jdbc] JDBCInputFormat doesn't correctly map Short

2020-03-26 Thread GitBox
wuchong commented on a change in pull request #11538: [FLINK-16813][jdbc]  
JDBCInputFormat doesn't correctly map Short
URL: https://github.com/apache/flink/pull/11538#discussion_r399015861
 
 

 ##
 File path: 
flink-connectors/flink-jdbc/src/main/java/org/apache/flink/api/java/io/jdbc/dialect/JDBCDialects.java
 ##
 @@ -403,5 +413,38 @@ public int minTimestampPrecision() {
);
 
}
+
+   @Override
+   public void setRow(ResultSet resultSet, RowType rowType, Row 
reuse) throws SQLException {
+   for (int pos = 0; pos < rowType.getFieldCount(); pos++) 
{
+   LogicalType type = rowType.getTypeAt(pos);
+   Object v = resultSet.getObject(pos + 1);
+
+   if (type instanceof SmallIntType) {
+   reuse.setField(pos, ((Integer) 
v).shortValue());
+   } else if (type instanceof ArrayType) {
 
 Review comment:
   IIUC, currently, only Postgres supports `ArrayType`?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] wuchong commented on a change in pull request #11538: [FLINK-16813][jdbc] JDBCInputFormat doesn't correctly map Short

2020-03-26 Thread GitBox
wuchong commented on a change in pull request #11538: [FLINK-16813][jdbc]  
JDBCInputFormat doesn't correctly map Short
URL: https://github.com/apache/flink/pull/11538#discussion_r398997030
 
 

 ##
 File path: 
flink-connectors/flink-jdbc/src/main/java/org/apache/flink/api/java/io/jdbc/JDBCInputFormat.java
 ##
 @@ -127,7 +134,7 @@ public JDBCInputFormat() {
 
@Override
public RowTypeInfo getProducedType() {
-   return rowTypeInfo;
+   return (RowTypeInfo) 
fromDataTypeToLegacyInfo(fromLogicalToDataType(rowType));
 
 Review comment:
   This will lose the original conversion class. For example, JDBC uses 
`java.sql.Timestamp` as the produced type; however, it would use 
`LocalDateTime` as the produced type after this change. 
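
   One possible way to keep the legacy conversion class, sketched under the 
assumption that the produced type is derived through the DataType stack 
(`bridgedTo` pins a conversion class onto a DataType):

```java
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.types.DataType;

public class TimestampBridging {
    // Pin the timestamp conversion back to java.sql.Timestamp so the derived
    // TypeInformation keeps the conversion class the input format emits.
    static DataType bridgedTimestamp() {
        return DataTypes.TIMESTAMP(3).bridgedTo(java.sql.Timestamp.class);
    }
}
```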


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11425: [FLINK-16125][kafka] Remove unnecessary zookeeper.connect property validation

2020-03-26 Thread GitBox
flinkbot edited a comment on issue #11425: [FLINK-16125][kafka] Remove 
unnecessary zookeeper.connect property validation
URL: https://github.com/apache/flink/pull/11425#issuecomment-599933380
 
 
   
   ## CI report:
   
   * 296e62e22eb9cff454f1c6756cbd796ec20fbde2 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/155453985) 
   * b9f13d82a925af91d07dddf74f9a17cac987681e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-16794) ClassNotFoundException caused by ClassLoader.getSystemClassLoader using impertinently

2020-03-26 Thread victor.jiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

victor.jiang updated FLINK-16794:
-
Description: 
In some containerized environments, the context classloader is not the system 
classloader; a customized classloader is usually used for class isolation, so 
a ClassNotFoundException may be thrown. It is recommended to use the 
classloader from getClass()/the caller/the current thread context.

The related sources below:

1.flink-clients\src\main\java\org\apache\flink\client\program\ClusterClient.java"(690,33):
 return getAccumulators(jobID, ClassLoader.getSystemClassLoader());
 
2.flink-clients\src\main\java\org\apache\flink\client\program\MiniClusterClient.java"(148,33):
 return getAccumulators(jobID, ClassLoader.getSystemClassLoader());
 
3.flink-runtime\src\main\java\org\apache\flink\runtime\blob\BlobUtils.java"(348,66):
 return (Throwable) InstantiationUtil.deserializeObject(bytes, 
ClassLoader.getSystemClassLoader());
 
4.flink-runtime\src\main\java\org\apache\flink\runtime\rest\messages\json\SerializedThrowableDeserializer.java"(52,68):
 return InstantiationUtil.deserializeObject(serializedException, 
ClassLoader.getSystemClassLoader());
 
5.flink-runtime\src\main\java\org\apache\flink\runtime\rpc\messages\RemoteRpcInvocation.java"(118,67):
 methodInvocation = 
serializedMethodInvocation.deserializeValue(ClassLoader.getSystemClassLoader());

  was:
In same containerization environment,the context classloader is not the 
SystemClassLoader,it uses the customized classloader usually for the classes 
isolation ,so the ClassNotFoundException may be caused。recommends using 
getClass/Caller/ThreadCurrentContext 's ClassLoader。

The related sources below:

1.flink-clients\src\main\java\org\apache\flink\client\program\ClusterClient.java"(690,33):
 return getAccumulators(jobID, ClassLoader.getSystemClassLoader());
2.flink-clients\src\main\java\org\apache\flink\client\program\MiniClusterClient.java"(148,33):
 return getAccumulators(jobID, ClassLoader.getSystemClassLoader());
3.flink-runtime\src\main\java\org\apache\flink\runtime\blob\BlobUtils.java"(348,66):
 return (Throwable) InstantiationUtil.deserializeObject(bytes, 
ClassLoader.getSystemClassLoader());
4.flink-runtime\src\main\java\org\apache\flink\runtime\rest\messages\json\SerializedThrowableDeserializer.java"(52,68):
 return InstantiationUtil.deserializeObject(serializedException, 
ClassLoader.getSystemClassLoader());
5.flink-runtime\src\main\java\org\apache\flink\runtime\rpc\messages\RemoteRpcInvocation.java"(118,67):
 methodInvocation = 
serializedMethodInvocation.deserializeValue(ClassLoader.getSystemClassLoader());


> ClassNotFoundException caused by ClassLoader.getSystemClassLoader using 
> impertinently  
> ---
>
> Key: FLINK-16794
> URL: https://issues.apache.org/jira/browse/FLINK-16794
> Project: Flink
>  Issue Type: Improvement
>  Components: Client / Job Submission, Runtime / REST
>Affects Versions: 1.8.0, 1.8.1, 1.8.2, 1.8.3
>Reporter: victor.jiang
>Priority: Major
>
> In some containerized environments, the context classloader is not the 
> system classloader; a customized classloader is usually used for class 
> isolation, so a ClassNotFoundException may be thrown. It is recommended to 
> use the classloader from getClass()/the caller/the current thread context.
> The related sources below:
> 1.flink-clients\src\main\java\org\apache\flink\client\program\ClusterClient.java"(690,33):
>  return getAccumulators(jobID, ClassLoader.getSystemClassLoader());
>  
> 2.flink-clients\src\main\java\org\apache\flink\client\program\MiniClusterClient.java"(148,33):
>  return getAccumulators(jobID, ClassLoader.getSystemClassLoader());
>  
> 3.flink-runtime\src\main\java\org\apache\flink\runtime\blob\BlobUtils.java"(348,66):
>  return (Throwable) InstantiationUtil.deserializeObject(bytes, 
> ClassLoader.getSystemClassLoader());
>  
> 4.flink-runtime\src\main\java\org\apache\flink\runtime\rest\messages\json\SerializedThrowableDeserializer.java"(52,68):
>  return InstantiationUtil.deserializeObject(serializedException, 
> ClassLoader.getSystemClassLoader());
>  
> 5.flink-runtime\src\main\java\org\apache\flink\runtime\rpc\messages\RemoteRpcInvocation.java"(118,67):
>  methodInvocation = 
> serializedMethodInvocation.deserializeValue(ClassLoader.getSystemClassLoader());



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #11539: [FLINK-16800][table-common] Deal with nested types in TypeMappingUtils#checkIfCompatible

2020-03-26 Thread GitBox
flinkbot edited a comment on issue #11539: [FLINK-16800][table-common] Deal 
with nested types in TypeMappingUtils#checkIfCompatible
URL: https://github.com/apache/flink/pull/11539#issuecomment-604788215
 
 
   
   ## CI report:
   
   * bfaf6ddb417806aba9b586dee0b3197d3fb50e7b Travis: 
[PENDING](https://travis-ci.com/github/flink-ci/flink/builds/155731846) Azure: 
[PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=6711)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi commented on a change in pull request #11511: [FLINK-16771][table-planner-blink] NPE when filtering by decimal column

2020-03-26 Thread GitBox
JingsongLi commented on a change in pull request #11511: 
[FLINK-16771][table-planner-blink] NPE when filtering by decimal column
URL: https://github.com/apache/flink/pull/11511#discussion_r399011271
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/GenerateUtils.scala
 ##
 @@ -346,7 +346,11 @@ object GenerateUtils {
 ctx.addReusableMember(fieldDecimal)
 val value = Decimal.fromBigDecimal(
   literalValue.asInstanceOf[JBigDecimal], precision, scale)
-generateNonNullLiteral(literalType, fieldTerm, value)
+if (value == null) {
 
 Review comment:
So we should improve `generateLiteral`; I think there may be a similar risk 
there too.
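
For context, a minimal sketch of the failure mode using the reproducer's 
numbers (assuming the blink runtime's 
`org.apache.flink.table.dataformat.Decimal`, where `fromBigDecimal` returns 
null when the value does not fit the target precision):

```java
import java.math.BigDecimal;

import org.apache.flink.table.dataformat.Decimal;

public class DecimalLiteralNullDemo {
    public static void main(String[] args) {
        // From the reproducer: cast('123456789.123' as decimal(15, 8)).
        // DECIMAL(15, 8) leaves 15 - 8 = 7 integer digits, but the value
        // has 9, so it cannot be represented at that precision/scale.
        BigDecimal literal = new BigDecimal("123456789.123");
        Decimal value = Decimal.fromBigDecimal(literal, 15, 8);
        // A non-null literal generated from such a value is what leads to
        // the NPE in the generated code.
        System.out.println(value); // expected: null
    }
}
```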


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Resolved] (FLINK-16647) Miss file extension when inserting to hive table with compression

2020-03-26 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee resolved FLINK-16647.
--
Resolution: Fixed

release-1.10:

d33c0eb7e880f91774dbed263ffcaf9c83b23e13

fd9d68f2c6e32894f6ac846f899c466b3f07d972

6758ee33ef7fd87d53e97459d293fa38700c9085

master:

d33c0eb7e880f91774dbed263ffcaf9c83b23e13

fd9d68f2c6e32894f6ac846f899c466b3f07d972

6758ee33ef7fd87d53e97459d293fa38700c9085

> Miss file extension when inserting to hive table with compression
> -
>
> Key: FLINK-16647
> URL: https://issues.apache.org/jira/browse/FLINK-16647
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.10.0
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.1, 1.11.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> When {{hive.exec.compress.output}} is on, we write into Hive tables with a 
> compression codec. But we don't append a proper extension to the resulting 
> files, which means these files can't be consumed later on.
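
A sketch of deriving the missing extension from the configured codec 
(illustrative only, not the actual fix from the commits above; the 
"mapred.output.compression.codec" key is an assumption based on the legacy 
Hadoop API):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.util.ReflectionUtils;

public final class CodecExtensions {

    /** Appends the codec's default extension (e.g. ".gz") when output compression is on. */
    public static String withCodecExtension(Configuration conf, String baseName)
            throws ClassNotFoundException {
        if (!conf.getBoolean("hive.exec.compress.output", false)) {
            return baseName;
        }
        String codecName = conf.get("mapred.output.compression.codec");
        if (codecName == null) {
            return baseName;
        }
        CompressionCodec codec = (CompressionCodec)
                ReflectionUtils.newInstance(conf.getClassByName(codecName), conf);
        return baseName + codec.getDefaultExtension();
    }
}
{code}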



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi merged pull request #11505: [FLINK-16647][table-runtime-blink][hive] Miss file extension when inserting to hive table with compression

2020-03-26 Thread GitBox
JingsongLi merged pull request #11505: [FLINK-16647][table-runtime-blink][hive] 
Miss file extension when inserting to hive table with compression
URL: https://github.com/apache/flink/pull/11505
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] lirui-apache commented on a change in pull request #11511: [FLINK-16771][table-planner-blink] NPE when filtering by decimal column

2020-03-26 Thread GitBox
lirui-apache commented on a change in pull request #11511: 
[FLINK-16771][table-planner-blink] NPE when filtering by decimal column
URL: https://github.com/apache/flink/pull/11511#discussion_r399010468
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/GenerateUtils.scala
 ##
 @@ -346,7 +346,11 @@ object GenerateUtils {
 ctx.addReusableMember(fieldDecimal)
 val value = Decimal.fromBigDecimal(
   literalValue.asInstanceOf[JBigDecimal], precision, scale)
-generateNonNullLiteral(literalType, fieldTerm, value)
+if (value == null) {
 
 Review comment:
That won't work because we expect a decimal literal to be a `BigDecimal`, 
but here the value has already been cast to a `Decimal`.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] JingsongLi closed pull request #11440: [FLINK-16647][table-runtime-blink][hive] Miss file extension when ins…

2020-03-26 Thread GitBox
JingsongLi closed pull request #11440: [FLINK-16647][table-runtime-blink][hive] 
Miss file extension when ins…
URL: https://github.com/apache/flink/pull/11440
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Comment Edited] (FLINK-16693) Legacy planner incompatible with Timestamp backed by LocalDateTime

2020-03-26 Thread Paul Lin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17068212#comment-17068212
 ] 

Paul Lin edited comment on FLINK-16693 at 3/27/20, 3:01 AM:


I'd like to fix this issue, would you please assign this issue to me? [~twalthr]

My rough idea is to replace the old type information in TableSinkUtils with the 
new data types. If it's the right approach, then considering that 
`RowType#equals` now compares not only field types but also field names and 
descriptions, we might need to change it to only check for field types as 
`RowTypeInfo#equals` does.


was (Author: paul lin):
I'd like to fix this issue, would you please assign this issue to me? [~twalthr]

My rough idea is to replace the old type information in TableSinkUtils with the 
new data types. If it's the right approach, then considering that 
`RowTpye#equals` now compares not only field types but also field names and 
descriptions, we might need to change it to only check for field types as 
`RowTypeInfo#equals`.
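
A minimal sketch of the relaxed comparison described above (a hypothetical 
helper, not from the Flink tree; getFieldCount/getTypeAt as on RowType in 
flink-table-common):

{code:java}
import org.apache.flink.table.types.logical.RowType;

public final class RowTypeCompat {

    /** Compares only the field types, ignoring field names and descriptions. */
    public static boolean fieldTypesEqual(RowType a, RowType b) {
        if (a.getFieldCount() != b.getFieldCount()) {
            return false;
        }
        for (int i = 0; i < a.getFieldCount(); i++) {
            // Note: equals() on a nested RowType still compares field names,
            // so a complete fix would need to recurse into nested fields.
            if (!a.getTypeAt(i).equals(b.getTypeAt(i))) {
                return false;
            }
        }
        return true;
    }
}
{code}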

> Legacy planner incompatible with Timestamp backed by LocalDateTime
> --
>
> Key: FLINK-16693
> URL: https://issues.apache.org/jira/browse/FLINK-16693
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Legacy Planner
>Affects Versions: 1.10.0
>Reporter: Paul Lin
>Priority: Major
>
> Recently I upgraded a simple application that inserts static data into a 
> table from 1.9.0 to 1.10.0, and 
> encountered a timestamp type incompatibility problem during the table sink 
> validation.
> The SQL is like:
> ```
> insert into kafka.test.tbl_a # schema: (user_name STRING, user_id INT, 
> login_time TIMESTAMP)
> select ('ann', 1000, TIMESTAMP '2019-12-30 00:00:00')
> ```
> And the error thrown:
> ```
> Field types of query result and registered TableSink `kafka`.`test`.`tbl_a` 
> do not match.
>   Query result schema: [EXPR$0: String, EXPR$1: Integer, EXPR$2: 
> Timestamp]
>   TableSink schema:[user_name: String, user_id: Integer, login_time: 
> LocalDateTime]
> ```
> After some digging, I found the root cause might be that since FLINK-14645 
> timestamp fields defined via TableFactory had been bridged to LocalDateTime, 
> but timestamp functions are still backed by java.sql.Timestamp.
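
A sketch showing the two runtime representations side by side (assuming the 
1.10 DataTypes API, where bridgedTo selects the conversion class):

{code:java}
import java.sql.Timestamp;

import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.types.DataType;

public class TimestampBridgingDemo {
    public static void main(String[] args) {
        // TIMESTAMP columns defined via a TableFactory now default to LocalDateTime...
        DataType byDefault = DataTypes.TIMESTAMP(3);
        // ...while the legacy timestamp functions still produce java.sql.Timestamp.
        DataType legacy = DataTypes.TIMESTAMP(3).bridgedTo(Timestamp.class);

        System.out.println(byDefault.getConversionClass()); // java.time.LocalDateTime
        System.out.println(legacy.getConversionClass());    // java.sql.Timestamp
        // The sink validation compares these representations, hence the mismatch.
    }
}
{code}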



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-16771) NPE when filtering by decimal column

2020-03-26 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-16771:


Assignee: Rui Li

> NPE when filtering by decimal column
> 
>
> Key: FLINK-16771
> URL: https://issues.apache.org/jira/browse/FLINK-16771
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The following SQL can trigger the issue:
> {code}
> create table foo (d decimal(15,8));
> insert into foo values (cast('123.123' as decimal(15,8)));
> select * from foo where d>cast('123456789.123' as decimal(15,8));
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi commented on a change in pull request #11511: [FLINK-16771][table-planner-blink] NPE when filtering by decimal column

2020-03-26 Thread GitBox
JingsongLi commented on a change in pull request #11511: 
[FLINK-16771][table-planner-blink] NPE when filtering by decimal column
URL: https://github.com/apache/flink/pull/11511#discussion_r399007833
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/api/TableEnvironmentITCase.scala
 ##
 @@ -273,6 +276,18 @@ class TableEnvironmentITCase(tableEnvName: String, 
isStreaming: Boolean) {
 tableEnv.execute("insert dest2")
   }
 
+  @Test
+  def testCompareDecimalColWithNull(): Unit = {
 
 Review comment:
Can this be reproduced by an expression test?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

