[jira] [Closed] (FLINK-17385) Fix precision problem when converting JDBC numeric into Flink decimal type

2020-05-06 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li closed FLINK-17385.

Resolution: Fixed

master: d33fb620fdb09a4755dd6513f96f0e191da2fcda

> Fix precision problem when converting JDBC numeric into Flink decimal type
> 
>
> Key: FLINK-17385
> URL: https://issues.apache.org/jira/browse/FLINK-17385
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Table SQL / Ecosystem
>Reporter: Jark Wu
>Assignee: Flavio Pompermaier
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> This is reported in the mailing list: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/JDBC-error-on-numeric-conversion-because-of-DecimalType-MIN-PRECISION-td34668.html
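The underlying class of problem can be sketched in plain Java: a JDBC driver may report a NUMERIC precision outside the [1, 38] range that Flink's DecimalType accepts (Postgres, for example, reports precision 0 for an unconstrained NUMERIC). The helper below is illustrative only, not Flink's actual fix:

```java
public class DecimalPrecisionMapping {
    // Flink's DecimalType bounds: DecimalType.MIN_PRECISION = 1,
    // DecimalType.MAX_PRECISION = 38.
    static final int MIN_PRECISION = 1;
    static final int MAX_PRECISION = 38;

    // Clamp a JDBC-reported precision/scale into Flink's valid range.
    // An out-of-range precision (e.g. 0 for unconstrained NUMERIC) is what
    // triggered the reported failure.
    static int[] mapPrecision(int jdbcPrecision, int jdbcScale) {
        int precision = Math.min(Math.max(jdbcPrecision, MIN_PRECISION), MAX_PRECISION);
        int scale = Math.min(Math.max(jdbcScale, 0), precision);
        return new int[] {precision, scale};
    }
}
```

A real connector would additionally decide whether an out-of-range source column should fail fast instead of silently clamping.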



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17385) Fix precision problem when converting JDBC numeric into Flink decimal type

2020-05-06 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-17385:
-
Fix Version/s: 1.11.0

> Fix precision problem when converting JDBC numeric into Flink decimal type
> 
>
> Key: FLINK-17385
> URL: https://issues.apache.org/jira/browse/FLINK-17385
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Table SQL / Ecosystem
>Reporter: Jark Wu
>Assignee: Flavio Pompermaier
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> This is reported in the mailing list: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/JDBC-error-on-numeric-conversion-because-of-DecimalType-MIN-PRECISION-td34668.html





[jira] [Closed] (FLINK-17392) enable configuring minicluster resources in Flink SQL in IDE

2020-04-29 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li closed FLINK-17392.

  Assignee: (was: Kurt Young)
Resolution: Invalid

> enable configuring minicluster resources in Flink SQL in IDE
> 
>
> Key: FLINK-17392
> URL: https://issues.apache.org/jira/browse/FLINK-17392
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.11.0
>Reporter: Bowen Li
>Priority: Major
>
> It's a very common case that users who want to learn and test Flink SQL will 
> try to run a SQL job in an IDE like IntelliJ, with the Flink minicluster. 
> Currently it's fine to do so with a simple job requiring only one task slot, 
> which is the minicluster's default resource configuration.
> However, users cannot run even a slightly more complicated job, since they 
> cannot configure the minicluster's task slots through Flink SQL, e.g. a 
> single-parallelism job that requires a shuffle. This limitation has been very 
> frustrating for new users.
> There are two solutions to this problem:
> - in the minicluster, if the job has single parallelism, chain all operators 
> together
> - enable configuring the minicluster in Flink SQL in the IDE.
> The latter feels more appropriate.
> Expected: users can configure minicluster resources via either SQL ("set 
> ...=...") or TableEnvironment ("tEnv.setMiniclusterResources(..., ...)"). 
> [~jark] [~lzljs3620320]





[jira] [Assigned] (FLINK-17284) Support serial field type in PostgresCatalog

2020-04-27 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li reassigned FLINK-17284:


Assignee: Flavio Pompermaier

> Support serial field type in PostgresCatalog
> 
>
> Key: FLINK-17284
> URL: https://issues.apache.org/jira/browse/FLINK-17284
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Affects Versions: 1.11.0
>Reporter: Flavio Pompermaier
>Assignee: Flavio Pompermaier
>Priority: Major
>  Labels: postgres, pull-request-available
>
> In the current version of PostgresCatalog, the serial type is not handled, 
> even though it can be safely mapped to INT.
> See an example at  https://www.postgresqltutorial.com/postgresql-create-table/
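For illustration, Postgres's serial is an ordinary int4 column with a sequence-backed default, so a catalog can treat it as INT. A hypothetical sketch of the name mapping (illustrative helper, not PostgresCatalog's actual code):

```java
public class PgTypeMapping {
    // Map a Postgres type name to a Flink SQL type name.
    // serial/bigserial/smallserial are plain integer columns whose values
    // come from a sequence default, so they map to the integer types.
    static String toFlinkType(String pgType) {
        switch (pgType) {
            case "serial":
            case "int4":
                return "INT";
            case "bigserial":
            case "int8":
                return "BIGINT";
            case "smallserial":
            case "int2":
                return "SMALLINT";
            default:
                throw new UnsupportedOperationException("unmapped type: " + pgType);
        }
    }
}
```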





[jira] [Assigned] (FLINK-17356) Properly set constraints (PK and UNIQUE)

2020-04-27 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li reassigned FLINK-17356:


Assignee: Flavio Pompermaier

> Properly set constraints (PK and UNIQUE)
> 
>
> Key: FLINK-17356
> URL: https://issues.apache.org/jira/browse/FLINK-17356
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC, Table SQL / Ecosystem
>Reporter: Flavio Pompermaier
>Assignee: Flavio Pompermaier
>Priority: Major
>  Labels: pull-request-available
>
> At the moment the PostgresCatalog does not create field constraints (currently 
> there are only UNIQUE and PRIMARY_KEY in the TableSchema; would it be worth 
> adding NOT_NULL as well?)





[jira] [Closed] (FLINK-16473) add documentation for JDBCCatalog and PostgresCatalog

2020-04-27 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li closed FLINK-16473.

Resolution: Fixed

master: f6f2fb56c3ee1d801f0db62fed357f918d40d4d0

> add documentation for JDBCCatalog and PostgresCatalog
> -
>
> Key: FLINK-16473
> URL: https://issues.apache.org/jira/browse/FLINK-16473
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC, Documentation
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[jira] [Closed] (FLINK-17333) add doc for 'create catalog' ddl

2020-04-27 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li closed FLINK-17333.

Resolution: Fixed

> add doc for 'create catalog' ddl
> 
>
> Key: FLINK-17333
> URL: https://issues.apache.org/jira/browse/FLINK-17333
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table SQL / API
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>






[jira] [Commented] (FLINK-17392) enable configuring minicluster resources in Flink SQL in IDE

2020-04-27 Thread Bowen Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17093931#comment-17093931
 ] 

Bowen Li commented on FLINK-17392:
--

Wait, maybe I misunderstood the problem. I thought it was due to resource 
scheduling, but I might be wrong. Let me dig into it more.

> enable configuring minicluster resources in Flink SQL in IDE
> 
>
> Key: FLINK-17392
> URL: https://issues.apache.org/jira/browse/FLINK-17392
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.11.0
>Reporter: Bowen Li
>Assignee: Kurt Young
>Priority: Major
>
> It's a very common case that users who want to learn and test Flink SQL will 
> try to run a SQL job in an IDE like IntelliJ, with the Flink minicluster. 
> Currently it's fine to do so with a simple job requiring only one task slot, 
> which is the minicluster's default resource configuration.
> However, users cannot run even a slightly more complicated job, since they 
> cannot configure the minicluster's task slots through Flink SQL, e.g. a 
> single-parallelism job that requires a shuffle. This limitation has been very 
> frustrating for new users.
> There are two solutions to this problem:
> - in the minicluster, if the job has single parallelism, chain all operators 
> together
> - enable configuring the minicluster in Flink SQL in the IDE.
> The latter feels more appropriate.
> Expected: users can configure minicluster resources via either SQL ("set 
> ...=...") or TableEnvironment ("tEnv.setMiniclusterResources(..., ...)"). 
> [~jark] [~lzljs3620320]





[jira] [Created] (FLINK-17392) enable configuring minicluster in Flink SQL in IDE

2020-04-26 Thread Bowen Li (Jira)
Bowen Li created FLINK-17392:


 Summary: enable configuring minicluster in Flink SQL in IDE
 Key: FLINK-17392
 URL: https://issues.apache.org/jira/browse/FLINK-17392
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API
Affects Versions: 1.11.0
Reporter: Bowen Li
Assignee: Kurt Young
 Fix For: 1.11.0


It's a very common case that users who want to learn and test Flink SQL will 
try to run a SQL job in an IDE like IntelliJ, with the Flink minicluster. 
Currently it's fine to do so with a simple job requiring only one task slot, 
which is the minicluster's default resource configuration.

However, users cannot run even a slightly more complicated job, since they 
cannot configure the minicluster's task slots through Flink SQL, e.g. a 
single-parallelism job that requires a shuffle. This limitation has been very 
frustrating for new users.

There are two solutions to this problem:
- in the minicluster, if the job has single parallelism, chain all operators 
together
- enable configuring the minicluster in Flink SQL in the IDE.

The latter feels more appropriate.

Expected: users can configure minicluster resources via either SQL ("set 
...=...") or TableEnvironment ("tEnv.setMiniclusterResources(..., ...)"). 

[~jark] [~lzljs3620320]
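In the meantime, a sketch of one possible workaround, assuming the DataStream API's local-environment factory (note that `setMiniclusterResources` from the proposal above does not exist): create the local environment with an explicit Configuration so the embedded minicluster gets more task slots.

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class LocalJobWithSlots {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Give the embedded minicluster more task slots than the default of 1,
        // so multi-stage (shuffling) jobs can be scheduled locally.
        conf.setInteger("taskmanager.numberOfTaskSlots", 4);
        // Local environment backed by a minicluster using this configuration.
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.createLocalEnvironment(4, conf);
        // ... build a Table environment on top of env and run the SQL job ...
        env.fromElements(1, 2, 3).print();
        env.execute("local-sql-sandbox");
    }
}
```

This is a configuration sketch for the IDE use case, not the SQL-level support requested in the ticket.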






[jira] [Updated] (FLINK-17392) enable configuring minicluster resources in Flink SQL in IDE

2020-04-26 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-17392:
-
Summary: enable configuring minicluster resources in Flink SQL in IDE  
(was: enable configuring minicluster in Flink SQL in IDE)

> enable configuring minicluster resources in Flink SQL in IDE
> 
>
> Key: FLINK-17392
> URL: https://issues.apache.org/jira/browse/FLINK-17392
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.11.0
>Reporter: Bowen Li
>Assignee: Kurt Young
>Priority: Major
> Fix For: 1.11.0
>
>
> It's a very common case that users who want to learn and test Flink SQL will 
> try to run a SQL job in an IDE like IntelliJ, with the Flink minicluster. 
> Currently it's fine to do so with a simple job requiring only one task slot, 
> which is the minicluster's default resource configuration.
> However, users cannot run even a slightly more complicated job, since they 
> cannot configure the minicluster's task slots through Flink SQL, e.g. a 
> single-parallelism job that requires a shuffle. This limitation has been very 
> frustrating for new users.
> There are two solutions to this problem:
> - in the minicluster, if the job has single parallelism, chain all operators 
> together
> - enable configuring the minicluster in Flink SQL in the IDE.
> The latter feels more appropriate.
> Expected: users can configure minicluster resources via either SQL ("set 
> ...=...") or TableEnvironment ("tEnv.setMiniclusterResources(..., ...)"). 
> [~jark] [~lzljs3620320]





[jira] [Assigned] (FLINK-17375) Clean up CI system related scripts

2020-04-24 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li reassigned FLINK-17375:


Assignee: Robert Metzger  (was: Bowen Li)

> Clean up CI system related scripts
> --
>
> Key: FLINK-17375
> URL: https://issues.apache.org/jira/browse/FLINK-17375
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System, Build System / Azure Pipelines
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>
> Once we have only one CI system in place for Flink (again), it makes sense to 
> clean up the available scripts:
> - Separate "Azure-specific" from "CI-generic" files (names of files, methods, 
> build profiles)
> - separate "log handling" from "build timeout" in "travis_watchdog"
> - remove workarounds needed because of Travis limitations





[jira] [Commented] (FLINK-17375) Clean up CI system related scripts

2020-04-24 Thread Bowen Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091979#comment-17091979
 ] 

Bowen Li commented on FLINK-17375:
--

yep, not sure how it happened. Back to you now :)

> Clean up CI system related scripts
> --
>
> Key: FLINK-17375
> URL: https://issues.apache.org/jira/browse/FLINK-17375
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System, Build System / Azure Pipelines
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>
> Once we have only one CI system in place for Flink (again), it makes sense to 
> clean up the available scripts:
> - Separate "Azure-specific" from "CI-generic" files (names of files, methods, 
> build profiles)
> - separate "log handling" from "build timeout" in "travis_watchdog"
> - remove workarounds needed because of Travis limitations





[jira] [Commented] (FLINK-17333) add doc for 'create catalog' ddl

2020-04-24 Thread Bowen Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091977#comment-17091977
 ] 

Bowen Li commented on FLINK-17333:
--

master: f6f2fb56c3ee1d801f0db62fed357f918d40d4d0

> add doc for 'create catalog' ddl
> 
>
> Key: FLINK-17333
> URL: https://issues.apache.org/jira/browse/FLINK-17333
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table SQL / API
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>






[jira] [Closed] (FLINK-17175) StringUtils.arrayToString() should consider Object[] lastly

2020-04-24 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li closed FLINK-17175.

Resolution: Fixed

master: 6128bd1060e299d798dc486385991d6142bc7d0d

> StringUtils.arrayToString() should consider Object[] lastly
> ---
>
> Key: FLINK-17175
> URL: https://issues.apache.org/jira/browse/FLINK-17175
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.11.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.11.0
>
>






[jira] [Closed] (FLINK-16812) support array types in PostgresRowConverter

2020-04-24 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li closed FLINK-16812.

Resolution: Fixed

master: 4121292fbb63dacef29245a2234da68fa499efa6

> support array types in PostgresRowConverter
> ---
>
> Key: FLINK-16812
> URL: https://issues.apache.org/jira/browse/FLINK-16812
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> per https://issues.apache.org/jira/browse/FLINK-16811





[jira] [Commented] (FLINK-16743) Introduce datagen, print, blackhole connectors

2020-04-24 Thread Bowen Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091758#comment-17091758
 ] 

Bowen Li commented on FLINK-16743:
--

Reviving this thread: these will be very useful for users. Hopefully we can get 
them into 1.11.

> Introduce datagen, print, blackhole connectors
> --
>
> Key: FLINK-16743
> URL: https://issues.apache.org/jira/browse/FLINK-16743
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Discussion: 
> [http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Introduce-TableFactory-for-StatefulSequenceSource-td39116.html]
> Introduce:
>  * DataGeneratorSource
>  * DataGenTableSourceFactory
>  * PrintTableSinkFactory
>  * BlackHoleTableSinkFactory





[jira] [Assigned] (FLINK-17375) Clean up CI system related scripts

2020-04-24 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li reassigned FLINK-17375:


Assignee: Bowen Li  (was: Robert Metzger)

> Clean up CI system related scripts
> --
>
> Key: FLINK-17375
> URL: https://issues.apache.org/jira/browse/FLINK-17375
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System, Build System / Azure Pipelines
>Reporter: Robert Metzger
>Assignee: Bowen Li
>Priority: Major
>
> Once we have only one CI system in place for Flink (again), it makes sense to 
> clean up the available scripts:
> - Separate "Azure-specific" from "CI-generic" files (names of files, methods, 
> build profiles)
> - separate "log handling" from "build timeout" in "travis_watchdog"
> - remove workarounds needed because of Travis limitations





[jira] [Updated] (FLINK-17333) add doc for 'create catalog' ddl

2020-04-22 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-17333:
-
Summary: add doc for 'create catalog' ddl  (was: add doc for "create ddl")

> add doc for 'create catalog' ddl
> 
>
> Key: FLINK-17333
> URL: https://issues.apache.org/jira/browse/FLINK-17333
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table SQL / API
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>






[jira] [Created] (FLINK-17333) add doc for "create ddl"

2020-04-22 Thread Bowen Li (Jira)
Bowen Li created FLINK-17333:


 Summary: add doc for "create ddl"
 Key: FLINK-17333
 URL: https://issues.apache.org/jira/browse/FLINK-17333
 Project: Flink
  Issue Type: Improvement
  Components: Documentation, Table SQL / API
Reporter: Bowen Li
Assignee: Bowen Li
 Fix For: 1.11.0








[jira] [Updated] (FLINK-16473) add documentation for JDBCCatalog and PostgresCatalog

2020-04-18 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-16473:
-
Summary: add documentation for JDBCCatalog and PostgresCatalog  (was: add 
documentation for PostgresJDBCCatalog)

> add documentation for JDBCCatalog and PostgresCatalog
> -
>
> Key: FLINK-16473
> URL: https://issues.apache.org/jira/browse/FLINK-16473
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC, Documentation
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.11.0
>
>






[jira] [Created] (FLINK-17175) StringUtils.arrayToString() should consider Object[] lastly

2020-04-15 Thread Bowen Li (Jira)
Bowen Li created FLINK-17175:


 Summary: StringUtils.arrayToString() should consider Object[] 
lastly
 Key: FLINK-17175
 URL: https://issues.apache.org/jira/browse/FLINK-17175
 Project: Flink
  Issue Type: Bug
  Components: API / Core
Affects Versions: 1.11.0
Reporter: Bowen Li
Assignee: Bowen Li
 Fix For: 1.11.0








[jira] [Updated] (FLINK-16812) support array types in PostgresRowConverter

2020-04-15 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-16812:
-
Summary: support array types in PostgresRowConverter  (was: introduce 
Postgres row converter to PostgresDialect)

> support array types in PostgresRowConverter
> ---
>
> Key: FLINK-16812
> URL: https://issues.apache.org/jira/browse/FLINK-16812
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.11.0
>
>
> per https://issues.apache.org/jira/browse/FLINK-16811





[jira] [Closed] (FLINK-16820) support reading timestamp, date, and time in JDBCTableSource

2020-04-15 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li closed FLINK-16820.

Release Note: JDBCTableSource now supports reading timestamp, date, 
and time types
  Resolution: Fixed

master: aa9bcc15f36676a498944489997239aca5d5093b

> support reading timestamp, date, and time in JDBCTableSource
> 
>
> Key: FLINK-16820
> URL: https://issues.apache.org/jira/browse/FLINK-16820
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>






[jira] [Closed] (FLINK-16813) JDBCInputFormat doesn't correctly map Short

2020-04-15 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li closed FLINK-16813.

Resolution: Fixed

master: 3bfeea1aed10586376835ed68dd0d31bdafe5d0f

>  JDBCInputFormat doesn't correctly map Short
> 
>
> Key: FLINK-16813
> URL: https://issues.apache.org/jira/browse/FLINK-16813
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.10.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, when JDBCInputFormat converts a JDBC result set row to a Flink Row, 
> it doesn't check the type returned from the JDBC result set.
> A Short read from a JDBC result set may actually come back as an Integer.
>  
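A defensive way to handle this in plain Java is to narrow through java.lang.Number rather than casting to Short directly; a minimal sketch with an illustrative helper name:

```java
public class JdbcShortMapping {
    // Some JDBC drivers return Integer for SMALLINT columns, so a direct
    // (Short) cast throws ClassCastException; narrowing via Number works
    // for any boxed numeric type the driver happens to hand back.
    static Short asShort(Object jdbcValue) {
        if (jdbcValue == null) {
            return null;
        }
        return ((Number) jdbcValue).shortValue();
    }
}
```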





[jira] [Closed] (FLINK-16815) add e2e tests for reading primitive data types from postgres with JDBCTableSource and PostgresCatalog

2020-04-15 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li closed FLINK-16815.

Resolution: Fixed

master: 978d7e9f6f21eeaf36f9995872bb1c32b09a49ee

> add e2e tests for reading primitive data types from postgres with 
> JDBCTableSource and PostgresCatalog
> -
>
> Key: FLINK-16815
> URL: https://issues.apache.org/jira/browse/FLINK-16815
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.11.0
>
>






[jira] [Updated] (FLINK-16820) support reading timestamp, date, and time in JDBCTableSource

2020-04-07 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-16820:
-
Summary: support reading timestamp, date, and time in JDBCTableSource  
(was: support reading array of timestamp, date, and time in JDBCTableSource)

> support reading timestamp, date, and time in JDBCTableSource
> 
>
> Key: FLINK-16820
> URL: https://issues.apache.org/jira/browse/FLINK-16820
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.11.0
>
>






[jira] [Created] (FLINK-17037) add e2e tests for reading array data types from postgres with JDBCTableSource and PostgresCatalog

2020-04-07 Thread Bowen Li (Jira)
Bowen Li created FLINK-17037:


 Summary: add e2e tests for reading array data types from postgres 
with JDBCTableSource and PostgresCatalog
 Key: FLINK-17037
 URL: https://issues.apache.org/jira/browse/FLINK-17037
 Project: Flink
  Issue Type: Sub-task
Reporter: Bowen Li








[jira] [Updated] (FLINK-16815) add e2e tests for reading primitive data types from postgres with JDBCTableSource and PostgresCatalog

2020-04-07 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-16815:
-
Summary: add e2e tests for reading primitive data types from postgres with 
JDBCTableSource and PostgresCatalog  (was: add e2e tests for reading from 
postgres with JDBCTableSource and PostgresCatalog)

> add e2e tests for reading primitive data types from postgres with 
> JDBCTableSource and PostgresCatalog
> -
>
> Key: FLINK-16815
> URL: https://issues.apache.org/jira/browse/FLINK-16815
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.11.0
>
>






[jira] [Closed] (FLINK-16811) introduce row converter API to JDBCDialect

2020-04-07 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li closed FLINK-16811.

Resolution: Fixed

> introduce row converter API to JDBCDialect
> --
>
> Key: FLINK-16811
> URL: https://issues.apache.org/jira/browse/FLINK-16811
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We may need to introduce a JDBCRowConverter interface to convert a DB-specific 
> row from JDBC into a Flink row.
> E.g. for Postgres, the array returned from JDBC is a PgArray, not a Java 
> array, so we need to do such a conversion in the JDBCRowConverter.
>  
> Databases should implement their own row converters.
>  
>  
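A minimal sketch of what such a per-dialect converter could look like (the interface and class names here are illustrative, not the API that was actually merged); the Postgres case unwraps java.sql.Array values, which PgArray implements:

```java
import java.sql.Array;
import java.sql.SQLException;

// Illustrative converter contract: one implementation per database dialect.
interface JdbcRowConverter {
    Object convertField(Object jdbcValue);
}

// Postgres-style implementation: unwrap java.sql.Array (e.g. PgArray)
// into a plain Java array; pass all other values through unchanged.
class PostgresRowConverter implements JdbcRowConverter {
    @Override
    public Object convertField(Object jdbcValue) {
        if (jdbcValue instanceof Array) {
            try {
                return ((Array) jdbcValue).getArray();
            } catch (SQLException e) {
                throw new RuntimeException("failed to unwrap SQL array", e);
            }
        }
        return jdbcValue;
    }
}
```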





[jira] [Closed] (FLINK-16817) StringUtils.arrayToString() doesn't convert array of byte array correctly

2020-04-07 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li closed FLINK-16817.

Resolution: Fixed

> StringUtils.arrayToString() doesn't convert array of byte array correctly
> -
>
> Key: FLINK-16817
> URL: https://issues.apache.org/jira/browse/FLINK-16817
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.11.0
>
>






[jira] [Commented] (FLINK-16817) StringUtils.arrayToString() doesn't convert array of byte array correctly

2020-04-07 Thread Bowen Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17077570#comment-17077570
 ] 

Bowen Li commented on FLINK-16817:
--

master: 989bc02518d1be05d4f2260d9c4a67098df19063

> StringUtils.arrayToString() doesn't convert array of byte array correctly
> -
>
> Key: FLINK-16817
> URL: https://issues.apache.org/jira/browse/FLINK-16817
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.11.0
>
>






[jira] [Commented] (FLINK-16811) introduce row converter API to JDBCDialect

2020-04-07 Thread Bowen Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17077569#comment-17077569
 ] 

Bowen Li commented on FLINK-16811:
--

master: 3fd568a635ae23e530babe26c98425266e975663

> introduce row converter API to JDBCDialect
> --
>
> Key: FLINK-16811
> URL: https://issues.apache.org/jira/browse/FLINK-16811
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We may need to introduce a JDBCRowConverter interface to convert a DB-specific 
> row from JDBC into a Flink row.
> E.g. for Postgres, the array returned from JDBC is a PgArray, not a Java 
> array, so we need to do such a conversion in the JDBCRowConverter.
>  
> Databases should implement their own row converters.
>  
>  





[jira] [Assigned] (FLINK-16772) Bump derby to 10.12.1.1+ or exclude it

2020-03-31 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li reassigned FLINK-16772:


Assignee: Jingsong Lee

> Bump derby to 10.12.1.1+ or exclude it
> --
>
> Key: FLINK-16772
> URL: https://issues.apache.org/jira/browse/FLINK-16772
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive
>Affects Versions: 1.10.0
>Reporter: Chesnay Schepler
>Assignee: Jingsong Lee
>Priority: Blocker
> Fix For: 1.10.1, 1.11.0
>
>
> {{hive-metastore}} depends on derby 10.10/10.4, which are vulnerable to 
> [CVE-2015-1832|https://nvd.nist.gov/vuln/detail/CVE-2015-1832].
> We should bump the version to at least 10.12.1.1 .
> Assuming that derby is only required for the server and not the client we 
> could potentially even exclude it.
> [~phoenixjiangnan] Can you help with this?





[jira] [Created] (FLINK-16820) support reading array of timestamp, date, and time in JDBCTableSource

2020-03-26 Thread Bowen Li (Jira)
Bowen Li created FLINK-16820:


 Summary: support reading array of timestamp, date, and time in 
JDBCTableSource
 Key: FLINK-16820
 URL: https://issues.apache.org/jira/browse/FLINK-16820
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / JDBC
Reporter: Bowen Li
Assignee: Bowen Li
 Fix For: 1.11.0








[jira] [Closed] (FLINK-16814) StringUtils.arrayToString() doesn't convert byte[] correctly

2020-03-26 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li closed FLINK-16814.

Resolution: Invalid

> StringUtils.arrayToString() doesn't convert byte[] correctly
> 
>
> Key: FLINK-16814
> URL: https://issues.apache.org/jira/browse/FLINK-16814
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.10.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> StringUtils.arrayToString() doesn't convert byte[] correctly. It uses 
> Arrays.toString(), but it should construct a new String from the byte[] instead.
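A minimal illustration of the mismatch described above (plain Java, not Flink's actual StringUtils code):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class ByteArrayToString {
    public static void main(String[] args) {
        byte[] bytes = "flink".getBytes(StandardCharsets.UTF_8);
        // Arrays.toString() renders the numeric byte values, not the text:
        System.out.println(Arrays.toString(bytes)); // [102, 108, 105, 110, 107]
        // Constructing a String from the bytes yields the intended text:
        System.out.println(new String(bytes, StandardCharsets.UTF_8)); // flink
    }
}
```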



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16817) StringUtils.arrayToString() doesn't convert array of byte array correctly

2020-03-26 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-16817:
-
Summary: StringUtils.arrayToString() doesn't convert array of byte array 
correctly  (was: StringUtils.arrayToString() doesn't convert byte[][] correctly)

> StringUtils.arrayToString() doesn't convert array of byte array correctly
> -
>
> Key: FLINK-16817
> URL: https://issues.apache.org/jira/browse/FLINK-16817
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.11.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16817) StringUtils.arrayToString() doesn't convert byte[][] correctly

2020-03-26 Thread Bowen Li (Jira)
Bowen Li created FLINK-16817:


 Summary: StringUtils.arrayToString() doesn't convert byte[][] 
correctly
 Key: FLINK-16817
 URL: https://issues.apache.org/jira/browse/FLINK-16817
 Project: Flink
  Issue Type: Bug
  Components: API / Core
Reporter: Bowen Li
Assignee: Bowen Li
 Fix For: 1.11.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16816) planner doesn't parse timestamp and date array correctly

2020-03-26 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-16816:
-
Description: 
planner doesn't parse timestamp and date array correctly.

 

Repro: 

In an input format's (like JDBCInputFormat) \{{nextRecord(Row)}} API:
 # when setting a timestamp datum as java.sql.Timestamp/Date, it works fine
 # when setting an array of timestamp datums as java.sql.Timestamp[]/Date[], it 
breaks with the stack trace below

 
{code:java}
/Caused by: java.lang.ClassCastException: java.sql.Timestamp cannot be cast to 
java.time.LocalDateTime
at 
org.apache.flink.table.dataformat.DataFormatConverters$LocalDateTimeConverter.toInternalImpl(DataFormatConverters.java:748)
at 
org.apache.flink.table.dataformat.DataFormatConverters$ObjectArrayConverter.toBinaryArray(DataFormatConverters.java:1110)
at 
org.apache.flink.table.dataformat.DataFormatConverters$ObjectArrayConverter.toInternalImpl(DataFormatConverters.java:1093)
at 
org.apache.flink.table.dataformat.DataFormatConverters$ObjectArrayConverter.toInternalImpl(DataFormatConverters.java:1068)
at 
org.apache.flink.table.dataformat.DataFormatConverters$DataFormatConverter.toInternal(DataFormatConverters.java:344)
at 
org.apache.flink.table.dataformat.DataFormatConverters$RowConverter.toInternalImpl(DataFormatConverters.java:1377)
at 
org.apache.flink.table.dataformat.DataFormatConverters$RowConverter.toInternalImpl(DataFormatConverters.java:1365)
at 
org.apache.flink.table.dataformat.DataFormatConverters$DataFormatConverter.toInternal(DataFormatConverters.java:344)
at SourceConversion$1.processElement(Unknown Source)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:714)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:689)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:669)
at 
org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:52)
at 
org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:30)
at 
org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
at 
org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction.run(InputFormatSourceFunction.java:93)
at 
org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100)
at 
org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63)
at 
org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:208)
{code}

It seems that the planner runtime handles java.sql.Timestamp differently in 
these two cases.

  was:
planner doesn't parse timestamp and date array correctly.

 

Repro: 

In a input format (like JBDCInputFormat)'s \{{nextRecord(Row)}} API
 # when setting a timestamp datum as java.sql.Timestamp, it works fine
 # when setting an array of timestamp datums as java.sql.Timestamp[], it breaks 
and below is the strack trace

 
{code:java}
/Caused by: java.lang.ClassCastException: java.sql.Timestamp cannot be cast to 
java.time.LocalDateTime
at 
org.apache.flink.table.dataformat.DataFormatConverters$LocalDateTimeConverter.toInternalImpl(DataFormatConverters.java:748)
at 
org.apache.flink.table.dataformat.DataFormatConverters$ObjectArrayConverter.toBinaryArray(DataFormatConverters.java:1110)
at 
org.apache.flink.table.dataformat.DataFormatConverters$ObjectArrayConverter.toInternalImpl(DataFormatConverters.java:1093)
at 
org.apache.flink.table.dataformat.DataFormatConverters$ObjectArrayConverter.toInternalImpl(DataFormatConverters.java:1068)
at 
org.apache.flink.table.dataformat.DataFormatConverters$DataFormatConverter.toInternal(DataFormatConverters.java:344)
at 
org.apache.flink.table.dataformat.DataFormatConverters$RowConverter.toInternalImpl(DataFormatConverters.java:1377)
at 
org.apache.flink.table.dataformat.DataFormatConverters$RowConverter.toInternalImpl(DataFormatConverters.java:1365)
at 
org.apache.flink.table.dataformat.DataFormatConverters$DataFormatConverter.toInternal(DataFormatConverters.java:344)
at SourceConversion$1.processElement(Unknown Source)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:714)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:689)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:669)
at 

[jira] [Updated] (FLINK-16816) planner doesn't parse timestamp and date array correctly

2020-03-26 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-16816:
-
Summary: planner doesn't parse timestamp and date array correctly  (was: 
planner doesn't parse timestamp array correctly)

> planner doesn't parse timestamp and date array correctly
> 
>
> Key: FLINK-16816
> URL: https://issues.apache.org/jira/browse/FLINK-16816
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Table SQL / Runtime
>Reporter: Bowen Li
>Assignee: Kurt Young
>Priority: Major
> Fix For: 1.11.0
>
>
> planner doesn't parse timestamp array correctly.
>  
> Repro: 
> In an input format's (like JDBCInputFormat) \{{nextRecord(Row)}} API:
>  # when setting a timestamp datum as java.sql.Timestamp, it works fine
>  # when setting an array of timestamp datums as java.sql.Timestamp[], it 
> breaks with the stack trace below
>  
> {code:java}
> /Caused by: java.lang.ClassCastException: java.sql.Timestamp cannot be cast 
> to java.time.LocalDateTime
>   at 
> org.apache.flink.table.dataformat.DataFormatConverters$LocalDateTimeConverter.toInternalImpl(DataFormatConverters.java:748)
>   at 
> org.apache.flink.table.dataformat.DataFormatConverters$ObjectArrayConverter.toBinaryArray(DataFormatConverters.java:1110)
>   at 
> org.apache.flink.table.dataformat.DataFormatConverters$ObjectArrayConverter.toInternalImpl(DataFormatConverters.java:1093)
>   at 
> org.apache.flink.table.dataformat.DataFormatConverters$ObjectArrayConverter.toInternalImpl(DataFormatConverters.java:1068)
>   at 
> org.apache.flink.table.dataformat.DataFormatConverters$DataFormatConverter.toInternal(DataFormatConverters.java:344)
>   at 
> org.apache.flink.table.dataformat.DataFormatConverters$RowConverter.toInternalImpl(DataFormatConverters.java:1377)
>   at 
> org.apache.flink.table.dataformat.DataFormatConverters$RowConverter.toInternalImpl(DataFormatConverters.java:1365)
>   at 
> org.apache.flink.table.dataformat.DataFormatConverters$DataFormatConverter.toInternal(DataFormatConverters.java:344)
>   at SourceConversion$1.processElement(Unknown Source)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:714)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:689)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:669)
>   at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:52)
>   at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:30)
>   at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
>   at 
> org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction.run(InputFormatSourceFunction.java:93)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100)
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63)
>   at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:208)
> {code}
> It seems that the planner runtime handles java.sql.Timestamp differently in 
> these two cases.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16816) planner doesn't parse timestamp and date array correctly

2020-03-26 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-16816:
-
Description: 
planner doesn't parse timestamp and date array correctly.

 

Repro: 

In an input format's (like JDBCInputFormat) \{{nextRecord(Row)}} API:
 # when setting a timestamp datum as java.sql.Timestamp, it works fine
 # when setting an array of timestamp datums as java.sql.Timestamp[], it 
breaks with the stack trace below

 
{code:java}
/Caused by: java.lang.ClassCastException: java.sql.Timestamp cannot be cast to 
java.time.LocalDateTime
at 
org.apache.flink.table.dataformat.DataFormatConverters$LocalDateTimeConverter.toInternalImpl(DataFormatConverters.java:748)
at 
org.apache.flink.table.dataformat.DataFormatConverters$ObjectArrayConverter.toBinaryArray(DataFormatConverters.java:1110)
at 
org.apache.flink.table.dataformat.DataFormatConverters$ObjectArrayConverter.toInternalImpl(DataFormatConverters.java:1093)
at 
org.apache.flink.table.dataformat.DataFormatConverters$ObjectArrayConverter.toInternalImpl(DataFormatConverters.java:1068)
at 
org.apache.flink.table.dataformat.DataFormatConverters$DataFormatConverter.toInternal(DataFormatConverters.java:344)
at 
org.apache.flink.table.dataformat.DataFormatConverters$RowConverter.toInternalImpl(DataFormatConverters.java:1377)
at 
org.apache.flink.table.dataformat.DataFormatConverters$RowConverter.toInternalImpl(DataFormatConverters.java:1365)
at 
org.apache.flink.table.dataformat.DataFormatConverters$DataFormatConverter.toInternal(DataFormatConverters.java:344)
at SourceConversion$1.processElement(Unknown Source)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:714)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:689)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:669)
at 
org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:52)
at 
org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:30)
at 
org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
at 
org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction.run(InputFormatSourceFunction.java:93)
at 
org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100)
at 
org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63)
at 
org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:208)
{code}

It seems that the planner runtime handles java.sql.Timestamp differently in 
these two cases.

  was:
planner doesn't parse timestamp array correctly.

 

Repro: 

In a input format (like JBDCInputFormat)'s \{{nextRecord(Row)}} API
 # when setting a timestamp datum as java.sql.Timestamp, it works fine
 # when setting an array of timestamp datums as java.sql.Timestamp[], it breaks 
and below is the strack trace

 
{code:java}
/Caused by: java.lang.ClassCastException: java.sql.Timestamp cannot be cast to 
java.time.LocalDateTime
at 
org.apache.flink.table.dataformat.DataFormatConverters$LocalDateTimeConverter.toInternalImpl(DataFormatConverters.java:748)
at 
org.apache.flink.table.dataformat.DataFormatConverters$ObjectArrayConverter.toBinaryArray(DataFormatConverters.java:1110)
at 
org.apache.flink.table.dataformat.DataFormatConverters$ObjectArrayConverter.toInternalImpl(DataFormatConverters.java:1093)
at 
org.apache.flink.table.dataformat.DataFormatConverters$ObjectArrayConverter.toInternalImpl(DataFormatConverters.java:1068)
at 
org.apache.flink.table.dataformat.DataFormatConverters$DataFormatConverter.toInternal(DataFormatConverters.java:344)
at 
org.apache.flink.table.dataformat.DataFormatConverters$RowConverter.toInternalImpl(DataFormatConverters.java:1377)
at 
org.apache.flink.table.dataformat.DataFormatConverters$RowConverter.toInternalImpl(DataFormatConverters.java:1365)
at 
org.apache.flink.table.dataformat.DataFormatConverters$DataFormatConverter.toInternal(DataFormatConverters.java:344)
at SourceConversion$1.processElement(Unknown Source)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:714)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:689)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:669)
at 

[jira] [Created] (FLINK-16816) planner doesn't parse timestamp array correctly

2020-03-26 Thread Bowen Li (Jira)
Bowen Li created FLINK-16816:


 Summary: planner doesn't parse timestamp array correctly
 Key: FLINK-16816
 URL: https://issues.apache.org/jira/browse/FLINK-16816
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner, Table SQL / Runtime
Reporter: Bowen Li
Assignee: Kurt Young
 Fix For: 1.11.0


planner doesn't parse timestamp array correctly.

 

Repro: 

In an input format's (like JDBCInputFormat) \{{nextRecord(Row)}} API:
 # when setting a timestamp datum as java.sql.Timestamp, it works fine
 # when setting an array of timestamp datums as java.sql.Timestamp[], it 
breaks with the stack trace below

 
{code:java}
/Caused by: java.lang.ClassCastException: java.sql.Timestamp cannot be cast to 
java.time.LocalDateTime
at 
org.apache.flink.table.dataformat.DataFormatConverters$LocalDateTimeConverter.toInternalImpl(DataFormatConverters.java:748)
at 
org.apache.flink.table.dataformat.DataFormatConverters$ObjectArrayConverter.toBinaryArray(DataFormatConverters.java:1110)
at 
org.apache.flink.table.dataformat.DataFormatConverters$ObjectArrayConverter.toInternalImpl(DataFormatConverters.java:1093)
at 
org.apache.flink.table.dataformat.DataFormatConverters$ObjectArrayConverter.toInternalImpl(DataFormatConverters.java:1068)
at 
org.apache.flink.table.dataformat.DataFormatConverters$DataFormatConverter.toInternal(DataFormatConverters.java:344)
at 
org.apache.flink.table.dataformat.DataFormatConverters$RowConverter.toInternalImpl(DataFormatConverters.java:1377)
at 
org.apache.flink.table.dataformat.DataFormatConverters$RowConverter.toInternalImpl(DataFormatConverters.java:1365)
at 
org.apache.flink.table.dataformat.DataFormatConverters$DataFormatConverter.toInternal(DataFormatConverters.java:344)
at SourceConversion$1.processElement(Unknown Source)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:714)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:689)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:669)
at 
org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:52)
at 
org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:30)
at 
org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
at 
org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction.run(InputFormatSourceFunction.java:93)
at 
org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100)
at 
org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63)
at 
org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:208)
{code}

It seems that the planner runtime handles java.sql.Timestamp differently in 
these two cases.
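The failure mode above can be reproduced in plain Java: the array path effectively casts each element to java.time.LocalDateTime instead of converting it. This is an illustrative sketch (class and method names are hypothetical, not Flink's converter code):

```java
import java.sql.Timestamp;
import java.time.LocalDateTime;
import java.util.Arrays;

public class TimestampArrayConversion {
    // A scalar Timestamp can be converted explicitly...
    static LocalDateTime toLocal(Timestamp ts) {
        return ts.toLocalDateTime();
    }

    public static void main(String[] args) {
        Timestamp[] sqlArray = { Timestamp.valueOf("2020-03-26 00:00:00") };
        Object[] elements = sqlArray;
        try {
            // ...but a blind element cast, which is effectively what the
            // array code path does, fails at runtime:
            LocalDateTime first = (LocalDateTime) elements[0];
            System.out.println(first);
        } catch (ClassCastException e) {
            System.out.println("cast failed: " + e);
        }
        // Converting each element explicitly works:
        LocalDateTime[] converted = Arrays.stream(sqlArray)
                .map(TimestampArrayConversion::toLocal)
                .toArray(LocalDateTime[]::new);
        System.out.println(converted[0]);
    }
}
```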



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16811) introduce row converter API to JDBCDialect

2020-03-26 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-16811:
-
Summary: introduce row converter API to JDBCDialect  (was: introduce 
JDBCRowConverter)

> introduce row converter API to JDBCDialect
> --
>
> Key: FLINK-16811
> URL: https://issues.apache.org/jira/browse/FLINK-16811
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.11.0
>
>
> We may need to introduce a JDBCRowConverter interface to convert a db-specific 
> row from JDBC into a Flink row.
> E.g. for Postgres, the array returned from JDBC is a PgArray, not a Java 
> array, so we need to do such conversion in JDBCRowConverter.
>  
> Each database should implement its own row converter.
>  
>  
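A hypothetical sketch of such a per-dialect converter (interface and class names are illustrative; this is not Flink's actual API):

```java
import java.sql.ResultSet;
import java.sql.SQLException;

// Hypothetical per-dialect converter: maps one JDBC result-set row
// into the field values a Flink Row expects.
interface JdbcRowConverter {
    Object[] toFlinkFields(ResultSet rs, int arity) throws SQLException;
}

// A Postgres-flavoured implementation would unwrap driver-specific
// types, e.g. java.sql.Array (backed by PgArray) into a plain Java array.
class PostgresRowConverter implements JdbcRowConverter {
    @Override
    public Object[] toFlinkFields(ResultSet rs, int arity) throws SQLException {
        Object[] fields = new Object[arity];
        for (int i = 0; i < arity; i++) {
            Object value = rs.getObject(i + 1); // JDBC columns are 1-based
            if (value instanceof java.sql.Array) {
                value = ((java.sql.Array) value).getArray(); // PgArray -> Object[]
            }
            fields[i] = value;
        }
        return fields;
    }
}
```

Each dialect would then register its own implementation, so the input format stays free of db-specific branching.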



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16812) introduce Postgres row converter to PostgresDialect

2020-03-26 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-16812:
-
Summary: introduce Postgres row converter to PostgresDialect  (was: 
introduce PostgresRowConverter)

> introduce Postgres row converter to PostgresDialect
> ---
>
> Key: FLINK-16812
> URL: https://issues.apache.org/jira/browse/FLINK-16812
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.11.0
>
>
> per https://issues.apache.org/jira/browse/FLINK-16811



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16815) add e2e tests for reading from postgres with JDBCTableSource and PostgresCatalog

2020-03-26 Thread Bowen Li (Jira)
Bowen Li created FLINK-16815:


 Summary: add e2e tests for reading from postgres with 
JDBCTableSource and PostgresCatalog
 Key: FLINK-16815
 URL: https://issues.apache.org/jira/browse/FLINK-16815
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / JDBC
Reporter: Bowen Li
Assignee: Bowen Li
 Fix For: 1.11.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16813) JDBCInputFormat doesn't correctly map Short

2020-03-26 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-16813:
-
Summary:  JDBCInputFormat doesn't correctly map Short  (was:  
JDBCInputFormat doesn't correctly map Short and Bytes)

>  JDBCInputFormat doesn't correctly map Short
> 
>
> Key: FLINK-16813
> URL: https://issues.apache.org/jira/browse/FLINK-16813
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.10.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.11.0
>
>
> Currently, when JDBCInputFormat converts a JDBC result set row to a Flink Row, 
> it doesn't check the type returned from the JDBC result set.
> A Short from the JDBC result set actually comes back as an Integer.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-16810) add back PostgresCatalogITCase

2020-03-26 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li closed FLINK-16810.

Resolution: Fixed

> add back PostgresCatalogITCase
> --
>
> Key: FLINK-16810
> URL: https://issues.apache.org/jira/browse/FLINK-16810
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.11.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16811) introduce JDBCRowConverter

2020-03-26 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-16811:
-
Description: 
We may need to introduce a JDBCRowConverter interface to convert a db-specific 
row from JDBC into a Flink row.

E.g. for Postgres, the array returned from JDBC is a PgArray, not a Java 
array, so we need to do such conversion in JDBCRowConverter.

 

Each database should implement its own row converter.

 

 

  was:
we may need to introduce JDBCRowConverter interface to convert a db specific 
row from jdbc to Flink row. Dbs should implement their own row converters.

 

 


> introduce JDBCRowConverter
> --
>
> Key: FLINK-16811
> URL: https://issues.apache.org/jira/browse/FLINK-16811
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.11.0
>
>
> We may need to introduce a JDBCRowConverter interface to convert a db-specific 
> row from JDBC into a Flink row.
> E.g. for Postgres, the array returned from JDBC is a PgArray, not a Java 
> array, so we need to do such conversion in JDBCRowConverter.
>  
> Each database should implement its own row converter.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16814) StringUtils.arrayToString() doesn't convert byte[] correctly

2020-03-26 Thread Bowen Li (Jira)
Bowen Li created FLINK-16814:


 Summary: StringUtils.arrayToString() doesn't convert byte[] 
correctly
 Key: FLINK-16814
 URL: https://issues.apache.org/jira/browse/FLINK-16814
 Project: Flink
  Issue Type: Bug
  Components: API / Core
Affects Versions: 1.10.0
Reporter: Bowen Li
Assignee: Bowen Li
 Fix For: 1.11.0


StringUtils.arrayToString() doesn't convert byte[] correctly. It uses 
Arrays.toString(), but it should construct a new String from the byte[] instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16813) JDBCInputFormat doesn't correctly map Short and Bytes

2020-03-26 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-16813:
-
Description: 
Currently, when JDBCInputFormat converts a JDBC result set row to a Flink Row, 
it doesn't check the type returned from the JDBC result set.

A Short from the JDBC result set actually comes back as an Integer.

 

  was:
currently when JDBCInputFormat converts a JDBC result set row to Flink Row, it 
doesn't check the type returned from jdbc result set.

Short from jdbc result set actually returns an Integer, and bytes need to be 
converted to byte[]

 


>  JDBCInputFormat doesn't correctly map Short and Bytes
> --
>
> Key: FLINK-16813
> URL: https://issues.apache.org/jira/browse/FLINK-16813
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.10.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.11.0
>
>
> Currently, when JDBCInputFormat converts a JDBC result set row to a Flink Row, 
> it doesn't check the type returned from the JDBC result set.
> A Short from the JDBC result set actually comes back as an Integer.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16813) JDBCInputFormat doesn't correctly map Short and Bytes

2020-03-26 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-16813:
-
Description: 
Currently, when JDBCInputFormat converts a JDBC result set row to a Flink Row, 
it doesn't check the type returned from the JDBC result set.

A Short from the JDBC result set actually comes back as an Integer, and bytes 
need to be converted to byte[].

 

  was:
currently when JDBCInputFormat converts a JDBC result set row to Flink Row, it 
doesn't check the type returned from jdbc result set.

Problem is that short from jdbc result set actually returns an Integer.

 


>  JDBCInputFormat doesn't correctly map Short and Bytes
> --
>
> Key: FLINK-16813
> URL: https://issues.apache.org/jira/browse/FLINK-16813
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.10.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.11.0
>
>
> Currently, when JDBCInputFormat converts a JDBC result set row to a Flink Row, 
> it doesn't check the type returned from the JDBC result set.
> A Short from the JDBC result set actually comes back as an Integer, and bytes 
> need to be converted to byte[].
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16813) JDBCInputFormat doesn't correctly map Short and Bytes

2020-03-26 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-16813:
-
Summary:  JDBCInputFormat doesn't correctly map Short and Bytes  (was:  
JDBCInputFormat doesn't correctly map Short)

>  JDBCInputFormat doesn't correctly map Short and Bytes
> --
>
> Key: FLINK-16813
> URL: https://issues.apache.org/jira/browse/FLINK-16813
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.10.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.11.0
>
>
> Currently, when JDBCInputFormat converts a JDBC result set row to a Flink Row, 
> it doesn't check the type returned from the JDBC result set.
> The problem is that a short from the JDBC result set actually comes back as an 
> Integer.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16813) JDBCInputFormat doesn't correctly map Short

2020-03-26 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-16813:
-
Description: 
Currently, when JDBCInputFormat converts a JDBC result set row to a Flink Row, 
it doesn't check the type returned from the JDBC result set.

The problem is that a short column actually comes back from JDBC as an Integer.

 

  was:
currently when JDBCInputFormat converts a JDBC result set row to Flink Row, it 
doesn't check the type returned from jdbc result set. Problem is that object 
from jdbc result set doesn't always match the corresponding type in relational 
db. E.g. a short column in Postgres actually returns an Integer via jdbc.

 


>  JDBCInputFormat doesn't correctly map Short
> 
>
> Key: FLINK-16813
> URL: https://issues.apache.org/jira/browse/FLINK-16813
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.10.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.11.0
>
>
> Currently, when JDBCInputFormat converts a JDBC result set row to a Flink Row, 
> it doesn't check the type returned from the JDBC result set.
> The problem is that a short column actually comes back from JDBC as an 
> Integer.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16813) JDBCInputFormat doesn't correctly map Short

2020-03-26 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-16813:
-
Description: 
Currently, when JDBCInputFormat converts a JDBC result set row to a Flink Row, 
it doesn't check the type returned from the JDBC result set.

The problem is that a short from the JDBC result set actually comes back as an 
Integer.

 

  was:
currently when JDBCInputFormat converts a JDBC result set row to Flink Row, it 
doesn't check the type returned from jdbc result set.

Problem is that short from jdbc result set actually returns an Integer via jdbc.

 


>  JDBCInputFormat doesn't correctly map Short
> 
>
> Key: FLINK-16813
> URL: https://issues.apache.org/jira/browse/FLINK-16813
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.10.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.11.0
>
>
> Currently, when JDBCInputFormat converts a JDBC result set row to a Flink Row, 
> it doesn't check the type returned from the JDBC result set.
> The problem is that a short from the JDBC result set actually comes back as an 
> Integer.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16813) JDBCInputFormat doesn't correctly map Short

2020-03-26 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-16813:
-
Parent: (was: FLINK-15350)
Issue Type: Bug  (was: Sub-task)

>  JDBCInputFormat doesn't correctly map Short
> 
>
> Key: FLINK-16813
> URL: https://issues.apache.org/jira/browse/FLINK-16813
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.10.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.11.0
>
>
> Currently, when JDBCInputFormat converts a JDBC result set row to a Flink Row, 
> it doesn't check the type returned from the JDBC result set. The problem is 
> that the object from the JDBC result set doesn't always match the 
> corresponding type in the relational db. E.g. a short column in Postgres 
> actually returns an Integer via JDBC.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16811) introduce JDBCRowConverter

2020-03-26 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-16811:
-
Description: 
We may need to introduce a JDBCRowConverter interface to convert a db-specific 
row from JDBC into a Flink row. Each database should implement its own row 
converter.

 

 

  was:
currently when JDBCInputFormat converts a JDBC result set row to Flink Row, it 
doesn't check the type returned from jdbc result set. Problem is that object 
from jdbc result set doesn't always match the corresponding type in relational 
db. E.g. a short column in Postgres actually returns an Integer via jdbc. And 
such mismatch can be db-dependent. 

Thus, we introduce JDBCRowConverter interface to convert a db specific row from 
jdbc to Flink row. Dbs should implement their own row converters.

 

 


> introduce JDBCRowConverter
> --
>
> Key: FLINK-16811
> URL: https://issues.apache.org/jira/browse/FLINK-16811
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.11.0
>
>
> we may need to introduce JDBCRowConverter interface to convert a db specific 
> row from jdbc to Flink row. Dbs should implement their own row converters.
>  
>  
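The converter contract proposed in this ticket could look roughly like the sketch below. All class and method names here are illustrative assumptions, not Flink's actual API:

```java
// Hypothetical sketch of the proposed converter contract: each database
// dialect supplies its own mapping from raw JDBC values to the Java types
// a Flink Row expects. Names are illustrative, not Flink's actual API.
interface JdbcRowConverter {
    Object[] toFlinkRow(Object[] jdbcRow);
}

// A Postgres-flavoured converter narrowing Integer back to Short for
// SMALLINT columns, since the Postgres driver returns them as Integer.
class PostgresRowConverter implements JdbcRowConverter {
    private final Class<?>[] expectedTypes;

    PostgresRowConverter(Class<?>[] expectedTypes) {
        this.expectedTypes = expectedTypes;
    }

    @Override
    public Object[] toFlinkRow(Object[] jdbcRow) {
        Object[] out = new Object[jdbcRow.length];
        for (int i = 0; i < jdbcRow.length; i++) {
            Object v = jdbcRow[i];
            if (expectedTypes[i] == Short.class && v instanceof Integer) {
                out[i] = ((Integer) v).shortValue();
            } else {
                out[i] = v;
            }
        }
        return out;
    }
}
```

Because each dialect owns its converter, db-dependent mismatches stay localized to one class per database.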





[jira] [Created] (FLINK-16813) JDBCInputFormat doesn't correctly map Short

2020-03-26 Thread Bowen Li (Jira)
Bowen Li created FLINK-16813:


 Summary:  JDBCInputFormat doesn't correctly map Short
 Key: FLINK-16813
 URL: https://issues.apache.org/jira/browse/FLINK-16813
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / JDBC
Affects Versions: 1.10.0
Reporter: Bowen Li
Assignee: Bowen Li
 Fix For: 1.11.0


currently when JDBCInputFormat converts a JDBC result set row to a Flink Row, it 
doesn't check the type of the objects returned by the JDBC result set. The 
problem is that an object from a JDBC result set doesn't always match the 
corresponding column type in the relational db. E.g. a short column in Postgres 
actually returns an Integer via JDBC.

 





[jira] [Updated] (FLINK-16811) introduce JDBCRowConverter

2020-03-26 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-16811:
-
Description: 
currently when JDBCInputFormat converts a JDBC result set row to a Flink Row, it 
doesn't check the type of the objects returned by the JDBC result set. The 
problem is that an object from a JDBC result set doesn't always match the 
corresponding column type in the relational db. E.g. a short column in Postgres 
actually returns an Integer via JDBC. And such a mismatch can be db-dependent. 

Thus, we introduce a JDBCRowConverter interface to convert a db-specific row from 
JDBC into a Flink row. Each database should implement its own row converter.

 

 

  was:
currently when JDBCInputFormat converts a JDBC result set row to Flink Row, it 
doesn't check the type returned from jdbc result set. Problem is that object 
from jdbc result set doesn't always match the corresponding type in relational 
db. E.g. a short column in Postgres actually returns an Integer via jdbc. And 
such mismatch can be db-dependent.

 

Thus, we introduce JDBCRowConverter interface to convert a db specific row from 
jdbc to Flink row. Dbs should implement their own row converters.

 

 


> introduce JDBCRowConverter
> --
>
> Key: FLINK-16811
> URL: https://issues.apache.org/jira/browse/FLINK-16811
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.11.0
>
>
> currently when JDBCInputFormat converts a JDBC result set row to Flink Row, 
> it doesn't check the type returned from jdbc result set. Problem is that 
> object from jdbc result set doesn't always match the corresponding type in 
> relational db. E.g. a short column in Postgres actually returns an Integer 
> via jdbc. And such mismatch can be db-dependent. 
> Thus, we introduce JDBCRowConverter interface to convert a db specific row 
> from jdbc to Flink row. Dbs should implement their own row converters.
>  
>  





[jira] [Created] (FLINK-16812) introduce PostgresRowConverter

2020-03-26 Thread Bowen Li (Jira)
Bowen Li created FLINK-16812:


 Summary: introduce PostgresRowConverter
 Key: FLINK-16812
 URL: https://issues.apache.org/jira/browse/FLINK-16812
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / JDBC
Reporter: Bowen Li
Assignee: Bowen Li
 Fix For: 1.11.0


per https://issues.apache.org/jira/browse/FLINK-16811





[jira] [Updated] (FLINK-16811) introduce JDBCRowConverter

2020-03-26 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-16811:
-
Parent: FLINK-15350
Issue Type: Sub-task  (was: Improvement)

> introduce JDBCRowConverter
> --
>
> Key: FLINK-16811
> URL: https://issues.apache.org/jira/browse/FLINK-16811
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.11.0
>
>
> currently when JDBCInputFormat converts a JDBC result set row to Flink Row, 
> it doesn't check the type returned from jdbc result set. Problem is that 
> object from jdbc result set doesn't always match the corresponding type in 
> relational db. E.g. a short column in Postgres actually returns an Integer via 
> jdbc. And such mismatch can be db-dependent.
>  
> Thus, we introduce JDBCRowConverter interface to convert a db specific row 
> from jdbc to Flink row. Dbs should implement their own row converters.
>  
>  





[jira] [Created] (FLINK-16811) introduce JDBCRowConverter

2020-03-26 Thread Bowen Li (Jira)
Bowen Li created FLINK-16811:


 Summary: introduce JDBCRowConverter
 Key: FLINK-16811
 URL: https://issues.apache.org/jira/browse/FLINK-16811
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / JDBC
Reporter: Bowen Li
Assignee: Bowen Li
 Fix For: 1.11.0


currently when JDBCInputFormat converts a JDBC result set row to a Flink Row, it 
doesn't check the type of the objects returned by the JDBC result set. The 
problem is that an object from a JDBC result set doesn't always match the 
corresponding column type in the relational db. E.g. a short column in Postgres 
actually returns an Integer via JDBC. And such a mismatch can be db-dependent.

 

Thus, we introduce JDBCRowConverter interface to convert a db specific row from 
jdbc to Flink row. Dbs should implement their own row converters.

 

 





[jira] [Commented] (FLINK-16810) add back PostgresCatalogITCase

2020-03-26 Thread Bowen Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17067934#comment-17067934
 ] 

Bowen Li commented on FLINK-16810:
--

master: 8cf8d8d86121092714ef71109f91670506457710

> add back PostgresCatalogITCase
> --
>
> Key: FLINK-16810
> URL: https://issues.apache.org/jira/browse/FLINK-16810
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.11.0
>
>






[jira] [Created] (FLINK-16810) add back PostgresCatalogITCase

2020-03-26 Thread Bowen Li (Jira)
Bowen Li created FLINK-16810:


 Summary: add back PostgresCatalogITCase
 Key: FLINK-16810
 URL: https://issues.apache.org/jira/browse/FLINK-16810
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / JDBC
Reporter: Bowen Li
Assignee: Bowen Li
 Fix For: 1.11.0








[jira] [Closed] (FLINK-16498) connect PostgresCatalog to table planner

2020-03-26 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li closed FLINK-16498.

Resolution: Fixed

master: 25f2d626043145c3a87373754d79b512394017a8

> connect PostgresCatalog to table planner
> 
>
> Key: FLINK-16498
> URL: https://issues.apache.org/jira/browse/FLINK-16498
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.11.0
>
>






[jira] [Closed] (FLINK-16702) develop JDBCCatalogFactory, descriptor, and validator for service discovery

2020-03-26 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li closed FLINK-16702.

Resolution: Fixed

master: ce52f3bcddc032cc4b3cb54c33eb1376df42c887

> develop JDBCCatalogFactory, descriptor, and validator for service discovery
> ---
>
> Key: FLINK-16702
> URL: https://issues.apache.org/jira/browse/FLINK-16702
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[jira] [Assigned] (FLINK-16801) PostgresCatalogITCase fails with IOException

2020-03-26 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li reassigned FLINK-16801:


Assignee: Bowen Li

> PostgresCatalogITCase fails with IOException
> 
>
> Key: FLINK-16801
> URL: https://issues.apache.org/jira/browse/FLINK-16801
> Project: Flink
>  Issue Type: Task
>  Components: Connectors / JDBC
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Assignee: Bowen Li
>Priority: Major
>
> CI: 
> https://travis-ci.org/github/apache/flink/jobs/666922577?utm_medium=notification_source=slack
> {code}
> 07:03:47.913 [INFO] Running 
> org.apache.flink.api.java.io.jdbc.JDBCTableSourceITCase
> 07:03:50.588 [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time 
> elapsed: 16.693 s <<< FAILURE! - in 
> org.apache.flink.api.java.io.jdbc.catalog.PostgresCatalogITCase
> 07:03:50.595 [ERROR] 
> org.apache.flink.api.java.io.jdbc.catalog.PostgresCatalogITCase  Time 
> elapsed: 16.693 s  <<< ERROR!
> java.io.IOException: Gave up waiting for server to start after 1ms
> Caused by: java.sql.SQLException: connect failed
> Caused by: java.net.ConnectException: Connection refused (Connection refused)
> Thu Mar 26 07:03:50 UTC 2020 Thread[main,5,main] 
> java.lang.NoSuchFieldException: DEV_NULL
> 
> Thu Mar 26 07:03:51 UTC 2020:
> Booting Derby version The Apache Software Foundation - Apache Derby - 
> 10.14.2.0 - (1828579): instance a816c00e-0171-15a7-7fa7-0c06c410 
> on database directory 
> memory:/home/travis/build/apache/flink/flink-connectors/flink-jdbc/target/test
>  with class loader sun.misc.Launcher$AppClassLoader@677327b6 
> Loaded from 
> file:/home/travis/.m2/repository/org/apache/derby/derby/10.14.2.0/derby-10.14.2.0.jar
> java.vendor=Private Build
> java.runtime.version=1.8.0_242-8u242-b08-0ubuntu3~16.04-b08
> user.dir=/home/travis/build/apache/flink/flink-connectors/flink-jdbc/target
> os.name=Linux
> os.arch=amd64
> os.version=4.15.0-1055-gcp
> derby.system.home=null
> derby.stream.error.field=org.apache.flink.api.java.io.jdbc.JDBCTestBase.DEV_NULL
> Database Class Loader started - derby.database.classpath=''
> 07:03:51.916 [INFO] Running 
> org.apache.flink.api.java.io.jdbc.JDBCLookupFunctionITCase
> 07:03:59.956 [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time 
> elapsed: 12.041 s - in org.apache.flink.api.java.io.jdbc.JDBCTableSourceITCase
> 07:04:04.193 [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time 
> elapsed: 12.275 s - in 
> org.apache.flink.api.java.io.jdbc.JDBCLookupFunctionITCase
> {code}





[jira] [Updated] (FLINK-16781) add built-in cache mechanism for LookupableTableSource in lookup join

2020-03-25 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-16781:
-
Description: 
Currently there's no built-in cache mechanism for LookupableTableSource. 
Developers have one of the following options:

1) build their own cache

2) use external cache services, like redis, memcached, etc., for which Flink 
unfortunately doesn't provide an out-of-the-box lookup function either

3) give up caching and bear with poor lookup performance

 

Flink should provide a generic caching layer for all the LookupableTableSource 
to take advantage of.

 

cc [~ykt836] [~jark] [~lzljs3620320]

  was:
Currently there's no built-in cache mechanism for LookupableTableSource. 
Developers have to either build their own cache or give up caching and bear 
with poor lookup performance.

Flink should provide a generic caching layer for all the LookupableTableSource 
to take advantage of.

cc [~ykt836] [~jark] [~lzljs3620320]


> add built-in cache mechanism for LookupableTableSource in lookup join
> -
>
> Key: FLINK-16781
> URL: https://issues.apache.org/jira/browse/FLINK-16781
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Bowen Li
>Priority: Major
>
> Currently there's no built-in cache mechanism for LookupableTableSource. 
> Developers have one of the following options:
> 1) build their own cache
> 2) use external cache services, like redis, memcached, etc., for which Flink 
> unfortunately doesn't provide an out-of-the-box lookup function either
> 3) give up caching and bear with poor lookup performance
>  
> Flink should provide a generic caching layer for all the 
> LookupableTableSource to take advantage of.
>  
> cc [~ykt836] [~jark] [~lzljs3620320]
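A generic caching layer of the kind proposed here can be sketched as a small bounded LRU cache over the lookup key. The names below are illustrative assumptions, not Flink's actual API, and a production version would also need a configurable TTL so stale dimension rows eventually expire:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hedged sketch of a generic lookup cache: a bounded LRU built on an
// access-ordered LinkedHashMap. Names are illustrative assumptions.
class LookupCache<K, V> {
    private final Map<K, V> cache;

    LookupCache(final int maxSize) {
        // access-order = true makes iteration order LRU; the eldest entry
        // is evicted whenever the size bound is exceeded.
        this.cache = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxSize;
            }
        };
    }

    V get(K key) { return cache.get(key); }

    void put(K key, V value) { cache.put(key, value); }

    int size() { return cache.size(); }
}
```

A LookupableTableSource would consult the cache first and only hit the external system on a miss, which is exactly the behavior developers currently have to rebuild by hand.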





[jira] [Commented] (FLINK-16780) improve Flink lookup join

2020-03-25 Thread Bowen Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17066930#comment-17066930
 ] 

Bowen Li commented on FLINK-16780:
--

cc [~ykt836] [~jark] [~lzljs3620320]

> improve Flink lookup join 
> --
>
> Key: FLINK-16780
> URL: https://issues.apache.org/jira/browse/FLINK-16780
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API, Table SQL / Planner
>Reporter: Bowen Li
>Priority: Major
>
> this is an umbrella ticket to group all the improvements related to lookup 
> join in Flink





[jira] [Created] (FLINK-16781) add built-in cache mechanism for LookupableTableSource in lookup join

2020-03-25 Thread Bowen Li (Jira)
Bowen Li created FLINK-16781:


 Summary: add built-in cache mechanism for LookupableTableSource in 
lookup join
 Key: FLINK-16781
 URL: https://issues.apache.org/jira/browse/FLINK-16781
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API
Reporter: Bowen Li


Currently there's no built-in cache mechanism for LookupableTableSource. 
Developers have to either build their own cache or give up caching and bear 
with poor lookup performance.

Flink should provide a generic caching layer for all the LookupableTableSource 
to take advantage of.

cc [~ykt836] [~jark] [~lzljs3620320]





[jira] [Created] (FLINK-16780) improve Flink lookup join

2020-03-25 Thread Bowen Li (Jira)
Bowen Li created FLINK-16780:


 Summary: improve Flink lookup join 
 Key: FLINK-16780
 URL: https://issues.apache.org/jira/browse/FLINK-16780
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / API, Table SQL / Planner
Reporter: Bowen Li


this is an umbrella ticket to group all the improvements related to lookup join 
in Flink





[jira] [Commented] (FLINK-14902) JDBCTableSource support AsyncLookupFunction

2020-03-25 Thread Bowen Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17066915#comment-17066915
 ] 

Bowen Li commented on FLINK-14902:
--

[~hailong wang] [~jark] [~lzljs3620320] Hi, what's the status of the PR? Can 
we make it into 1.11?

> JDBCTableSource support AsyncLookupFunction
> ---
>
> Key: FLINK-14902
> URL: https://issues.apache.org/jira/browse/FLINK-14902
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Affects Versions: 1.9.0
>Reporter: hailong wang
>Assignee: hailong wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> JDBCTableSource support AsyncLookupFunction





[jira] [Updated] (FLINK-16498) connect PostgresCatalog to table planner

2020-03-23 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-16498:
-
Summary: connect PostgresCatalog to table planner  (was: make Postgres 
table work end-2-end in Flink SQL with PostgresJDBCCatalog)

> connect PostgresCatalog to table planner
> 
>
> Key: FLINK-16498
> URL: https://issues.apache.org/jira/browse/FLINK-16498
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.11.0
>
>






[jira] [Updated] (FLINK-16702) develop JDBCCatalogFactory, descriptor, and validator for service discovery

2020-03-20 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-16702:
-
Summary: develop JDBCCatalogFactory, descriptor, and validator for service 
discovery  (was: develop JDBCCatalogFactory for service discovery)

> develop JDBCCatalogFactory, descriptor, and validator for service discovery
> ---
>
> Key: FLINK-16702
> URL: https://issues.apache.org/jira/browse/FLINK-16702
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.11.0
>
>






[jira] [Created] (FLINK-16702) develop JDBCCatalogFactory for service discovery

2020-03-20 Thread Bowen Li (Jira)
Bowen Li created FLINK-16702:


 Summary: develop JDBCCatalogFactory for service discovery
 Key: FLINK-16702
 URL: https://issues.apache.org/jira/browse/FLINK-16702
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / JDBC
Reporter: Bowen Li
Assignee: Bowen Li
 Fix For: 1.11.0








[jira] [Closed] (FLINK-16472) support precision of timestamp and time data types

2020-03-20 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li closed FLINK-16472.

Resolution: Fixed

master: 75ad29cb9f4f377df27b71e67dbd33f36bb08bee

> support precision of timestamp and time data types
> --
>
> Key: FLINK-16472
> URL: https://issues.apache.org/jira/browse/FLINK-16472
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.11.0
>
>






[jira] [Closed] (FLINK-16471) develop PostgresCatalog

2020-03-20 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li closed FLINK-16471.

Release Note: added JDBCCatalog and PostgresCatalog. They help Flink 
connect to relational databases and leverage their metadata for SQL jobs, 
saving users from manually typing in table schemas and other metadata.
  Resolution: Fixed

74b8bdee9fb0bf7cfc27ca8d992dac2a07473a0c

> develop PostgresCatalog
> ---
>
> Key: FLINK-16471
> URL: https://issues.apache.org/jira/browse/FLINK-16471
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>






[jira] [Updated] (FLINK-16471) develop PostgresCatalog

2020-03-20 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-16471:
-
Summary: develop PostgresCatalog  (was: develop PostgresJDBCCatalog)

> develop PostgresCatalog
> ---
>
> Key: FLINK-16471
> URL: https://issues.apache.org/jira/browse/FLINK-16471
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>






[jira] [Updated] (FLINK-16575) develop HBaseCatalog to integrate HBase metadata into Flink

2020-03-13 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-16575:
-
Description: 
develop HBaseCatalog to integrate HBase metadata into Flink

This ticket includes the necessary initial investigation to determine whether 
this is feasible and brings practical value, since HBase/Elasticsearch are schemaless.
 
If it is valuable, then partition/function/stats/views probably shouldn't be 
implemented, which would be very similar to PostgresCatalog 
([https://github.com/apache/flink/pull/11336]). HiveCatalog can also be a good 
reference.

  was:develop HBaseCatalog to integrate HBase metadata into Flink


> develop HBaseCatalog to integrate HBase metadata into Flink
> ---
>
> Key: FLINK-16575
> URL: https://issues.apache.org/jira/browse/FLINK-16575
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / HBase
>Reporter: Bowen Li
>Priority: Major
>
> develop HBaseCatalog to integrate HBase metadata into Flink
> This ticket includes the necessary initial investigation to determine whether 
> this is feasible and brings practical value, since HBase/Elasticsearch are schemaless.
>  
> If it is valuable, then partition/function/stats/views probably shouldn't be 
> implemented, which would be very similar to PostgresCatalog 
> ([https://github.com/apache/flink/pull/11336]). HiveCatalog can also be a 
> good reference.





[jira] [Commented] (FLINK-16575) develop HBaseCatalog to integrate HBase metadata into Flink

2020-03-13 Thread Bowen Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17058974#comment-17058974
 ] 

Bowen Li commented on FLINK-16575:
--

this ticket would involve initial research to see if it is feasible and has 
practical value

> develop HBaseCatalog to integrate HBase metadata into Flink
> ---
>
> Key: FLINK-16575
> URL: https://issues.apache.org/jira/browse/FLINK-16575
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / HBase
>Reporter: Bowen Li
>Priority: Major
>
> develop HBaseCatalog to integrate HBase metadata into Flink





[jira] [Created] (FLINK-16575) develop HBaseCatalog to integrate HBase metadata into Flink

2020-03-12 Thread Bowen Li (Jira)
Bowen Li created FLINK-16575:


 Summary: develop HBaseCatalog to integrate HBase metadata into 
Flink
 Key: FLINK-16575
 URL: https://issues.apache.org/jira/browse/FLINK-16575
 Project: Flink
  Issue Type: New Feature
  Components: Connectors / HBase
Reporter: Bowen Li


develop HBaseCatalog to integrate HBase metadata into Flink





[jira] [Commented] (FLINK-16492) Flink SQL can not read decimal in hive parquet

2020-03-09 Thread Bowen Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17054691#comment-17054691
 ] 

Bowen Li commented on FLINK-16492:
--

cc [~lirui] [~lzljs3620320]

> Flink SQL can not read decimal in hive parquet
> --
>
> Key: FLINK-16492
> URL: https://issues.apache.org/jira/browse/FLINK-16492
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.10.0
> Environment: ||name||version||
> |flink|1.10.0|
> |hive|1.1.0-cdh5.7.2|
> |hive-exec|hive-exec-1.1.0-cdh5.7.2.jar|
> |hive-metastore|hive-metastore-1.1.0-cdh5.7.2.jar|
>Reporter: xingoo
>Priority: Major
>
> data:
> {code:java}
> //代码占位符
> {"a":1.3,"b":"b1","d":"1"}
> {"a":2.4,"c":"c2","d":"1"}
> {"a":5.6,"b":"b3","c":"c3","d":"1"}
> {code}
> error:
> {code:java}
> //代码占位符
> 2020-03-09 09:03:25,726 ERROR 
> com.ververica.flink.table.gateway.rest.handler.ResultFetchHandler  - 
> Unhandled exception.
> com.ververica.flink.table.gateway.SqlExecutionException: Error while 
> submitting job.
>   at 
> com.ververica.flink.table.gateway.result.BatchResult.lambda$startRetrieval$1(BatchResult.java:78)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>   at 
> java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:561)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:739)
>   at 
> java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:443)
>   at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
>   at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
>   at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
>   at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
> Caused by: java.util.concurrent.CompletionException: 
> org.apache.flink.client.program.ProgramInvocationException: Job failed 
> (JobID: e94ae4c5c190bb6b2b92c3ccf893aa1d)
>   at 
> org.apache.flink.client.deployment.ClusterClientJobClientAdapter.lambda$null$6(ClusterClientJobClientAdapter.java:112)
>   at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
>   at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.lambda$pollResourceAsync$21(RestClusterClient.java:565)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
>   at 
> org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$8(FutureUtils.java:291)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>   at 
> java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:561)
>   at 
> java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:929)
>   at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.flink.client.program.ProgramInvocationException: Job 
> failed (JobID: e94ae4c5c190bb6b2b92c3ccf893aa1d)
>   ... 20 more
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job 
> execution failed.
>   at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
>   at 
> org.apache.flink.client.deployment.ClusterClientJobClientAdapter.lambda$null$6(ClusterClientJobClientAdapter.java:110)
>   ... 19 more
> Caused by: org.apache.flink.runtime.JobException: Recovery is suppressed by 
> NoRestartBackoffTimeStrategy
>   

[jira] [Updated] (FLINK-15352) develop MySQLCatalog to connect Flink with MySQL tables and ecosystem

2020-03-08 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-15352:
-
Summary: develop MySQLCatalog  to connect Flink with MySQL tables and 
ecosystem  (was: develop MySQLJDBCCatalog  to connect Flink with MySQL tables 
and ecosystem)

> develop MySQLCatalog  to connect Flink with MySQL tables and ecosystem
> --
>
> Key: FLINK-15352
> URL: https://issues.apache.org/jira/browse/FLINK-15352
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Priority: Major
>






[jira] [Updated] (FLINK-16474) develop OracleCatalog to connect Flink with Oracle databases and ecosystem

2020-03-08 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-16474:
-
Summary: develop OracleCatalog to connect Flink with Oracle databases and 
ecosystem  (was: develop OracleJDBCCatalog to connect Flink with Oracle 
databases and ecosystem)

> develop OracleCatalog to connect Flink with Oracle databases and ecosystem
> --
>
> Key: FLINK-16474
> URL: https://issues.apache.org/jira/browse/FLINK-16474
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Priority: Major
>






[jira] [Updated] (FLINK-15351) develop PostgresCatalog to connect Flink with Postgres tables and ecosystem

2020-03-08 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-15351:
-
Summary: develop PostgresCatalog to connect Flink with Postgres tables and 
ecosystem  (was: develop PostgresJDBCCatalog to connect Flink with Postgres 
tables and ecosystem)

> develop PostgresCatalog to connect Flink with Postgres tables and ecosystem
> ---
>
> Key: FLINK-15351
> URL: https://issues.apache.org/jira/browse/FLINK-15351
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.11.0
>
>






[jira] [Created] (FLINK-16498) make Postgres table work end-2-end in Flink SQL with PostgresJDBCCatalog

2020-03-08 Thread Bowen Li (Jira)
Bowen Li created FLINK-16498:


 Summary: make Postgres table work end-2-end in Flink SQL with 
PostgresJDBCCatalog
 Key: FLINK-16498
 URL: https://issues.apache.org/jira/browse/FLINK-16498
 Project: Flink
  Issue Type: Sub-task
Reporter: Bowen Li
Assignee: Bowen Li
 Fix For: 1.11.0








[jira] [Updated] (FLINK-16474) develop OracleJDBCCatalog to connect Flink with Oracle databases and ecosystem

2020-03-06 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-16474:
-
Component/s: Connectors / JDBC

> develop OracleJDBCCatalog to connect Flink with Oracle databases and ecosystem
> --
>
> Key: FLINK-16474
> URL: https://issues.apache.org/jira/browse/FLINK-16474
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Priority: Major
>






[jira] [Created] (FLINK-16474) develop OracleJDBCCatalog to connect Flink with Oracle databases and ecosystem

2020-03-06 Thread Bowen Li (Jira)
Bowen Li created FLINK-16474:


 Summary: develop OracleJDBCCatalog to connect Flink with Oracle 
databases and ecosystem
 Key: FLINK-16474
 URL: https://issues.apache.org/jira/browse/FLINK-16474
 Project: Flink
  Issue Type: New Feature
Reporter: Bowen Li






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16473) add documentation for PostgresJDBCCatalog

2020-03-06 Thread Bowen Li (Jira)
Bowen Li created FLINK-16473:


 Summary: add documentation for PostgresJDBCCatalog
 Key: FLINK-16473
 URL: https://issues.apache.org/jira/browse/FLINK-16473
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation
Reporter: Bowen Li
Assignee: Bowen Li
 Fix For: 1.11.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16471) develop PostgresJDBCCatalog

2020-03-06 Thread Bowen Li (Jira)
Bowen Li created FLINK-16471:


 Summary: develop PostgresJDBCCatalog
 Key: FLINK-16471
 URL: https://issues.apache.org/jira/browse/FLINK-16471
 Project: Flink
  Issue Type: Sub-task
Reporter: Bowen Li
Assignee: Bowen Li
 Fix For: 1.11.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16472) support precision of timestamp and time data types

2020-03-06 Thread Bowen Li (Jira)
Bowen Li created FLINK-16472:


 Summary: support precision of timestamp and time data types
 Key: FLINK-16472
 URL: https://issues.apache.org/jira/browse/FLINK-16472
 Project: Flink
  Issue Type: Sub-task
Reporter: Bowen Li
Assignee: Bowen Li
 Fix For: 1.11.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)
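FLINK-16472 is about carrying the precision of TIMESTAMP and TIME columns through the catalog: the JDBC driver reports a fractional-seconds precision per column, and the catalog has to map it into the range Flink's types accept. The helper below is a hypothetical illustration of that clamping step (not Flink code); it assumes Flink's TIMESTAMP precision range of 0 to 9, while Postgres itself reports at most 6.

```python
# Hypothetical sketch (not Flink code): keep a JDBC-reported
# fractional-seconds precision inside the range Flink's TIMESTAMP
# type accepts (assumed here to be 0..9).
MIN_PRECISION = 0
MAX_PRECISION = 9

def clamp_timestamp_precision(reported: int) -> int:
    """Clamp a driver-reported precision into the supported range."""
    return max(MIN_PRECISION, min(reported, MAX_PRECISION))

if __name__ == "__main__":
    for p in (-1, 3, 6, 12):
        print(p, "->", clamp_timestamp_precision(p))
```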


[jira] [Updated] (FLINK-15353) develop AbstractJDBCCatalog

2020-03-06 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-15353:
-
Summary: develop AbstractJDBCCatalog  (was: develop JDBCCatalog)

> develop AbstractJDBCCatalog
> ---
>
> Key: FLINK-15353
> URL: https://issues.apache.org/jira/browse/FLINK-15353
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.11.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15351) develop PostgresJDBCCatalog to connect Flink with Postgres tables and ecosystem

2020-03-06 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-15351:
-
Parent: (was: FLINK-15350)
Issue Type: New Feature  (was: Sub-task)

> develop PostgresJDBCCatalog to connect Flink with Postgres tables and 
> ecosystem
> ---
>
> Key: FLINK-15351
> URL: https://issues.apache.org/jira/browse/FLINK-15351
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.11.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-15353) develop AbstractJDBCCatalog

2020-03-06 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li closed FLINK-15353.

Resolution: Invalid

> develop AbstractJDBCCatalog
> ---
>
> Key: FLINK-15353
> URL: https://issues.apache.org/jira/browse/FLINK-15353
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.11.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15352) develop MySQLJDBCCatalog to connect Flink with MySQL tables and ecosystem

2020-03-06 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-15352:
-
Parent: (was: FLINK-15350)
Issue Type: New Feature  (was: Sub-task)

> develop MySQLJDBCCatalog  to connect Flink with MySQL tables and ecosystem
> --
>
> Key: FLINK-15352
> URL: https://issues.apache.org/jira/browse/FLINK-15352
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16471) develop PostgresJDBCCatalog

2020-03-06 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-16471:
-
Component/s: Connectors / JDBC

> develop PostgresJDBCCatalog
> ---
>
> Key: FLINK-16471
> URL: https://issues.apache.org/jira/browse/FLINK-16471
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16473) add documentation for PostgresJDBCCatalog

2020-03-06 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-16473:
-
Component/s: Connectors / JDBC

> add documentation for PostgresJDBCCatalog
> -
>
> Key: FLINK-16473
> URL: https://issues.apache.org/jira/browse/FLINK-16473
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC, Documentation
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.11.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16472) support precision of timestamp and time data types

2020-03-06 Thread Bowen Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-16472:
-
Component/s: Connectors / JDBC

> support precision of timestamp and time data types
> --
>
> Key: FLINK-16472
> URL: https://issues.apache.org/jira/browse/FLINK-16472
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
> Fix For: 1.11.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16448) add documentation for Hive table sink parallelism setting strategy

2020-03-05 Thread Bowen Li (Jira)
Bowen Li created FLINK-16448:


 Summary: add documentation for Hive table sink parallelism setting 
strategy
 Key: FLINK-16448
 URL: https://issues.apache.org/jira/browse/FLINK-16448
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Hive
Reporter: Bowen Li
Assignee: Jingsong Lee
 Fix For: 1.11.0


Per a question on the user-zh mailing list, it would be beneficial to add 
documentation for the Hive table sink parallelism setting strategy.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-11143) AskTimeoutException is thrown during job submission and completion

2020-03-04 Thread Bowen Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-11143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17051577#comment-17051577
 ] 

Bowen Li commented on FLINK-11143:
--

Another thread in which a user reports this error: 
[http://mail-archives.apache.org/mod_mbox/flink-user/201808.mbox/%3CCAAUKVn6_qnixP-=sbiuLdYxCuzMut0FouCZhOVU1=uj+axi...@mail.gmail.com%3E]

> AskTimeoutException is thrown during job submission and completion
> --
>
> Key: FLINK-11143
> URL: https://issues.apache.org/jira/browse/FLINK-11143
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.6.2, 1.10.0
>Reporter: Alex Vinnik
>Priority: Critical
>
> For more details please see the thread
> [http://mail-archives.apache.org/mod_mbox/flink-user/201812.mbox/%3cc2fb26f9-1410-4333-80f4-34807481b...@gmail.com%3E]
> On submission 
> 2018-12-12 02:28:31 ERROR JobsOverviewHandler:92 - Implementation error: 
> Unhandled exception.
>  akka.pattern.AskTimeoutException: Ask timed out on 
> [Actor[akka://flink/user/dispatcher#225683351|#225683351]] after [1 ms]. 
> Sender[null] sent message of type 
> "org.apache.flink.runtime.rpc.messages.LocalFencedMessage".
>  at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:604)
>  at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:126)
>  at 
> scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
>  at 
> scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)
>  at 
> scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
>  at 
> akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:329)
>  at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:280)
>  at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:284)
>  at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:236)
>  at java.lang.Thread.run(Thread.java:748)
>  
> On completion
>  
> {"errors":["Internal server error."," side:\njava.util.concurrent.CompletionException: 
> akka.pattern.AskTimeoutException: Ask timed out on 
> [Actor[akka://flink/user/dispatcher#105638574]] after [1 ms]. 
> Sender[null] sent message of type 
> \"org.apache.flink.runtime.rpc.messages.LocalFencedMessage\".
> at 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
> at 
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
> at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593)
> at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
> at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
> at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
> at 
> org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:772)
> at akka.dispatch.OnComplete.internal(Future.scala:258)
> at akka.dispatch.OnComplete.internal(Future.scala:256)
> at akka.dispatch.japi$CallbackBridge.apply(Future.scala:186)
> at akka.dispatch.japi$CallbackBridge.apply(Future.scala:183)
> at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
> at 
> org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:83)
> at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
> at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
> at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:603)
> at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:126)
> at 
> scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
> at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)
> at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
> at 
> akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:329)
> at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:280)
> at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:284)
> at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:236)
> at java.lang.Thread.run(Thread.java:748)\nCaused by: 
> akka.pattern.AskTimeoutException: Ask timed out on 
> [Actor[akka://flink/user/dispatcher#105638574]] after [1 ms]. 
> Sender[null] sent message of type 
> \"org.apache.flink.runtime.rpc.messages.LocalFencedMessage\".
> at 
> akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:604)\n\t...
>  9 

[jira] [Comment Edited] (FLINK-11143) AskTimeoutException is thrown during job submission and completion

2020-03-04 Thread Bowen Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-11143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17051574#comment-17051574
 ] 

Bowen Li edited comment on FLINK-11143 at 3/4/20, 8:02 PM:
---

I tried on 1.10 and 1.11, and can confirm the issue still exists by running a 
local yarn cluster.

 

[~trohrmann] [~chesnay]  can you help to fix or triage this bug?


was (Author: phoenixjiangnan):
I tried on 1.10 and 1.11, and can confirm the issue still exists.

 

[~trohrmann] [~chesnay]  can you help to fix or triage this bug?

> AskTimeoutException is thrown during job submission and completion
> --
>
> Key: FLINK-11143
> URL: https://issues.apache.org/jira/browse/FLINK-11143
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.6.2, 1.10.0
>Reporter: Alex Vinnik
>Priority: Critical
>
> For more details please see the thread
> [http://mail-archives.apache.org/mod_mbox/flink-user/201812.mbox/%3cc2fb26f9-1410-4333-80f4-34807481b...@gmail.com%3E]
> On submission 
> 2018-12-12 02:28:31 ERROR JobsOverviewHandler:92 - Implementation error: 
> Unhandled exception.
>  akka.pattern.AskTimeoutException: Ask timed out on 
> [Actor[akka://flink/user/dispatcher#225683351|#225683351]] after [1 ms]. 
> Sender[null] sent message of type 
> "org.apache.flink.runtime.rpc.messages.LocalFencedMessage".
>  at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:604)
>  at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:126)
>  at 
> scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
>  at 
> scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)
>  at 
> scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
>  at 
> akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:329)
>  at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:280)
>  at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:284)
>  at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:236)
>  at java.lang.Thread.run(Thread.java:748)
>  
> On completion
>  
> {"errors":["Internal server error."," side:\njava.util.concurrent.CompletionException: 
> akka.pattern.AskTimeoutException: Ask timed out on 
> [Actor[akka://flink/user/dispatcher#105638574]] after [1 ms]. 
> Sender[null] sent message of type 
> \"org.apache.flink.runtime.rpc.messages.LocalFencedMessage\".
> at 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
> at 
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
> at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593)
> at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
> at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
> at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
> at 
> org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:772)
> at akka.dispatch.OnComplete.internal(Future.scala:258)
> at akka.dispatch.OnComplete.internal(Future.scala:256)
> at akka.dispatch.japi$CallbackBridge.apply(Future.scala:186)
> at akka.dispatch.japi$CallbackBridge.apply(Future.scala:183)
> at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
> at 
> org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:83)
> at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
> at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
> at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:603)
> at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:126)
> at 
> scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
> at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)
> at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
> at 
> akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:329)
> at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:280)
> at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:284)
> at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:236)
> at java.lang.Thread.run(Thread.java:748)\nCaused by: 
> akka.pattern.AskTimeoutException: Ask timed out on 
> [Actor[akka://flink/user/dispatcher#105638574]] after [1 ms]. 
> 

[jira] [Commented] (FLINK-11143) AskTimeoutException is thrown during job submission and completion

2020-03-04 Thread Bowen Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-11143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17051576#comment-17051576
 ] 

Bowen Li commented on FLINK-11143:
--

Increased the priority to 'Critical' to draw more attention.

> AskTimeoutException is thrown during job submission and completion
> --
>
> Key: FLINK-11143
> URL: https://issues.apache.org/jira/browse/FLINK-11143
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.6.2, 1.10.0
>Reporter: Alex Vinnik
>Priority: Critical
>
> For more details please see the thread
> [http://mail-archives.apache.org/mod_mbox/flink-user/201812.mbox/%3cc2fb26f9-1410-4333-80f4-34807481b...@gmail.com%3E]
> On submission 
> 2018-12-12 02:28:31 ERROR JobsOverviewHandler:92 - Implementation error: 
> Unhandled exception.
>  akka.pattern.AskTimeoutException: Ask timed out on 
> [Actor[akka://flink/user/dispatcher#225683351|#225683351]] after [1 ms]. 
> Sender[null] sent message of type 
> "org.apache.flink.runtime.rpc.messages.LocalFencedMessage".
>  at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:604)
>  at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:126)
>  at 
> scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
>  at 
> scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)
>  at 
> scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
>  at 
> akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:329)
>  at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:280)
>  at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:284)
>  at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:236)
>  at java.lang.Thread.run(Thread.java:748)
>  
> On completion
>  
> {"errors":["Internal server error."," side:\njava.util.concurrent.CompletionException: 
> akka.pattern.AskTimeoutException: Ask timed out on 
> [Actor[akka://flink/user/dispatcher#105638574]] after [1 ms]. 
> Sender[null] sent message of type 
> \"org.apache.flink.runtime.rpc.messages.LocalFencedMessage\".
> at 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
> at 
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
> at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593)
> at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
> at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
> at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
> at 
> org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:772)
> at akka.dispatch.OnComplete.internal(Future.scala:258)
> at akka.dispatch.OnComplete.internal(Future.scala:256)
> at akka.dispatch.japi$CallbackBridge.apply(Future.scala:186)
> at akka.dispatch.japi$CallbackBridge.apply(Future.scala:183)
> at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
> at 
> org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:83)
> at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
> at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
> at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:603)
> at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:126)
> at 
> scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
> at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)
> at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
> at 
> akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:329)
> at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:280)
> at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:284)
> at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:236)
> at java.lang.Thread.run(Thread.java:748)\nCaused by: 
> akka.pattern.AskTimeoutException: Ask timed out on 
> [Actor[akka://flink/user/dispatcher#105638574]] after [1 ms]. 
> Sender[null] sent message of type 
> \"org.apache.flink.runtime.rpc.messages.LocalFencedMessage\".
> at 
> akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:604)\n\t...
>  9 more\n\nEnd of exception on server side>"]}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
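The timeouts quoted in the stack traces above come from Akka's ask pattern inside Flink's RPC layer. A mitigation commonly reported by users (a workaround for slow dispatcher responses, not a fix for the underlying cause) is to raise the relevant timeouts in `flink-conf.yaml`; the values below are illustrative, not recommendations:

```yaml
# flink-conf.yaml -- illustrative values, tune for your cluster.
# Timeout for Akka ask calls between Flink RPC endpoints.
akka.ask.timeout: 60 s
# Timeout for asynchronous REST/web operations, in milliseconds.
web.timeout: 60000
```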

