[jira] [Commented] (FLINK-33594) Support BuiltInMethod TO_TIMESTAMP with timezone options

2023-11-20 Thread xy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17787915#comment-17787915
 ] 

xy commented on FLINK-33594:


If users want to use multiple time zones with TO_TIMESTAMP in one SQL statement, 
they currently cannot choose a time zone per call. In practice they fall back to 
a UDF, which is not friendly to platform developers. Adding a timezone argument 
avoids this and does not introduce any risk. 
[@MartijnVisser|https://github.com/MartijnVisser] As far as I know, many users 
hit this problem, and it has confused them for a long time.
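To make the request concrete, here is a minimal Java sketch of the semantics a three-argument TO_TIMESTAMP(value, format, zone) could have. The method name and behavior are illustrative assumptions, not Flink's actual implementation:

```java
import java.time.LocalDate;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class ToTimestampWithZone {
    // Hypothetical sketch: parse the string with the given pattern, then
    // attach the requested zone instead of implicitly assuming UTC.
    static ZonedDateTime toTimestamp(String value, String pattern, String zone) {
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern(pattern);
        // 'yyyy-MM-dd' has no time part, so start the day at midnight.
        return LocalDate.parse(value, fmt).atStartOfDay(ZoneId.of(zone));
    }

    public static void main(String[] args) {
        // 2023-08-10T00:00+08:00[Asia/Shanghai]
        System.out.println(toTimestamp("2023-08-10", "yyyy-MM-dd", "Asia/Shanghai"));
    }
}
```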

> Support BuiltInMethod TO_TIMESTAMP with timezone options
> 
>
> Key: FLINK-33594
> URL: https://issues.apache.org/jira/browse/FLINK-33594
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.8.4
>Reporter: xy
>Priority: Major
>  Labels: pull-request-available
>
> Support BuiltInMethod TO_TIMESTAMP with timezone options. TO_TIMESTAMP 
> currently only uses the UTC time zone, but many scenarios need a selectable 
> time zone, so a PR is needed to support it, e.g. 
> TO_TIMESTAMP('2023-08-10', 'yyyy-MM-dd', 'Asia/Shanghai').
> The same scenario in Presto, Trino, and StarRocks:
> SELECT timestamp '2012-10-31 01:00 UTC' AT TIME ZONE 'America/Los_Angeles';
> 2012-10-30 18:00:00.000 America/Los_Angeles
> So we may need this capability in TO_TIMESTAMP.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33594) Support BuiltInMethod TO_TIMESTAMP with timezone options

2023-11-19 Thread xy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xy updated FLINK-33594:
---
Description: 
Support BuiltInMethod TO_TIMESTAMP with timezone options. TO_TIMESTAMP 
currently only uses the UTC time zone, but many scenarios need a selectable 
time zone, so a PR is needed to support it, e.g. 
TO_TIMESTAMP('2023-08-10', 'yyyy-MM-dd', 'Asia/Shanghai').

The same scenario in Presto, Trino, and StarRocks:

SELECT timestamp '2012-10-31 01:00 UTC' AT TIME ZONE 'America/Los_Angeles';
2012-10-30 18:00:00.000 America/Los_Angeles

So we may need this capability in TO_TIMESTAMP.
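As a cross-check of the Presto/Trino result quoted above, the same conversion can be reproduced in plain Java. AT TIME ZONE keeps the instant and only changes the display zone:

```java
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class AtTimeZoneDemo {
    // AT TIME ZONE semantics: same instant, different display zone.
    static ZonedDateTime atTimeZone(ZonedDateTime ts, String zone) {
        return ts.withZoneSameInstant(ZoneId.of(zone));
    }

    public static void main(String[] args) {
        // 2012-10-31 01:00 UTC rendered in America/Los_Angeles (UTC-7, PDT)
        ZonedDateTime utc = ZonedDateTime.parse("2012-10-31T01:00:00Z");
        // 2012-10-30T18:00-07:00[America/Los_Angeles]
        System.out.println(atTimeZone(utc, "America/Los_Angeles"));
    }
}
```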

> Support BuiltInMethod TO_TIMESTAMP with timezone options
> 
>
> Key: FLINK-33594
> URL: https://issues.apache.org/jira/browse/FLINK-33594
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.8.4
>Reporter: xy
>Priority: Major
>
> Support BuiltInMethod TO_TIMESTAMP with timezone options. TO_TIMESTAMP 
> currently only uses the UTC time zone, but many scenarios need a selectable 
> time zone, so a PR is needed to support it, e.g. 
> TO_TIMESTAMP('2023-08-10', 'yyyy-MM-dd', 'Asia/Shanghai').
> The same scenario in Presto, Trino, and StarRocks:
> SELECT timestamp '2012-10-31 01:00 UTC' AT TIME ZONE 'America/Los_Angeles';
> 2012-10-30 18:00:00.000 America/Los_Angeles
> So we may need this capability in TO_TIMESTAMP.





[jira] [Created] (FLINK-33594) Support BuiltInMethod TO_TIMESTAMP with timezone options

2023-11-19 Thread xy (Jira)
xy created FLINK-33594:
--

 Summary: Support BuiltInMethod TO_TIMESTAMP with timezone options
 Key: FLINK-33594
 URL: https://issues.apache.org/jira/browse/FLINK-33594
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API
Affects Versions: 1.8.4
Reporter: xy








[jira] [Commented] (FLINK-32749) Sql gateway supports default catalog loaded by CatalogStore

2023-09-03 Thread xy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17761578#comment-17761578
 ] 

xy commented on FLINK-32749:


[~zjureel] Could I get your WeChat ID? There are some problems I would like to 
discuss with you. Best wishes.

> Sql gateway supports default catalog loaded by CatalogStore
> ---
>
> Key: FLINK-32749
> URL: https://issues.apache.org/jira/browse/FLINK-32749
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Gateway
>Affects Versions: 1.19.0
>Reporter: Fang Yong
>Assignee: Fang Yong
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Currently the SQL gateway will create a memory catalog as the default catalog; 
> it should support a default catalog loaded from the catalog store.





[jira] [Comment Edited] (FLINK-32871) Support BuiltInMethod TO_TIMESTAMP with timezone options

2023-08-21 Thread xy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17756690#comment-17756690
 ] 

xy edited comment on FLINK-32871 at 8/21/23 6:18 AM:
-

By engines I mean engines in the big data scope, such as Presto, Trino, 
StarRocks, and so on. These engines have functions to compute timestamps, though 
not named to_timestamp, and these engines support a timezone argument. 
Oracle/SQL Server may not be used in the big data scope, so I added the function 
with the argument; otherwise users can only use a UDF for it, which would 
increase maintenance cost. [~libenchao] 


was (Author: xuzifu):
engines I mean some engines in bigdata scope,such as presto、trino、starrocks and 
so on,these engine had function to compute timestamp but name is not 
to_timestamp,and support timezone argument. Oracle/SQL Server may not used in 
bigdata scope,so i add the function with it,otherwise user can only use udf for 
it,and it would increase maintenance cost [~libenchao] 

> Support BuiltInMethod TO_TIMESTAMP with timezone options
> 
>
> Key: FLINK-32871
> URL: https://issues.apache.org/jira/browse/FLINK-32871
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: xy
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Support BuiltInMethod TO_TIMESTAMP with timezone options. TO_TIMESTAMP 
> currently only uses the UTC time zone, but many scenarios need a selectable 
> time zone, so a PR is needed to support it, e.g. 
> TO_TIMESTAMP('2023-08-10', 'yyyy-MM-dd', 'Asia/Shanghai').
> The same scenario in Presto, Trino, and StarRocks:
> SELECT timestamp '2012-10-31 01:00 UTC' AT TIME ZONE 'America/Los_Angeles';
> 2012-10-30 18:00:00.000 America/Los_Angeles
> So we may need this capability in TO_TIMESTAMP.





[jira] [Comment Edited] (FLINK-32871) Support BuiltInMethod TO_TIMESTAMP with timezone options

2023-08-21 Thread xy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17756690#comment-17756690
 ] 

xy edited comment on FLINK-32871 at 8/21/23 6:16 AM:
-

By engines I mean engines in the big data scope, such as Presto, Trino, 
StarRocks, and so on. These engines have functions to compute timestamps, though 
not named to_timestamp, and they support a timezone argument. Oracle/SQL Server 
may not be used in the big data scope, so I added the function with the 
argument; otherwise users can only use a UDF for it, which would increase 
maintenance cost. [~libenchao] 


was (Author: xuzifu):
engines I mean some engines in bigdata scope,such as presto、trino、starrocks and 
so on,these engine had function to compute timestamp but name is not 
to_timestamp,and support timezone argument. Oracle/SQL Server is not used in 
bigdata scope,so i add the function with it,otherwise user can only use udf for 
it,and it would increase maintenance cost [~libenchao] 

> Support BuiltInMethod TO_TIMESTAMP with timezone options
> 
>
> Key: FLINK-32871
> URL: https://issues.apache.org/jira/browse/FLINK-32871
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: xy
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Support BuiltInMethod TO_TIMESTAMP with timezone options. TO_TIMESTAMP 
> currently only uses the UTC time zone, but many scenarios need a selectable 
> time zone, so a PR is needed to support it, e.g. 
> TO_TIMESTAMP('2023-08-10', 'yyyy-MM-dd', 'Asia/Shanghai').
> The same scenario in Presto, Trino, and StarRocks:
> SELECT timestamp '2012-10-31 01:00 UTC' AT TIME ZONE 'America/Los_Angeles';
> 2012-10-30 18:00:00.000 America/Los_Angeles
> So we may need this capability in TO_TIMESTAMP.





[jira] [Commented] (FLINK-32871) Support BuiltInMethod TO_TIMESTAMP with timezone options

2023-08-21 Thread xy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17756690#comment-17756690
 ] 

xy commented on FLINK-32871:


By engines I mean engines in the big data scope, such as Presto, Trino, 
StarRocks, and so on. These engines have functions to compute timestamps, though 
not named to_timestamp, and they support a timezone argument. Oracle/SQL Server 
is not used in the big data scope, so I added the function with the argument; 
otherwise users can only use a UDF for it, which would increase maintenance 
cost. [~libenchao] 

> Support BuiltInMethod TO_TIMESTAMP with timezone options
> 
>
> Key: FLINK-32871
> URL: https://issues.apache.org/jira/browse/FLINK-32871
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: xy
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Support BuiltInMethod TO_TIMESTAMP with timezone options. TO_TIMESTAMP 
> currently only uses the UTC time zone, but many scenarios need a selectable 
> time zone, so a PR is needed to support it, e.g. 
> TO_TIMESTAMP('2023-08-10', 'yyyy-MM-dd', 'Asia/Shanghai').
> The same scenario in Presto, Trino, and StarRocks:
> SELECT timestamp '2012-10-31 01:00 UTC' AT TIME ZONE 'America/Los_Angeles';
> 2012-10-30 18:00:00.000 America/Los_Angeles
> So we may need this capability in TO_TIMESTAMP.





[jira] [Commented] (FLINK-32871) Support BuiltInMethod TO_TIMESTAMP with timezone options

2023-08-20 Thread xy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17756663#comment-17756663
 ] 

xy commented on FLINK-32871:


Other engines have no function named to_timestamp, but they have functions 
similar to Flink's to_timestamp, so we support this function to stay 
compatible. [~libenchao] 

> Support BuiltInMethod TO_TIMESTAMP with timezone options
> 
>
> Key: FLINK-32871
> URL: https://issues.apache.org/jira/browse/FLINK-32871
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: xy
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Support BuiltInMethod TO_TIMESTAMP with timezone options. TO_TIMESTAMP 
> currently only uses the UTC time zone, but many scenarios need a selectable 
> time zone, so a PR is needed to support it, e.g. 
> TO_TIMESTAMP('2023-08-10', 'yyyy-MM-dd', 'Asia/Shanghai').
> The same scenario in Presto, Trino, and StarRocks:
> SELECT timestamp '2012-10-31 01:00 UTC' AT TIME ZONE 'America/Los_Angeles';
> 2012-10-30 18:00:00.000 America/Los_Angeles
> So we may need this capability in TO_TIMESTAMP.





[jira] [Updated] (FLINK-32871) Support BuiltInMethod TO_TIMESTAMP with timezone options

2023-08-20 Thread xy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xy updated FLINK-32871:
---
Description: 
Support BuiltInMethod TO_TIMESTAMP with timezone options. TO_TIMESTAMP 
currently only uses the UTC time zone, but many scenarios need a selectable 
time zone, so a PR is needed to support it, e.g. 
TO_TIMESTAMP('2023-08-10', 'yyyy-MM-dd', 'Asia/Shanghai').

The same scenario in Presto, Trino, and StarRocks:

SELECT timestamp '2012-10-31 01:00 UTC' AT TIME ZONE 'America/Los_Angeles';
2012-10-30 18:00:00.000 America/Los_Angeles

So we may need this capability in TO_TIMESTAMP.

  was:
Support BuiltInMethod TO_TIMESTAMP with timezone options,TO_TIMESTAMP now only 
use utc timezone,but many scenarios we need timzone to choose,so need a pr to 
support it as TO_TIMESTAMP('2023-08-10', '-MM-dd', 'Asia/Shanghai')

this scenario in presto,starrocks,trino:

as presto,trino,starrocks:
SELECT timestamp '2012-10-31 01:00 UTC' AT TIME ZONE 'America/Los_Angeles';
2012-10-30 18:00:00.000 America/Los_Angeles

so we maybe need this function in to_timestamps


> Support BuiltInMethod TO_TIMESTAMP with timezone options
> 
>
> Key: FLINK-32871
> URL: https://issues.apache.org/jira/browse/FLINK-32871
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: xy
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Support BuiltInMethod TO_TIMESTAMP with timezone options. TO_TIMESTAMP 
> currently only uses the UTC time zone, but many scenarios need a selectable 
> time zone, so a PR is needed to support it, e.g. 
> TO_TIMESTAMP('2023-08-10', 'yyyy-MM-dd', 'Asia/Shanghai').
> The same scenario in Presto, Trino, and StarRocks:
> SELECT timestamp '2012-10-31 01:00 UTC' AT TIME ZONE 'America/Los_Angeles';
> 2012-10-30 18:00:00.000 America/Los_Angeles
> So we may need this capability in TO_TIMESTAMP.





[jira] [Updated] (FLINK-32871) Support BuiltInMethod TO_TIMESTAMP with timezone options

2023-08-20 Thread xy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xy updated FLINK-32871:
---
Description: 
Support BuiltInMethod TO_TIMESTAMP with timezone options. TO_TIMESTAMP 
currently only uses the UTC time zone, but many scenarios need a selectable 
time zone, so a PR is needed to support it, e.g. 
TO_TIMESTAMP('2023-08-10', 'yyyy-MM-dd', 'Asia/Shanghai').

The same scenario in Presto, Trino, and StarRocks:

SELECT timestamp '2012-10-31 01:00 UTC' AT TIME ZONE 'America/Los_Angeles';
2012-10-30 18:00:00.000 America/Los_Angeles

So we may need this capability in TO_TIMESTAMP.

  was:Support BuiltInMethod TO_TIMESTAMP with timezone options,TO_TIMESTAMP now 
only use utc timezone,but many scenarios we need timzone to choose,so need a pr 
to support it as TO_TIMESTAMP('2023-08-10', '-MM-dd', 'Asia/Shanghai')


> Support BuiltInMethod TO_TIMESTAMP with timezone options
> 
>
> Key: FLINK-32871
> URL: https://issues.apache.org/jira/browse/FLINK-32871
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: xy
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Support BuiltInMethod TO_TIMESTAMP with timezone options. TO_TIMESTAMP 
> currently only uses the UTC time zone, but many scenarios need a selectable 
> time zone, so a PR is needed to support it, e.g. 
> TO_TIMESTAMP('2023-08-10', 'yyyy-MM-dd', 'Asia/Shanghai').
> The same scenario in Presto, Trino, and StarRocks:
> SELECT timestamp '2012-10-31 01:00 UTC' AT TIME ZONE 'America/Los_Angeles';
> 2012-10-30 18:00:00.000 America/Los_Angeles
> So we may need this capability in TO_TIMESTAMP.





[jira] [Commented] (FLINK-32871) Support BuiltInMethod TO_TIMESTAMP with timezone options

2023-08-15 Thread xy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17754601#comment-17754601
 ] 

xy commented on FLINK-32871:


cc [~twalthr] Could you take a look and review, please?

> Support BuiltInMethod TO_TIMESTAMP with timezone options
> 
>
> Key: FLINK-32871
> URL: https://issues.apache.org/jira/browse/FLINK-32871
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: xy
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Support BuiltInMethod TO_TIMESTAMP with timezone options. TO_TIMESTAMP 
> currently only uses the UTC time zone, but many scenarios need a selectable 
> time zone, so a PR is needed to support it, e.g. 
> TO_TIMESTAMP('2023-08-10', 'yyyy-MM-dd', 'Asia/Shanghai').





[jira] [Created] (FLINK-32871) Support BuiltInMethod TO_TIMESTAMP with timezone options

2023-08-15 Thread xy (Jira)
xy created FLINK-32871:
--

 Summary: Support BuiltInMethod TO_TIMESTAMP with timezone options
 Key: FLINK-32871
 URL: https://issues.apache.org/jira/browse/FLINK-32871
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API
Reporter: xy
 Fix For: 1.19.0


Support BuiltInMethod TO_TIMESTAMP with timezone options. TO_TIMESTAMP 
currently only uses the UTC time zone, but many scenarios need a selectable 
time zone, so a PR is needed to support it, e.g. 
TO_TIMESTAMP('2023-08-10', 'yyyy-MM-dd', 'Asia/Shanghai').





[jira] [Commented] (FLINK-32750) Fix resources not closed in test cases

2023-08-07 Thread xy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17751575#comment-17751575
 ] 

xy commented on FLINK-32750:


[~mapohl] The PR has been opened again.

> Fix resources not closed in test cases
> 
>
> Key: FLINK-32750
> URL: https://issues.apache.org/jira/browse/FLINK-32750
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Reporter: xy
>Priority: Major
>  Labels: pull-request-available
>
> Fix resources not closed in test cases: some test cases did not close 
> resources in the right way, which can cause connection leaks in extreme 
> scenarios, so a PR is needed to fix it.





[jira] [Updated] (FLINK-32750) Fix resources not closed in test cases

2023-08-07 Thread xy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xy updated FLINK-32750:
---
Description: Fix resources not closed in test cases: some test cases did not 
close resources in the right way, which can cause connection leaks in extreme 
scenarios, so a PR is needed to fix it.  (was: fix resouces not fix in testcase)
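A minimal sketch of the cleanup pattern such test fixes usually move toward: try-with-resources guarantees close() even when the body throws, avoiding the leaks the issue describes. The class and file names here are illustrative, not taken from the actual Flink test code:

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ResourceCleanupDemo {
    // Write one line, closing the writer deterministically. Without the
    // try-with-resources block, an exception mid-write would leak the handle.
    static void writeOnce(Path file, String line) throws IOException {
        try (BufferedWriter writer = Files.newBufferedWriter(file)) {
            writer.write(line);
            writer.newLine();
        } // writer.close() runs here automatically, even on exception
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".txt");
        writeOnce(tmp, "hello");
        System.out.println(Files.readAllLines(tmp));
        Files.deleteIfExists(tmp);
    }
}
```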

> Fix resources not closed in test cases
> 
>
> Key: FLINK-32750
> URL: https://issues.apache.org/jira/browse/FLINK-32750
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Reporter: xy
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Fix resources not closed in test cases: some test cases did not close 
> resources in the right way, which can cause connection leaks in extreme 
> scenarios, so a PR is needed to fix it.





[jira] [Commented] (FLINK-32750) Fix resources not closed in test cases

2023-08-07 Thread xy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17751536#comment-17751536
 ] 

xy commented on FLINK-32750:


[~mapohl] OK, I will describe it in more detail, and you can remove the 
{{fixVersion}} in the meantime. Thanks.

> Fix resources not closed in test cases
> 
>
> Key: FLINK-32750
> URL: https://issues.apache.org/jira/browse/FLINK-32750
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Reporter: xy
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Fix resources not closed in test cases





[jira] [Created] (FLINK-32750) Fix resources not closed in test cases

2023-08-04 Thread xy (Jira)
xy created FLINK-32750:
--

 Summary: Fix resources not closed in test cases
 Key: FLINK-32750
 URL: https://issues.apache.org/jira/browse/FLINK-32750
 Project: Flink
  Issue Type: Bug
  Components: Tests
Reporter: xy
 Fix For: 1.19.0


Fix resources not closed in test cases





[jira] [Closed] (FLINK-32748) WriteSinkFunction::cleanFile needs to close the writer automatically

2023-08-03 Thread xy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xy closed FLINK-32748.
--
Resolution: Incomplete

> WriteSinkFunction::cleanFile needs to close the writer automatically
> -
>
> Key: FLINK-32748
> URL: https://issues.apache.org/jira/browse/FLINK-32748
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Reporter: xy
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.4
>
>
> WriteSinkFunction::cleanFile needs to close the writer automatically





[jira] [Updated] (FLINK-32748) WriteSinkFunction::cleanFile needs to close the writer automatically

2023-08-03 Thread xy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xy updated FLINK-32748:
---
Component/s: API / Core

> WriteSinkFunction::cleanFile needs to close the writer automatically
> -
>
> Key: FLINK-32748
> URL: https://issues.apache.org/jira/browse/FLINK-32748
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Reporter: xy
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.4
>
>
> WriteSinkFunction::cleanFile needs to close the writer automatically





[jira] [Created] (FLINK-32748) WriteSinkFunction::cleanFile needs to close the writer automatically

2023-08-03 Thread xy (Jira)
xy created FLINK-32748:
--

 Summary: WriteSinkFunction::cleanFile needs to close the writer automatically
 Key: FLINK-32748
 URL: https://issues.apache.org/jira/browse/FLINK-32748
 Project: Flink
  Issue Type: Bug
Reporter: xy
 Fix For: 1.9.4


WriteSinkFunction::cleanFile needs to close the writer automatically





[jira] [Commented] (FLINK-21250) Failure to finalize checkpoint due to org.apache.hadoop.fs.FileAlreadyExistsException: /user/flink/checkpoints/9e36ed4ec2f6685f836e5ee5395f5f2e/chk-11096/_metadata fo

2023-02-23 Thread xy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17692580#comment-17692580
 ] 

xy commented on FLINK-21250:


[~jiangjiguang0719] Can you tell me the details of your problem? We can discuss 
it over chat.

>  Failure to finalize checkpoint due to 
> org.apache.hadoop.fs.FileAlreadyExistsException: 
> /user/flink/checkpoints/9e36ed4ec2f6685f836e5ee5395f5f2e/chk-11096/_metadata 
> for client xxx.xx.xx.xxx already exists
> 
>
> Key: FLINK-21250
> URL: https://issues.apache.org/jira/browse/FLINK-21250
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.10.1
>Reporter: Fangliang Liu
>Priority: Major
> Attachments: image-2021-02-03-19-39-19-611.png, 
> image-2021-02-04-11-33-09-549.png
>
>
> Flink Version :1.10.1
> The following exception will occasionally be thrown when the flink job is 
> running on yarn. Checkpoint will always fail after the first exception is 
> thrown.
> {code:java}
> level:WARNlocation:org.apache.flink.runtime.scheduler.SchedulerBase.lambda$acknowledgeCheckpoint$4(SchedulerBase.java:796)log:2021-02-03
>  15:39:03,447 bigbase_kafka_to_hive_push_push_user_date_active WARN  
> org.apache.flink.runtime.jobmaster.JobMaster  - Error while 
> processing checkpoint acknowledgement message
> message:Error while processing checkpoint acknowledgement 
> messagethread:jobmanager-future-thread-34throwable:org.apache.flink.runtime.checkpoint.CheckpointException:
>  Could not finalize the pending checkpoint 11096. Failure reason: Failure to 
> finalize checkpoint.
>   at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.completePendingCheckpoint(CheckpointCoordinator.java:863)
>   at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.receiveAcknowledgeMessage(CheckpointCoordinator.java:781)
>   at 
> org.apache.flink.runtime.scheduler.SchedulerBase.lambda$acknowledgeCheckpoint$4(SchedulerBase.java:794)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.hadoop.fs.FileAlreadyExistsException: 
> /user/flink/checkpoints/140ba889671225dea822a4e1f569379a/chk-11096/_metadata 
> for client 172.xx.xx.xxx already exists
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:3021)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2908)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2792)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:615)
>   at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.create(AuthorizationProviderProxyClientProtocol.java:117)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:413)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2278)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2274)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2272)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at 

[jira] [Updated] (FLINK-31106) Skip history server archiving for suspended jobs on JsonResponseHistoryServerArchivist

2023-02-21 Thread xy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xy updated FLINK-31106:
---
Description: 
JsonResponseHistoryServerArchivist would archive data to the FileSystem even 
when the terminal state is not GLOBALLY terminal, causing a 
FileAlreadyExistsException like 
https://issues.apache.org/jira/browse/FLINK-24232

The exception is as follows:

INFO org.apache.flink.runtime.dispatcher.MiniDispatcher [] - Could not archive 
completed job 
ctdb_dw_push_ivideo_send_link_monitor_hi_prd(70f90a6c7bb2490d203f6c0d1818708d) 
to the history server. java.util.concurrent.CompletionException: 
java.lang.RuntimeException: org.apache.hadoop.fs.FileAlreadyExistsException: 
/flink/completed-jobs/70f90a6c7bb2490d203f6c0d1818708d for client 
xxx.xxx.xxx.xxx already exists at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2967)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2856)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2741)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:620)
 at 
org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.create(AuthorizationProviderProxyClientProtocol.java:115)
 at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:412)
 at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at 
org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2281) at 
org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2277) at 
java.security.AccessController.doPrivileged(Native Method) at 
javax.security.auth.Subject.doAs(Subject.java:422) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2275) at 
java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
 ~[?:1.8.0_192] at 
java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
 [?:1.8.0_192] at 
java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1629)
 [?:1.8.0_192] at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[?:1.8.0_192] at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[?:1.8.0_192] at java.lang.Thread.run(Thread.java:748) [?:1.8.0_192] Caused by: 
java.lang.RuntimeException: org.apache.hadoop.fs.FileAlreadyExistsException: 
/flink/completed-jobs/70f90a6c7bb2490d203f6c0d1818708d for client 10.194.100.17 
already exists at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2967)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2856)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2741)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:620)
 at 
org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.create(AuthorizationProviderProxyClientProtocol.java:115)
 at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:412)
 at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at 
org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2281) at 
org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2277) at 
java.security.AccessController.doPrivileged(Native Method) at 
javax.security.auth.Subject.doAs(Subject.java:422) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2275) at 
org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:316) 
~[flink-dist_2.11-1.13.2.vivo-SNAPSHOT.jar:1.13.2.vivo-SNAPSHOT] at 
org.apache.flink.util.function.ThrowingRunnable.lambda$unchecked$0(ThrowingRunnable.java:51)
 ~[flink-dist_2.11-1.13.2.vivo-SNAPSHOT.jar:1.13.2.vivo-SNAPSHOT] at 
java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1626)
 ~[?:1.8.0_192] ... 3 more Caused by: 
org.apache.hadoop.fs.FileAlreadyExistsException: 
/flink/completed-jobs/70f90a6c7bb2490d203f6c0d1818708d for client 
xxx.xxx.xxx.xxx already exists at 

[jira] [Commented] (FLINK-31106) Skip history server archiving for suspended jobs on JsonResponseHistoryServerArchivist

2023-02-16 Thread xy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17689784#comment-17689784
 ] 

xy commented on FLINK-31106:


OK, I got it. In the title I had noticed FLINK-24232, and now I confirm it. 
Thanks [~chesnay] 
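A hedged sketch of the guard the fix implies: archive only when the job reached a globally terminal state, so a suspended job (which may be resumed and archived again later) is skipped and cannot collide with an existing archive file. All names here are illustrative assumptions, not Flink's actual API:

```java
public class ArchiveGuardDemo {
    // Illustrative job states; Flink's real JobStatus enum is richer.
    enum JobStatus {
        RUNNING, SUSPENDED, FINISHED, FAILED, CANCELED;

        boolean isGloballyTerminal() {
            return this == FINISHED || this == FAILED || this == CANCELED;
        }
    }

    // Skip archiving unless the state is globally terminal: a SUSPENDED job
    // is only locally terminal and would be re-archived on recovery.
    static boolean shouldArchive(JobStatus status) {
        return status.isGloballyTerminal();
    }

    public static void main(String[] args) {
        System.out.println(shouldArchive(JobStatus.SUSPENDED)); // false
        System.out.println(shouldArchive(JobStatus.FINISHED));  // true
    }
}
```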

> Skip history server archiving for suspended jobs on 
> JsonResponseHistoryServerArchivist
> --
>
> Key: FLINK-31106
> URL: https://issues.apache.org/jira/browse/FLINK-31106
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.14.5, 1.15.1, 1.16.1
>Reporter: xy
>Priority: Major
>  Labels: pull-request-available
>
> JsonResponseHistoryServerArchivist archives data to the FileSystem even when 
> the terminal state is not globally terminal, causing a FileAlreadyExistsException like 
> https://issues.apache.org/jira/browse/FLINK-24232
> exception as:
> INFO org.apache.flink.runtime.dispatcher.MiniDispatcher [] - Could not 
> archive completed job 
> ctdb_dw_push_ivideo_send_link_monitor_hi_prd(70f90a6c7bb2490d203f6c0d1818708d)
>  to the history server. java.util.concurrent.CompletionException: 
> java.lang.RuntimeException: org.apache.hadoop.fs.FileAlreadyExistsException: 
> /flink/completed-jobs/70f90a6c7bb2490d203f6c0d1818708d for client 
> 10.194.100.17 already exists at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2967)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2856)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2741)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:620)
>  at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.create(AuthorizationProviderProxyClientProtocol.java:115)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:412)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at 
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2281) at 
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2277) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2275) at 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
>  ~[?:1.8.0_192] at 
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
>  [?:1.8.0_192] at 
> java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1629)
>  [?:1.8.0_192] at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_192] at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_192] at java.lang.Thread.run(Thread.java:748) [?:1.8.0_192] Caused 
> by: java.lang.RuntimeException: 
> org.apache.hadoop.fs.FileAlreadyExistsException: 
> /flink/completed-jobs/70f90a6c7bb2490d203f6c0d1818708d for client 
> 10.194.100.17 already exists at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2967)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2856)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2741)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:620)
>  at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.create(AuthorizationProviderProxyClientProtocol.java:115)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:412)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at 
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2281) at 
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2277) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> 

[jira] (FLINK-31106) Skip history server archiving for suspended jobs on JsonResponseHistoryServerArchivist

2023-02-16 Thread xy (Jira)


[ https://issues.apache.org/jira/browse/FLINK-31106 ]


xy deleted comment on FLINK-31106:


was (Author: xuzifu):
[~chesnay] but this error occurs with Flink on YARN; the method stack is from 
JsonResponseHistoryServerArchivist::archiveExecutionGraph. The details are in my 
error log.

> Skip history server archiving for suspended jobs on 
> JsonResponseHistoryServerArchivist
> --
>
> Key: FLINK-31106
> URL: https://issues.apache.org/jira/browse/FLINK-31106
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.14.5, 1.15.1, 1.16.1
>Reporter: xy
>Priority: Major
>  Labels: pull-request-available
>
> JsonResponseHistoryServerArchivist archives data to the FileSystem even when 
> the terminal state is not globally terminal, causing a FileAlreadyExistsException like 
> https://issues.apache.org/jira/browse/FLINK-24232
> exception as:
> INFO org.apache.flink.runtime.dispatcher.MiniDispatcher [] - Could not 
> archive completed job 
> ctdb_dw_push_ivideo_send_link_monitor_hi_prd(70f90a6c7bb2490d203f6c0d1818708d)
>  to the history server. java.util.concurrent.CompletionException: 
> java.lang.RuntimeException: org.apache.hadoop.fs.FileAlreadyExistsException: 
> /flink/completed-jobs/70f90a6c7bb2490d203f6c0d1818708d for client 
> 10.194.100.17 already exists at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2967)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2856)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2741)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:620)
>  at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.create(AuthorizationProviderProxyClientProtocol.java:115)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:412)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at 
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2281) at 
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2277) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2275) at 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
>  ~[?:1.8.0_192] at 
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
>  [?:1.8.0_192] at 
> java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1629)
>  [?:1.8.0_192] at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_192] at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_192] at java.lang.Thread.run(Thread.java:748) [?:1.8.0_192] Caused 
> by: java.lang.RuntimeException: 
> org.apache.hadoop.fs.FileAlreadyExistsException: 
> /flink/completed-jobs/70f90a6c7bb2490d203f6c0d1818708d for client 
> 10.194.100.17 already exists at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2967)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2856)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2741)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:620)
>  at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.create(AuthorizationProviderProxyClientProtocol.java:115)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:412)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at 
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2281) at 
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2277) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> 

[jira] [Commented] (FLINK-31106) Skip history server archiving for suspended jobs on JsonResponseHistoryServerArchivist

2023-02-16 Thread xy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17689781#comment-17689781
 ] 

xy commented on FLINK-31106:


[~chesnay] but this error occurs with Flink on YARN; the method stack is from 
JsonResponseHistoryServerArchivist::archiveExecutionGraph. The details are in my 
error log.

> Skip history server archiving for suspended jobs on 
> JsonResponseHistoryServerArchivist
> --
>
> Key: FLINK-31106
> URL: https://issues.apache.org/jira/browse/FLINK-31106
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.14.5, 1.15.1, 1.16.1
>Reporter: xy
>Priority: Major
>  Labels: pull-request-available
>
> JsonResponseHistoryServerArchivist archives data to the FileSystem even when 
> the terminal state is not globally terminal, causing a FileAlreadyExistsException like 
> https://issues.apache.org/jira/browse/FLINK-24232
> exception as:
> INFO org.apache.flink.runtime.dispatcher.MiniDispatcher [] - Could not 
> archive completed job 
> ctdb_dw_push_ivideo_send_link_monitor_hi_prd(70f90a6c7bb2490d203f6c0d1818708d)
>  to the history server. java.util.concurrent.CompletionException: 
> java.lang.RuntimeException: org.apache.hadoop.fs.FileAlreadyExistsException: 
> /flink/completed-jobs/70f90a6c7bb2490d203f6c0d1818708d for client 
> 10.194.100.17 already exists at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2967)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2856)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2741)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:620)
>  at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.create(AuthorizationProviderProxyClientProtocol.java:115)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:412)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at 
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2281) at 
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2277) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2275) at 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
>  ~[?:1.8.0_192] at 
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
>  [?:1.8.0_192] at 
> java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1629)
>  [?:1.8.0_192] at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_192] at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_192] at java.lang.Thread.run(Thread.java:748) [?:1.8.0_192] Caused 
> by: java.lang.RuntimeException: 
> org.apache.hadoop.fs.FileAlreadyExistsException: 
> /flink/completed-jobs/70f90a6c7bb2490d203f6c0d1818708d for client 
> 10.194.100.17 already exists at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2967)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2856)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2741)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:620)
>  at 
> org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.create(AuthorizationProviderProxyClientProtocol.java:115)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:412)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at 
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2281) at 
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2277) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> 

[jira] [Updated] (FLINK-31106) Skip history server archiving for suspended jobs on JsonResponseHistoryServerArchivist

2023-02-16 Thread xy (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xy updated FLINK-31106:
---
Description: 
JsonResponseHistoryServerArchivist archives data to the FileSystem even when 
the terminal state is not globally terminal, causing a FileAlreadyExistsException like 
https://issues.apache.org/jira/browse/FLINK-24232

exception as:

INFO org.apache.flink.runtime.dispatcher.MiniDispatcher [] - Could not archive 
completed job 
ctdb_dw_push_ivideo_send_link_monitor_hi_prd(70f90a6c7bb2490d203f6c0d1818708d) 
to the history server. java.util.concurrent.CompletionException: 
java.lang.RuntimeException: org.apache.hadoop.fs.FileAlreadyExistsException: 
/flink/completed-jobs/70f90a6c7bb2490d203f6c0d1818708d for client 10.194.100.17 
already exists at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2967)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2856)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2741)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:620)
 at 
org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.create(AuthorizationProviderProxyClientProtocol.java:115)
 at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:412)
 at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at 
org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2281) at 
org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2277) at 
java.security.AccessController.doPrivileged(Native Method) at 
javax.security.auth.Subject.doAs(Subject.java:422) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2275) at 
java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
 ~[?:1.8.0_192] at 
java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
 [?:1.8.0_192] at 
java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1629)
 [?:1.8.0_192] at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[?:1.8.0_192] at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[?:1.8.0_192] at java.lang.Thread.run(Thread.java:748) [?:1.8.0_192] Caused by: 
java.lang.RuntimeException: org.apache.hadoop.fs.FileAlreadyExistsException: 
/flink/completed-jobs/70f90a6c7bb2490d203f6c0d1818708d for client 10.194.100.17 
already exists at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2967)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2856)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2741)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:620)
 at 
org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.create(AuthorizationProviderProxyClientProtocol.java:115)
 at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:412)
 at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073) at 
org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2281) at 
org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2277) at 
java.security.AccessController.doPrivileged(Native Method) at 
javax.security.auth.Subject.doAs(Subject.java:422) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2275) at 
org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:316) 
~[flink-dist_2.11-1.13.2.vivo-SNAPSHOT.jar:1.13.2.vivo-SNAPSHOT] at 
org.apache.flink.util.function.ThrowingRunnable.lambda$unchecked$0(ThrowingRunnable.java:51)
 ~[flink-dist_2.11-1.13.2.vivo-SNAPSHOT.jar:1.13.2.vivo-SNAPSHOT] at 
java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1626)
 ~[?:1.8.0_192] ... 3 more Caused by: 
org.apache.hadoop.fs.FileAlreadyExistsException: 
/flink/completed-jobs/70f90a6c7bb2490d203f6c0d1818708d for client 10.194.100.17 
already exists at 

[jira] [Created] (FLINK-31106) Skip history server archiving for suspended jobs on JsonResponseHistoryServerArchivist

2023-02-16 Thread xy (Jira)
xy created FLINK-31106:
--

 Summary: Skip history server archiving for suspended jobs on 
JsonResponseHistoryServerArchivist
 Key: FLINK-31106
 URL: https://issues.apache.org/jira/browse/FLINK-31106
 Project: Flink
  Issue Type: Bug
  Components: Runtime / State Backends
Affects Versions: 1.16.1, 1.15.1, 1.14.5
Reporter: xy
 Fix For: 1.7.3


JsonResponseHistoryServerArchivist archives data to the FileSystem even when 
the terminal state is not globally terminal, causing a FileAlreadyExistsException like 
https://issues.apache.org/jira/browse/FLINK-24232
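The proposed fix boils down to archiving only when the job reached a globally-terminal state. A minimal sketch of that guard, using a simplified stand-in JobStatus enum and class names chosen for illustration, not Flink's actual implementation:

```java
// Simplified stand-in for org.apache.flink.api.common.JobStatus; not the real class.
enum JobStatus { FINISHED, FAILED, CANCELED, SUSPENDED }

final class ArchivingGuard {
    // SUSPENDED is only locally terminal (the job may be resumed by another
    // JobManager), so archiving it can race with a later attempt writing the
    // same job ID and hit FileAlreadyExistsException.
    static boolean isGloballyTerminal(JobStatus status) {
        return status == JobStatus.FINISHED
                || status == JobStatus.FAILED
                || status == JobStatus.CANCELED;
    }
}
```

With such a guard in place, the archivist would skip suspended jobs instead of writing a second archive for the same job ID.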



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31035) add warn info to user when NoNodeException happend

2023-02-13 Thread xy (Jira)
xy created FLINK-31035:
--

 Summary: add warn info to user when NoNodeException happend
 Key: FLINK-31035
 URL: https://issues.apache.org/jira/browse/FLINK-31035
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Queryable State
Affects Versions: 1.8.4
Reporter: xy


When a KeeperException.NoNodeException happens in 
ZooKeeperStateHandleStore::getAllHandles, we need log output to retrace the failure.
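A hedged sketch of the requested improvement: catch the missing-node case and emit a warning so users can retrace it. NoNodeException and HandleStore below are local stand-ins for illustration, not the actual ZooKeeper or Flink classes:

```java
import java.util.Collections;
import java.util.List;
import java.util.concurrent.Callable;

// Local stand-in for org.apache.zookeeper.KeeperException.NoNodeException.
class NoNodeException extends Exception {}

final class HandleStore {
    // Mirrors the shape of ZooKeeperStateHandleStore::getAllHandles: when the
    // node is missing, warn and return an empty list instead of failing silently.
    static List<String> getAllHandles(Callable<List<String>> reader) {
        try {
            return reader.call();
        } catch (NoNodeException e) {
            // The warning the ticket asks for: leave a trace for the user.
            System.err.println("WARN: ZooKeeper node does not exist; returning no handles");
            return Collections.emptyList();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```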





[jira] [Commented] (FLINK-30938) Release Testing: Verify FLINK-29766 Adaptive Batch Scheduler should also work with hybrid shuffle mode

2023-02-08 Thread xy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17685853#comment-17685853
 ] 

xy commented on FLINK-30938:


OK, thanks [~Weijie Guo] 

> Release Testing: Verify FLINK-29766 Adaptive Batch Scheduler should also work 
> with hybrid shuffle mode
> --
>
> Key: FLINK-30938
> URL: https://issues.apache.org/jira/browse/FLINK-30938
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.17.0
>Reporter: Weijie Guo
>Assignee: xy
>Priority: Blocker
>  Labels: release-testing
> Attachments: testAdaptiveBatchJob, testSpeculativeExecution
>
>
> The document has not been completed; this testing work should start after 
> FLINK-30860 is completed.
> This ticket aims to verify FLINK-29766: Adaptive Batch Scheduler should 
> also work with hybrid shuffle mode.
> More details about this feature and how to use it can be found in this 
> [documentation|xxx].
> The verification is divided into two parts:
> Part I: Verify hybrid shuffle can work with AdaptiveBatchScheduler
> Write a simple Flink batch job using hybrid shuffle mode and submit this job. 
> Note that in Flink 1.17, AdaptiveBatchScheduler is the default scheduler for 
> batch jobs, so no other configuration is needed.
> Suppose your job's topology is source -> map -> sink; if your cluster has 
> enough slots, you should find that source and map are running at the same 
> time.
> Part II: Verify hybrid shuffle can work with Speculative Execution
> Write a Flink batch job using hybrid shuffle mode in which one subtask runs 
> much slower than the others (e.g. sleep indefinitely if it runs on a certain 
> host, where the hostname can be retrieved via 
> InetAddress.getLocalHost().getHostName(), or if its (subtaskIndex + 
> attemptNumber) % 2 == 0).
> Modify the Flink configuration file to enable speculative execution and tune 
> the configuration as you like.
> Submit the job. Check the web UI, logs, metrics, and the produced result.
> You should find in the logs that once one subtask of a producer task 
> finishes, all its consumer tasks can be scheduled.



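The "slow subtask" trick in the test plan above can be sketched as a small predicate. The class and method names are illustrative; only the (subtaskIndex + attemptNumber) % 2 == 0 condition comes from the ticket:

```java
final class SpeculationTestHelper {
    // An attempt is made artificially slow when the sum of its subtask index
    // and attempt number is even; the speculative attempt has a different
    // attempt number, which flips the predicate so it can finish first.
    static boolean shouldSleepIndefinitely(int subtaskIndex, int attemptNumber) {
        return (subtaskIndex + attemptNumber) % 2 == 0;
    }
}
```

Wiring this predicate into a map function (sleeping when it returns true) gives the speculative scheduler a reliably slow attempt to detect.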


[jira] [Commented] (FLINK-30938) Release Testing: Verify FLINK-29766 Adaptive Batch Scheduler should also work with hybrid shuffle mode

2023-02-08 Thread xy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17685785#comment-17685785
 ] 

xy commented on FLINK-30938:


I will take the Jira, [~Weijie Guo] 

> Release Testing: Verify FLINK-29766 Adaptive Batch Scheduler should also work 
> with hybrid shuffle mode
> --
>
> Key: FLINK-30938
> URL: https://issues.apache.org/jira/browse/FLINK-30938
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.17.0
>Reporter: Weijie Guo
>Priority: Blocker
>  Labels: release-testing
>
> The document has not been completed; this testing work should start after 
> FLINK-30860 is completed.
> This ticket aims to verify FLINK-29766: Adaptive Batch Scheduler should 
> also work with hybrid shuffle mode.
> More details about this feature and how to use it can be found in this 
> [documentation|xxx].
> The verification is divided into two parts:
> Part I: Verify hybrid shuffle can work with AdaptiveBatchScheduler
> Write a simple Flink batch job using hybrid shuffle mode and submit this job. 
> Note that in Flink 1.17, AdaptiveBatchScheduler is the default scheduler for 
> batch jobs, so no other configuration is needed.
> Suppose your job's topology is source -> map -> sink; if your cluster has 
> enough slots, you should find that source and map are running at the same 
> time.
> Part II: Verify hybrid shuffle can work with Speculative Execution
> Write a Flink batch job using hybrid shuffle mode in which one subtask runs 
> much slower than the others (e.g. sleep indefinitely if it runs on a certain 
> host, where the hostname can be retrieved via 
> InetAddress.getLocalHost().getHostName(), or if its (subtaskIndex + 
> attemptNumber) % 2 == 0).
> Modify the Flink configuration file to enable speculative execution and tune 
> the configuration as you like.
> Submit the job. Check the web UI, logs, metrics, and the produced result.
> You should find in the logs that once one subtask of a producer task 
> finishes, all its consumer tasks can be scheduled.





[jira] [Created] (FLINK-18597) elasticsearch sql insert function cannot set documentid

2020-07-14 Thread xy (Jira)
xy created FLINK-18597:
--

 Summary: elasticsearch sql insert function cannot set documentid
 Key: FLINK-18597
 URL: https://issues.apache.org/jira/browse/FLINK-18597
 Project: Flink
  Issue Type: Test
Affects Versions: 1.11.0
Reporter: xy






--
This message was sent by Atlassian Jira
(v8.3.4#803005)