[jira] [Commented] (FLINK-18356) Exit code 137 returned from process

2022-01-02 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17467835#comment-17467835
 ] 

Martijn Visser commented on FLINK-18356:


[~gaoyunhaii] [~trohrmann] I've understood that {{test_cron_azure table}} 
passed on local CI when [~twalthr] reverted the commit from 
https://issues.apache.org/jira/browse/FLINK-25085

> Exit code 137 returned from process
> ---
>
> Key: FLINK-18356
> URL: https://issues.apache.org/jira/browse/FLINK-18356
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Tests
>Affects Versions: 1.12.0, 1.13.0, 1.14.0, 1.15.0
>Reporter: Piotr Nowojski
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.15.0
>
>
> {noformat}
> = test session starts 
> ==
> platform linux -- Python 3.7.3, pytest-5.4.3, py-1.8.2, pluggy-0.13.1
> cachedir: .tox/py37-cython/.pytest_cache
> rootdir: /__w/3/s/flink-python
> collected 568 items
> pyflink/common/tests/test_configuration.py ..[  
> 1%]
> pyflink/common/tests/test_execution_config.py ...[  
> 5%]
> pyflink/dataset/tests/test_execution_environment.py .
> ##[error]Exit code 137 returned from process: file name '/bin/docker', 
> arguments 'exec -i -u 1002 
> 97fc4e22522d2ced1f4d23096b8929045d083dd0a99a4233a8b20d0489e9bddb 
> /__a/externals/node/bin/node /__w/_temp/containerHandlerInvoker.js'.
> Finishing: Test - python
> {noformat}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3729=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=8d78fe4f-d658-5c70-12f8-4921589024c3



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18209: [FLINK-25461][python] Update net.sf.py4j:py4j dependency to 0.10.9.3

2022-01-02 Thread GitBox


flinkbot edited a comment on pull request #18209:
URL: https://github.com/apache/flink/pull/18209#issuecomment-1001578474


   
   ## CI report:
   
   * 611af0ce8e2999f7896a259edab03b8d1103eb1c UNKNOWN
   * b0da7ee5efc7652b302161d8eab7989a29e5160c Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28806)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-25502) eval method of Flink ScalerFunction only run one time

2022-01-02 Thread Timo Walther (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther closed FLINK-25502.

Resolution: Invalid

Please take a look at: 
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/functions/udfs/#determinism

{code}
If a function is called with constant expressions or constant expressions can 
be derived from the given statement, a function is pre-evaluated for constant 
expression reduction and might not be executed on the cluster anymore, unless 
isDeterministic() is used to disable constant expression reduction in this 
case. For example, the following calls to ABS are executed during planning: 
SELECT ABS(-1) FROM t and SELECT ABS(field) FROM t WHERE field = -1; whereas 
SELECT ABS(field) FROM t is not.
{code}
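
To illustrate the passage above, here is a minimal, hypothetical sketch (not taken from 
the reporter's code) of a non-deterministic function: returning false from 
isDeterministic() disables constant expression reduction, so eval() is invoked per row 
on the cluster even when called with constant (or no) arguments.

{code:java}
import org.apache.flink.table.functions.ScalarFunction;

// Hypothetical sketch: a counter-style "id" function. Overriding isDeterministic()
// to return false tells the planner not to pre-evaluate calls with constant
// (or no) arguments, so eval() runs for every incoming row.
public class IdFunction extends ScalarFunction {

    private int counter;

    public int eval() {
        return ++counter;
    }

    @Override
    public boolean isDeterministic() {
        return false;
    }
}
{code}

The function would be registered as usual, e.g. via 
{{createTemporarySystemFunction("id", IdFunction.class)}}, before calling it in a query 
such as {{SELECT f0, id() FROM T}}.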

> eval method of Flink ScalerFunction only run one time
> -
>
> Key: FLINK-25502
> URL: https://issues.apache.org/jira/browse/FLINK-25502
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.14.2
>Reporter: Spongebob
>Priority: Major
>
> Assume there is a ScalarFunction named `id` whose eval method takes no 
> arguments and returns an increasing int value on each call. I found that when 
> I call the `id()` function in Flink SQL on a table with 3 rows, the eval 
> method is only called once, so I get the same id value for each row. The SQL 
> looks like 'SELECT f0, id() FROM T'.
> So I decided to add an argument to the `eval` method. When I execute the SQL 
> 'SELECT f0, id(1) FROM T' I still get the same id value. But when I execute 
> the SQL 'SELECT f0, id(f0) FROM T' I get distinct id values, because the eval 
> method is now called three times.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18209: [FLINK-25461][python] Update net.sf.py4j:py4j dependency to 0.10.9.3

2022-01-02 Thread GitBox


flinkbot edited a comment on pull request #18209:
URL: https://github.com/apache/flink/pull/18209#issuecomment-1001578474


   
   ## CI report:
   
   * 611af0ce8e2999f7896a259edab03b8d1103eb1c UNKNOWN
   * b0da7ee5efc7652b302161d8eab7989a29e5160c Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28806)
 
   * 8a20c4e1df2c0ef6349839ccee32f08250d6aa00 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18209: [FLINK-25461][python] Update net.sf.py4j:py4j dependency to 0.10.9.3

2022-01-02 Thread GitBox


flinkbot edited a comment on pull request #18209:
URL: https://github.com/apache/flink/pull/18209#issuecomment-1001578474


   
   ## CI report:
   
   * 611af0ce8e2999f7896a259edab03b8d1103eb1c UNKNOWN
   * b0da7ee5efc7652b302161d8eab7989a29e5160c Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28806)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] MartijnVisser commented on pull request #18225: [hotfix] [javadocs] Fix typo in org.apache.flink.sql.parser.ddl.SqlCr…

2022-01-02 Thread GitBox


MartijnVisser commented on pull request #18225:
URL: https://github.com/apache/flink/pull/18225#issuecomment-1003916601


   @chenxyz707 Thanks for that. Can you squash the commits on your end?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18209: [FLINK-25461][python] Update net.sf.py4j:py4j dependency to 0.10.9.3

2022-01-02 Thread GitBox


flinkbot edited a comment on pull request #18209:
URL: https://github.com/apache/flink/pull/18209#issuecomment-1001578474


   
   ## CI report:
   
   * 611af0ce8e2999f7896a259edab03b8d1103eb1c UNKNOWN
   * b0da7ee5efc7652b302161d8eab7989a29e5160c Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28806)
 
   * 8a20c4e1df2c0ef6349839ccee32f08250d6aa00 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18209: [FLINK-25461][python] Update net.sf.py4j:py4j dependency to 0.10.9.3

2022-01-02 Thread GitBox


flinkbot edited a comment on pull request #18209:
URL: https://github.com/apache/flink/pull/18209#issuecomment-1001578474


   
   ## CI report:
   
   * 611af0ce8e2999f7896a259edab03b8d1103eb1c UNKNOWN
   * b0da7ee5efc7652b302161d8eab7989a29e5160c Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28806)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18209: [FLINK-25461][python] Update net.sf.py4j:py4j dependency to 0.10.9.3

2022-01-02 Thread GitBox


flinkbot edited a comment on pull request #18209:
URL: https://github.com/apache/flink/pull/18209#issuecomment-1001578474


   
   ## CI report:
   
   * 611af0ce8e2999f7896a259edab03b8d1103eb1c UNKNOWN
   * b0da7ee5efc7652b302161d8eab7989a29e5160c Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28806)
 
   * 8a20c4e1df2c0ef6349839ccee32f08250d6aa00 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] MartijnVisser commented on pull request #17657: [FLINK-24745][format][json] Add support for Oracle OGG JSON format parser

2022-01-02 Thread GitBox


MartijnVisser commented on pull request #17657:
URL: https://github.com/apache/flink/pull/17657#issuecomment-1003912467


   @leonardBang Will you do a review for this one?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] MartijnVisser commented on pull request #17332: [FLINK-24349]Support customized Calalogs via JDBC

2022-01-02 Thread GitBox


MartijnVisser commented on pull request #17332:
URL: https://github.com/apache/flink/pull/17332#issuecomment-1003912249


   @cuibo01 Is there any progress to report from your end?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18253: FLINK-24900

2022-01-02 Thread GitBox


flinkbot edited a comment on pull request #18253:
URL: https://github.com/apache/flink/pull/18253#issuecomment-1003855117


   
   ## CI report:
   
   * c95b6c37b1ff16f0830d8727f3ef26b9899e82a4 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28847)
 
   * f0e2befa0c6ac996284f0990c0947a90c7980a1a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28850)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18235: Test change default batch configuration

2022-01-02 Thread GitBox


flinkbot edited a comment on pull request #18235:
URL: https://github.com/apache/flink/pull/18235#issuecomment-1002631269


   
   ## CI report:
   
   * 78f42f85cdf56d2c8d1de5d704a1c8da3eea8e6d Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28846)
 
   * b4c3052591c73e0fec99aec3c11070d9fae26561 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28849)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-9966) Add a ORC table factory

2022-01-02 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-9966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17467821#comment-17467821
 ] 

Martijn Visser commented on FLINK-9966:
---

[~lzljs3620320] Do you think we can close this ticket?

> Add a ORC table factory
> ---
>
> Key: FLINK-9966
> URL: https://issues.apache.org/jira/browse/FLINK-9966
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Ecosystem
>Reporter: Timo Walther
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned
>
> We should allow defining an {{OrcTableSource}} using a table factory. How we 
> split connector and format is up for discussion. An ORC format might also be 
> necessary for the new streaming file sink.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Resolved] (FLINK-20823) Update documentation to mention Table/SQL API doesn't provide cross-major-version state compatibility

2022-01-02 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser resolved FLINK-20823.

Resolution: Fixed

This has been resolved, see 
https://nightlies.apache.org/flink/flink-docs-stable/docs/dev/table/concepts/overview/#stateful-upgrades-and-evolution

> Update documentation to mention Table/SQL API doesn't provide 
> cross-major-version state compatibility
> -
>
> Key: FLINK-20823
> URL: https://issues.apache.org/jira/browse/FLINK-20823
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table SQL / API
>Reporter: Jark Wu
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> As discussed in the mailing list: 
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Did-Flink-1-11-break-backwards-compatibility-for-the-table-environment-tp47472p47492.html
> Flink Table/SQL API doesn't provide cross-major-version state compatibility, 
> however, this is not documented anywhere. We should update the 
> documentation. Besides, we should also mention that we provide state 
> compatibility across minor versions. 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18253: FLINK-24900

2022-01-02 Thread GitBox


flinkbot edited a comment on pull request #18253:
URL: https://github.com/apache/flink/pull/18253#issuecomment-1003855117


   
   ## CI report:
   
   * c95b6c37b1ff16f0830d8727f3ef26b9899e82a4 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28847)
 
   * f0e2befa0c6ac996284f0990c0947a90c7980a1a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18235: Test change default batch configuration

2022-01-02 Thread GitBox


flinkbot edited a comment on pull request #18235:
URL: https://github.com/apache/flink/pull/18235#issuecomment-1002631269


   
   ## CI report:
   
   * 78f42f85cdf56d2c8d1de5d704a1c8da3eea8e6d Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28846)
 
   * b4c3052591c73e0fec99aec3c11070d9fae26561 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-20569) testKafkaSourceSinkWithMetadata hangs

2022-01-02 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-20569.
--
Resolution: Fixed

Closing this ticket because no new failures have been reported in a long time.

> testKafkaSourceSinkWithMetadata hangs
> -
>
> Key: FLINK-20569
> URL: https://issues.apache.org/jira/browse/FLINK-20569
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Affects Versions: 1.12.0, 1.13.0
>Reporter: Huang Xingbo
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, stale-minor, 
> test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10781=logs=ce8f3cc3-c1ea-5281-f5eb-df9ebd24947f=f266c805-9429-58ed-2f9e-482e7b82f58b]
> {code:java}
> 2020-12-10T23:10:46.7788275Z Test testKafkaSourceSinkWithMetadata[legacy = 
> false, format = 
> csv](org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase) is 
> running.
> 2020-12-10T23:10:46.7789360Z 
> 
> 2020-12-10T23:10:46.7790602Z 23:10:46,776 [main] INFO  
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl [] - 
> Creating topic metadata_topic_csv
> 2020-12-10T23:10:47.1145296Z 23:10:47,112 [main] WARN  
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer [] - Property 
> [transaction.timeout.ms] not specified. Setting it to 360 ms
> 2020-12-10T23:10:47.1683896Z 23:10:47,166 [Sink: 
> Sink(table=[default_catalog.default_database.kafka], fields=[physical_1, 
> physical_2, physical_3, headers, timestamp]) (1/1)#0] WARN  
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer [] - Using 
> AT_LEAST_ONCE semantic, but checkpointing is not enabled. Switching to NONE 
> semantic.
> 2020-12-10T23:10:47.2087733Z 23:10:47,206 [Sink: 
> Sink(table=[default_catalog.default_database.kafka], fields=[physical_1, 
> physical_2, physical_3, headers, timestamp]) (1/1)#0] INFO  
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer [] - Starting 
> FlinkKafkaInternalProducer (1/1) to produce into default topic 
> metadata_topic_csv
> 2020-12-10T23:10:47.5157133Z 23:10:47,513 [Source: 
> TableSourceScan(table=[[default_catalog, default_database, kafka]], 
> fields=[physical_1, physical_2, physical_3, topic, partition, headers, 
> leader-epoch, timestamp, timestamp-type]) -> Calc(select=[physical_1, 
> physical_2, CAST(timestamp-type) AS timestamp-type, CAST(timestamp) AS 
> timestamp, leader-epoch, CAST(headers) AS headers, CAST(partition) AS 
> partition, CAST(topic) AS topic, physical_3]) -> SinkConversionToTuple2 -> 
> Sink: Select table sink (1/1)#0] INFO  
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase [] - 
> Consumer subtask 0 has no restore state.
> 2020-12-10T23:10:47.5233388Z 23:10:47,521 [Source: 
> TableSourceScan(table=[[default_catalog, default_database, kafka]], 
> fields=[physical_1, physical_2, physical_3, topic, partition, headers, 
> leader-epoch, timestamp, timestamp-type]) -> Calc(select=[physical_1, 
> physical_2, CAST(timestamp-type) AS timestamp-type, CAST(timestamp) AS 
> timestamp, leader-epoch, CAST(headers) AS headers, CAST(partition) AS 
> partition, CAST(topic) AS topic, physical_3]) -> SinkConversionToTuple2 -> 
> Sink: Select table sink (1/1)#0] INFO  
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase [] - 
> Consumer subtask 0 will start reading the following 1 partitions from the 
> earliest offsets: [KafkaTopicPartition{topic='metadata_topic_csv', 
> partition=0}]
> 2020-12-10T23:10:47.5387239Z 23:10:47,537 [Legacy Source Thread - Source: 
> TableSourceScan(table=[[default_catalog, default_database, kafka]], 
> fields=[physical_1, physical_2, physical_3, topic, partition, headers, 
> leader-epoch, timestamp, timestamp-type]) -> Calc(select=[physical_1, 
> physical_2, CAST(timestamp-type) AS timestamp-type, CAST(timestamp) AS 
> timestamp, leader-epoch, CAST(headers) AS headers, CAST(partition) AS 
> partition, CAST(topic) AS topic, physical_3]) -> SinkConversionToTuple2 -> 
> Sink: Select table sink (1/1)#0] INFO  
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase [] - 
> Consumer subtask 0 creating fetcher with offsets 
> {KafkaTopicPartition{topic='metadata_topic_csv', partition=0}=-915623761775}.
> 2020-12-11T02:34:02.6860452Z ##[error]The operation was canceled.
> {code}
> This test started at 2020-12-10T23:10:46.7788275Z and has not been finished 
> at 2020-12-11T02:34:02.6860452Z



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-25491) Code generation: init method exceeds 64 KB when there is a large IN filter with Table API

2022-01-02 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser reassigned FLINK-25491:
--

Assignee: Caizhi Weng

> Code generation: init method exceeds 64 KB when there is a large IN filter 
> with Table API
> -
>
> Key: FLINK-25491
> URL: https://issues.apache.org/jira/browse/FLINK-25491
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API, Table SQL / Runtime
>Affects Versions: 1.14.2
>Reporter: Daniel Cheng
>Assignee: Caizhi Weng
>Priority: Major
>
> When using Table API (Blink planner), if you are filtering using an IN filter 
> with a lot of values, e.g. {{$(colName).in()}}, it will result 
> in the error
>  
> {{Code of method "(...)V" of class "BatchExecCal$3006" grows beyond 64 
> KB}}
>  
> The size of the init method mainly comes from the below method, which 
> initializes the hash set with all the values in the filter.
> addReusableHashSet in CodeGeneratorContext.scala
> [https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/codegen/CodeGeneratorContext.scala#L409]
>  
> This affects older versions as well, with 1.14.2 being the latest version 
> that exhibits this issue.
>  
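
For illustration only, a hedged sketch (the table, connector, and column names are 
hypothetical placeholders) of the Table API pattern described above; on affected 
versions, planning such a filter can produce an init method that exceeds the 64 KB 
limit, because each literal turns into one insertion into the reusable hash set.

{code:java}
import static org.apache.flink.table.api.Expressions.$;

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

// Hypothetical sketch: an IN filter with thousands of literal values.
public class LargeInFilterExample {

    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inBatchMode());
        tEnv.executeSql(
                "CREATE TABLE t (colName STRING) "
                        + "WITH ('connector' = 'datagen', 'number-of-rows' = '10')");

        Object[] values = new Object[10_000];
        for (int i = 0; i < values.length; i++) {
            values[i] = "value_" + i;
        }

        // Code generation for this filter emits one hash-set insertion per literal.
        Table filtered = tEnv.from("t").filter($("colName").in(values));
        filtered.execute().print();
    }
}
{code}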



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Resolved] (FLINK-8474) Add documentation for HBaseTableSource

2022-01-02 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-8474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser resolved FLINK-8474.
---
Resolution: Fixed

This has been resolved in the meantime

> Add documentation for HBaseTableSource
> --
>
> Key: FLINK-8474
> URL: https://issues.apache.org/jira/browse/FLINK-8474
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table SQL / Ecosystem
>Affects Versions: 1.3.0, 1.4.0, 1.5.0
>Reporter: Fabian Hueske
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned
>
> The {{HBaseTableSource}} is not documented in the [Table Source and Sinks 
> documentation|https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/table/sourceSinks.html].



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-18717) reuse MiniCluster in table integration test class ?

2022-01-02 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17467810#comment-17467810
 ] 

Martijn Visser commented on FLINK-18717:


[~godfreyhe] Is this still a valid issue?

> reuse MiniCluster in table integration test class ? 
> 
>
> Key: FLINK-18717
> URL: https://issues.apache.org/jira/browse/FLINK-18717
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.11.0
>Reporter: godfrey he
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
>
> Before 1.11, the {{MiniCluster}} could be reused within each integration test 
> class (see TestStreamEnvironment#setAsContext).
> In 1.11, after we corrected the execution behavior of TableEnvironment, 
> StreamTableEnvironment and BatchTableEnvironment (see 
> [FLINK-16363|https://issues.apache.org/jira/browse/FLINK-16363], 
> [FLINK-17126|https://issues.apache.org/jira/browse/FLINK-17126]), a MiniCluster 
> is created for each test case, even within the same test class (see 
> {{org.apache.flink.client.deployment.executors.LocalExecutor}}). It would be 
> better if we could reuse the {{MiniCluster}} as before. One approach is to 
> provide a new kind of MiniCluster factory (such as a SessionMiniClusterFactory) 
> instead of using {{PerJobMiniClusterFactory}}. WDYT?
>   
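
As a point of reference, the pre-1.11 sharing pattern mentioned above can be sketched 
with the existing test utilities. This is a hedged example for DataStream-based 
ITCases only (class name and settings are illustrative); it does not address the 
TableEnvironment/LocalExecutor case the ticket is about.

{code:java}
import org.apache.flink.runtime.testutils.MiniClusterResourceConfiguration;
import org.apache.flink.test.util.MiniClusterWithClientResource;
import org.junit.ClassRule;

// Hypothetical sketch: one MiniCluster shared by all test methods of the class.
// MiniClusterWithClientResource registers the cluster as the execution context
// (TestStreamEnvironment#setAsContext), so env.execute() in each test reuses it.
public class SharedMiniClusterITCase {

    @ClassRule
    public static final MiniClusterWithClientResource MINI_CLUSTER =
            new MiniClusterWithClientResource(
                    new MiniClusterResourceConfiguration.Builder()
                            .setNumberTaskManagers(1)
                            .setNumberSlotsPerTaskManager(4)
                            .build());

    // Test methods build and execute their pipelines against the shared cluster.
}
{code}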



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] MartijnVisser commented on pull request #15599: [FLINK-11838][flink-gs-fs-hadoop] Create Google Storage file system with recoverable writer support

2022-01-02 Thread GitBox


MartijnVisser commented on pull request #15599:
URL: https://github.com/apache/flink/pull/15599#issuecomment-1003904759


   @galenwarren Thanks for this. It does seem like there are still some license 
issues because the build failed. Could you look into those or do you need some 
help with that?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18235: Test change default batch configuration

2022-01-02 Thread GitBox


flinkbot edited a comment on pull request #18235:
URL: https://github.com/apache/flink/pull/18235#issuecomment-1002631269


   
   ## CI report:
   
   * 78f42f85cdf56d2c8d1de5d704a1c8da3eea8e6d Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28846)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18254: [hotfix][tools] Extract duplicate code in JarFileChecker

2022-01-02 Thread GitBox


flinkbot edited a comment on pull request #18254:
URL: https://github.com/apache/flink/pull/18254#issuecomment-1003885264


   
   ## CI report:
   
   * cfb9090f4b1a18b4994988845bf49a94d62a3f7d Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28848)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18254: [hotfix][tools] Extract duplicate code in JarFileChecker

2022-01-02 Thread GitBox


flinkbot edited a comment on pull request #18254:
URL: https://github.com/apache/flink/pull/18254#issuecomment-1003885264


   
   ## CI report:
   
   * cfb9090f4b1a18b4994988845bf49a94d62a3f7d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28848)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #18254: [hotfix][tools] Extract duplicate code in JarFileChecker

2022-01-02 Thread GitBox


flinkbot commented on pull request #18254:
URL: https://github.com/apache/flink/pull/18254#issuecomment-1003885264


   
   ## CI report:
   
   * cfb9090f4b1a18b4994988845bf49a94d62a3f7d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #18254: [hotfix][tools] Extract duplicate code in JarFileChecker

2022-01-02 Thread GitBox


flinkbot commented on pull request #18254:
URL: https://github.com/apache/flink/pull/18254#issuecomment-1003884979


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit cfb9090f4b1a18b4994988845bf49a94d62a3f7d (Mon Jan 03 
06:00:37 UTC 2022)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] ruanwenjun opened a new pull request #18254: [hotfix][tools] Extract duplicate code in JarFileChecker

2022-01-02 Thread GitBox


ruanwenjun opened a new pull request #18254:
URL: https://github.com/apache/flink/pull/18254


   ## What is the purpose of the change
   
   * This pull request aims to extract some duplicate code in JarFileChecker.
   
   
   ## Brief change log
   
 - Optimize the code in JarFileChecker.
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency):  no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no 
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no 
 - The S3 file system connector: no 
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable 
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] xuzifu666 commented on pull request #18243: [hotfix] remove unused LOG reference

2022-01-02 Thread GitBox


xuzifu666 commented on pull request #18243:
URL: https://github.com/apache/flink/pull/18243#issuecomment-1003880081


   Hi @dianfu, could you please take a look? Thanks.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18253: FLINK-24900

2022-01-02 Thread GitBox


flinkbot edited a comment on pull request #18253:
URL: https://github.com/apache/flink/pull/18253#issuecomment-1003855117


   
   ## CI report:
   
   * c95b6c37b1ff16f0830d8727f3ef26b9899e82a4 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28847)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #18253: FLINK-24900

2022-01-02 Thread GitBox


flinkbot commented on pull request #18253:
URL: https://github.com/apache/flink/pull/18253#issuecomment-1003855349


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit c95b6c37b1ff16f0830d8727f3ef26b9899e82a4 (Mon Jan 03 
03:55:53 UTC 2022)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-24900).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #18253: FLINK-24900

2022-01-02 Thread GitBox


flinkbot commented on pull request #18253:
URL: https://github.com/apache/flink/pull/18253#issuecomment-1003855117


   
   ## CI report:
   
   * c95b6c37b1ff16f0830d8727f3ef26b9899e82a4 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-24900) Support to run multiple shuffle plugins in one session cluster

2022-01-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-24900:
---
Labels: pull-request-available  (was: )

> Support to run multiple shuffle plugins in one session cluster
> --
>
> Key: FLINK-24900
> URL: https://issues.apache.org/jira/browse/FLINK-24900
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Yingjie Cao
>Priority: Major
>  Labels: pull-request-available
>
> Currently, one Flink cluster can only use one shuffle plugin. However, there 
> are cases where different jobs may need different shuffle implementations. By 
> loading shuffle plugins with the plugin manager and letting jobs select their 
> shuffle service freely, Flink can support running multiple shuffle plugins in 
> one session cluster.
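
For context, a hedged sketch of how the cluster-wide shuffle implementation is 
selected today via a single configuration value (the factory class shown is Flink's 
default netty-based implementation); the proposal above would instead let individual 
jobs choose among several loaded shuffle plugins.

{code:java}
import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.shuffle.ShuffleServiceOptions;

// Hypothetical sketch: today the shuffle implementation is fixed per cluster by
// the 'shuffle-service-factory.class' option.
public class ShuffleFactoryConfigExample {

    public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.setString(
                ShuffleServiceOptions.SHUFFLE_SERVICE_FACTORY_CLASS,
                "org.apache.flink.runtime.io.network.NettyShuffleServiceFactory");
        // The configuration would then be used when starting the session cluster.
    }
}
{code}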



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] wsry opened a new pull request #18253: FLINK-24900

2022-01-02 Thread GitBox


wsry opened a new pull request #18253:
URL: https://github.com/apache/flink/pull/18253


   ## What is the purpose of the change
   
   This is a draft PR for stability test of FLINK-24900.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18235: Test change default batch configuration

2022-01-02 Thread GitBox


flinkbot edited a comment on pull request #18235:
URL: https://github.com/apache/flink/pull/18235#issuecomment-1002631269


   
   ## CI report:
   
   * 9bac82aa06c27530da9386b7d0ad498ddc36647a Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28840)
 
   * 78f42f85cdf56d2c8d1de5d704a1c8da3eea8e6d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28846)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18235: Test change default batch configuration

2022-01-02 Thread GitBox


flinkbot edited a comment on pull request #18235:
URL: https://github.com/apache/flink/pull/18235#issuecomment-1002631269


   
   ## CI report:
   
   * 9bac82aa06c27530da9386b7d0ad498ddc36647a Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28840)
 
   * 78f42f85cdf56d2c8d1de5d704a1c8da3eea8e6d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-25499) Column 'window_start' is ambiguous

2022-01-02 Thread Jing Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17467739#comment-17467739
 ] 

Jing Zhang edited comment on FLINK-25499 at 1/3/22, 12:52 AM:
--

[~zhitom], thanks for reporting the problem.
I don't think this is a bug.
The return value of a [Window 
TVF|https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/table/sql/queries/window-tvf/]
 is a new relation that includes all columns of the original relation as well as 
3 additional columns named "window_start", "window_end" and "window_time" to 
indicate the assigned window. You can find more information in the 
[documentation|https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/table/sql/queries/window-tvf/].

When you use cascading window TVFs, there are two "window_start" columns, which 
causes the exception with the error message "Column 'window_start' is ambiguous".



was (Author: qingru zhang):
[~zhitom], Thanks for reporting the bug.
This is not a bug, I think.
The return value of [Window 
TVF|https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/table/sql/queries/window-tvf/]
 is a new relation that includes all columns of original relation as well as 
additional 3 columns named “window_start”, “window_end”, “window_time” to 
indicate the assigned window. You could find more information in 
[Doc[1]|https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/table/sql/queries/window-tvf/).

When you use cascade window tvf, there would be two window_start, which caused 
the exception with error message "Column 'window_start' is ambiguous".


> Column 'window_start' is ambiguous
> --
>
> Key: FLINK-25499
> URL: https://issues.apache.org/jira/browse/FLINK-25499
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.14.2
> Environment: Flink 1.14.0
>Reporter: Shandy
>Priority: Major
>  Labels: ambiguous, window_start
>
> *For docs: [Window 
> Aggregation|https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/table/sql/queries/window-agg/#cascading-window-aggregation]*
> *use sql-client create view such as:*
> =
> CREATE TEMPORARY VIEW IF NOT EXISTS telemetry_r_yangchen_standard_t
> AS
> (
>     SELECT a.window_start,a.window_end,a.window_time as 
> rowTime,last_value(a.tenantId) as tenantId      
>     FROM TABLE(TUMBLE(TABLE telemetry_r_yangchen_normal, 
> DESCRIPTOR(receiveTimeTS), INTERVAL '10' MINUTES)) as a
>     group by a.window_start, a.window_end,a.window_time
> );
> SELECT b.window_start, b.window_end,b.window_time as rowTime,sum(b.tenantId) 
> as tenantId
>     FROM TABLE(TUMBLE(TABLE telemetry_r_yangchen_standard_t, 
> DESCRIPTOR(rowTime), INTERVAL '60' MINUTES)) as b
>     group by b.window_start, b.window_end,b.window_time;
> =
> *above select occurs error message:*
> {color:#ff}*[ERROR] Could not execute SQL statement. Reason:
> org.apache.calcite.sql.validate.SqlValidatorException: Column 'window_start' 
> is ambiguous
> *{color}
> *if modify create sql like this :*
> 
> CREATE TEMPORARY VIEW IF NOT EXISTS telemetry_r_yangchen_standard_t
> AS
> (
>     SELECT {color:#de350b}-a.windw_start,-{color}a.window_end,a.window_time 
> as rowTime,last_value(a.tenantId) as tenantId      
>     FROM TABLE(TUMBLE(TABLE telemetry_r_yangchen_normal, 
> DESCRIPTOR(receiveTimeTS), INTERVAL '10' MINUTES)) as a
>     group by a.window_start, a.window_end,a.window_time
> );
> *or*
> CREATE TEMPORARY VIEW IF NOT EXISTS telemetry_r_yangchen_standard_t
> AS
> (
>     SELECT {color:#de350b}cast(a.window_start as timestamp) as 
> windowStart,cast(a.window_end as timestamp) as windowEnd,{color}a.window_time 
> as rowTime,last_value(a.tenantId) as tenantId      
>     FROM TABLE(TUMBLE(TABLE telemetry_r_yangchen_normal, 
> DESCRIPTOR(receiveTimeTS), INTERVAL '10' MINUTES)) as a
>     group by a.window_start, a.window_end,a.window_time
> );
> 
> *then, above select-sql can be executed ok!*



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-25499) Column 'window_start' is ambiguous

2022-01-02 Thread Jing Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17467739#comment-17467739
 ] 

Jing Zhang commented on FLINK-25499:


[~zhitom], thanks for reporting this.
I don't think this is a bug.
The return value of a [Window 
TVF|https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/table/sql/queries/window-tvf/]
 is a new relation that includes all columns of the original relation as well as 
3 additional columns named "window_start", "window_end" and "window_time" to 
indicate the assigned window. You can find more information in the 
[documentation|https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/table/sql/queries/window-tvf/].

When you use cascading window TVFs, there are two "window_start" columns, which 
causes the exception with the error message "Column 'window_start' is ambiguous".


> Column 'window_start' is ambiguous
> --
>
> Key: FLINK-25499
> URL: https://issues.apache.org/jira/browse/FLINK-25499
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.14.2
> Environment: Flink 1.14.0
>Reporter: Shandy
>Priority: Major
>  Labels: ambiguous, window_start
>
> *For docs: [Window 
> Aggregation|https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/table/sql/queries/window-agg/#cascading-window-aggregation]*
> *use sql-client create view such as:*
> =
> CREATE TEMPORARY VIEW IF NOT EXISTS telemetry_r_yangchen_standard_t
> AS
> (
>     SELECT a.window_start,a.window_end,a.window_time as 
> rowTime,last_value(a.tenantId) as tenantId      
>     FROM TABLE(TUMBLE(TABLE telemetry_r_yangchen_normal, 
> DESCRIPTOR(receiveTimeTS), INTERVAL '10' MINUTES)) as a
>     group by a.window_start, a.window_end,a.window_time
> );
> SELECT b.window_start, b.window_end,b.window_time as rowTime,sum(b.tenantId) 
> as tenantId
>     FROM TABLE(TUMBLE(TABLE telemetry_r_yangchen_standard_t, 
> DESCRIPTOR(rowTime), INTERVAL '60' MINUTES)) as b
>     group by b.window_start, b.window_end,b.window_time;
> =
> *above select occurs error message:*
> {color:#ff}*[ERROR] Could not execute SQL statement. Reason:
> org.apache.calcite.sql.validate.SqlValidatorException: Column 'window_start' 
> is ambiguous
> *{color}
> *if modify create sql like this :*
> 
> CREATE TEMPORARY VIEW IF NOT EXISTS telemetry_r_yangchen_standard_t
> AS
> (
>     SELECT {color:#de350b}-a.windw_start,-{color}a.window_end,a.window_time 
> as rowTime,last_value(a.tenantId) as tenantId      
>     FROM TABLE(TUMBLE(TABLE telemetry_r_yangchen_normal, 
> DESCRIPTOR(receiveTimeTS), INTERVAL '10' MINUTES)) as a
>     group by a.window_start, a.window_end,a.window_time
> );
> *or*
> CREATE TEMPORARY VIEW IF NOT EXISTS telemetry_r_yangchen_standard_t
> AS
> (
>     SELECT {color:#de350b}cast(a.window_start as timestamp) as 
> windowStart,cast(a.window_end as timestamp) as windowEnd,{color}a.window_time 
> as rowTime,last_value(a.tenantId) as tenantId      
>     FROM TABLE(TUMBLE(TABLE telemetry_r_yangchen_normal, 
> DESCRIPTOR(receiveTimeTS), INTERVAL '10' MINUTES)) as a
>     group by a.window_start, a.window_end,a.window_time
> );
> 
> *then, above select-sql can be executed ok!*



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #15599: [FLINK-11838][flink-gs-fs-hadoop] Create Google Storage file system with recoverable writer support

2022-01-02 Thread GitBox


flinkbot edited a comment on pull request #15599:
URL: https://github.com/apache/flink/pull/15599#issuecomment-818947931


   
   ## CI report:
   
   * 6d0cd71b8c77294cc57fca88665ba324400bd459 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28842)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15599: [FLINK-11838][flink-gs-fs-hadoop] Create Google Storage file system with recoverable writer support

2022-01-02 Thread GitBox


flinkbot edited a comment on pull request #15599:
URL: https://github.com/apache/flink/pull/15599#issuecomment-818947931


   
   ## CI report:
   
   * bf42647d6c26cab7cd58b4bc61edfc39268a7097 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28834)
 
   * 6d0cd71b8c77294cc57fca88665ba324400bd459 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28842)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15599: [FLINK-11838][flink-gs-fs-hadoop] Create Google Storage file system with recoverable writer support

2022-01-02 Thread GitBox


flinkbot edited a comment on pull request #15599:
URL: https://github.com/apache/flink/pull/15599#issuecomment-818947931


   
   ## CI report:
   
   * bf42647d6c26cab7cd58b4bc61edfc39268a7097 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28834)
 
   * 6d0cd71b8c77294cc57fca88665ba324400bd459 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-6605) Allow users to specify a default name for processing time in Table API

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-6605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-6605:
--
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
 (was: auto-deprioritized-major auto-unassigned stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Allow users to specify a default name for processing time in Table API
> --
>
> Key: FLINK-6605
> URL: https://issues.apache.org/jira/browse/FLINK-6605
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Haohui Mai
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned
>
> FLINK-5884 enables users to specify column names for both processing time and 
> event time. FLINK-6595 and FLINK-6584 breaks as chained / nested queries will 
> no longer have an attribute of processing time / event time.
> This jira proposes to add a default name for the processing time in order to 
> unbreak FLINK-6595 and FLINK-6584.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-6446) Various improvements to the Web Frontend

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-6446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-6446:
--
  Labels: auto-deprioritized-major auto-deprioritized-minor  (was: 
auto-deprioritized-major stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Various improvements to the Web Frontend
> 
>
> Key: FLINK-6446
> URL: https://issues.apache.org/jira/browse/FLINK-6446
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Reporter: Stephan Ewen
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> This is the umbrella issue for various improvements to the web frontend,



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-6713) Document how to allow multiple Kafka consumers / producers to authenticate using different credentials

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-6713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-6713:
--
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
 (was: auto-deprioritized-major auto-unassigned stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Document how to allow multiple Kafka consumers / producers to authenticate 
> using different credentials
> --
>
> Key: FLINK-6713
> URL: https://issues.apache.org/jira/browse/FLINK-6713
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Documentation
>Reporter: Tzu-Li (Gordon) Tai
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned
>
> The doc improvements should include:
> 1. Clearly state that the built-in JAAS security module in Flink is a JVM 
> process-wide static JAAS file installation (as all static JAAS files are; this 
> is not Flink specific), and therefore only allows all Kafka consumers and producers 
> in a single JVM (and therefore the whole job, since we do not allow assigning 
> operators to specific slots) to authenticate as one single user.
> 2. If Kerberos authentication is used: self-ship multiple keytab files, and 
> use Kafka's dynamic JAAS configuration through client properties to point to 
> separate keytabs for each consumer / producer. Note that ticket cache would 
> never work for multiple authentications.
> 3. If plain simple login is used: Kafka's dynamic JAAS configuration should 
> be used (and is the only way to do so).
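> As an illustration for point 2, a minimal sketch (not taken from the docs; topic name, keytab path and principal are placeholders) of how Kafka's dynamic JAAS configuration could be passed per consumer via the standard {{sasl.jaas.config}} client property:
> {code:java}
> import java.util.Properties;
> import org.apache.flink.api.common.serialization.SimpleStringSchema;
> import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
>
> Properties propsA = new Properties();
> propsA.setProperty("bootstrap.servers", "broker:9092");
> propsA.setProperty("security.protocol", "SASL_PLAINTEXT");
> propsA.setProperty("sasl.kerberos.service.name", "kafka");
> // Credentials scoped to this consumer only, instead of a process-wide static JAAS file.
> propsA.setProperty("sasl.jaas.config",
>     "com.sun.security.auth.module.Krb5LoginModule required "
>         + "useKeyTab=true keyTab=\"/path/to/userA.keytab\" principal=\"userA@EXAMPLE.COM\";");
>
> FlinkKafkaConsumer<String> consumerA =
>     new FlinkKafkaConsumer<>("topic-a", new SimpleStringSchema(), propsA);
> // A second consumer or producer would get its own Properties object pointing at a different keytab.
> {code}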



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-6771) Create committer guide

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-6771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-6771:
--
  Labels: auto-deprioritized-major auto-deprioritized-minor  (was: 
auto-deprioritized-major stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Create committer guide
> --
>
> Key: FLINK-6771
> URL: https://issues.apache.org/jira/browse/FLINK-6771
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> The Flink project currently has no dedicated committer guide. I think it 
> would be helpful for new committers to have such a document. The document 
> should explain how the most important processes work within the Flink 
> community (those not clearly stated in [1]). For example, we should outline 
> how to close PRs and what information to add to a JIRA issue when closing it. 
> We could use as a starting point [2].
> [1] https://www.apache.org/dev/new-committers-guide.html
> [2] http://flink.apache.org/contribute-code.html#how-to-use-git-as-a-committer



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-6370) FileAlreadyExistsException on startup

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-6370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-6370:
--
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
 (was: auto-deprioritized-major auto-unassigned stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> FileAlreadyExistsException on startup
> -
>
> Key: FLINK-6370
> URL: https://issues.apache.org/jira/browse/FLINK-6370
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.2.0
>Reporter: Andrey
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned
>
> Currently, static web resources are lazily cached onto disk during the first 
> request. However, if two concurrent requests are executed, a 
> FileAlreadyExistsException shows up in the logs.
> {code}
> 2017-04-24 14:00:58,075 ERROR 
> org.apache.flink.runtime.webmonitor.files.StaticFileServerHandler  - error 
> while responding [nioEventLoopGroup-3-2]
> java.nio.file.FileAlreadyExistsException: 
> /flink/web/flink-web-528f8cb8-dd60-433c-8f6c-df49ad0b79e0/index.html
>   at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:88)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>   at 
> sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
>   at 
> java.nio.file.spi.FileSystemProvider.newOutputStream(FileSystemProvider.java:434)
>   at java.nio.file.Files.newOutputStream(Files.java:216)
>   at java.nio.file.Files.copy(Files.java:3016)
>   at 
> org.apache.flink.runtime.webmonitor.files.StaticFileServerHandler.respondAsLeader(StaticFileServerHandler.java:238)
>   at 
> org.apache.flink.runtime.webmonitor.files.StaticFileServerHandler.channelRead0(StaticFileServerHandler.java:197)
>   at 
> org.apache.flink.runtime.webmonitor.files.StaticFileServerHandler.channelRead0(StaticFileServerHandler.java:99)
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
>   at io.netty.handler.codec.http.router.Handler.routed(Handler.java:62)
> {code}
> Expected: 
> * extract all static resources on startup in the main thread, before opening 
> the HTTP port.
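> A minimal sketch of a race-tolerant variant of the copy (illustration only, not the actual fix; the method name is made up):
> {code:java}
> // Tolerate a concurrent request materializing the same static resource.
> private static void copyResourceIfAbsent(java.io.InputStream in, java.nio.file.Path target)
>         throws java.io.IOException {
>     try {
>         java.nio.file.Files.copy(in, target); // the call that currently throws
>     } catch (java.nio.file.FileAlreadyExistsException e) {
>         // another request already wrote the file; it is safe to serve it as-is
>     }
> }
> {code}
> Eagerly extracting all resources in the main thread before the HTTP port is opened, as suggested above, would avoid the race entirely.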



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-6363) Document precedence rules of Kryo serializer registrations

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-6363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-6363:
--
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
 (was: auto-deprioritized-major auto-unassigned stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Document precedence rules of Kryo serializer registrations
> --
>
> Key: FLINK-6363
> URL: https://issues.apache.org/jira/browse/FLINK-6363
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Type Serialization System, Documentation
>Reporter: Tzu-Li (Gordon) Tai
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned
>
> Currently, there is no documentation / Javadoc mentioning the precedence 
> rules of Kryo registrations via the register methods in 
> {{StreamExecutionEnvironment}} / {{ExecutionEnvironment}}.
> It is important for the user to be notified of the precedence because the 
> {{KryoSerializer}} applies the configurations in a specific order that is not 
> visible from the public API.
> For example:
> {code}
> env.addDefaultKryoSerializer(SomeClass.class, SerializerA.class);
> env.addDefaultKryoSerializer(SomeClass.class, new SerializerB());
> {code}
> from this API usage, it may seem as if {{SerializerA}} will be used as the 
> default serializer for {{SomeClass}} (or the other way around, depends really 
> on how the user perceives this).
> However, whatever the called order in this example, {{SerializerB}} will 
> always be used because in the case of defining default serializers, due to 
> the ordering that the internal {{KryoSerializer}} applies these 
> configurations, defining default serializer by instance has a higher 
> precedence than defining by class. Since the existence of this precedence is 
> not due to Kryo's behaviour, but due to the applied ordering in 
> {{KryoSerializer}}, users that are familiar with Kryo will be surprised by 
> the unexpected results.
> These methods are also subject to the same issue:
> {code}
> env.registerType(SomeClass.class, SerializerA.class);
> env.registerTypeWithKryoSerializer(SomeClass.class, SerializerA.class);
> env.registerTypeWithKryoSerializer(SomeClass.class, new SerializerB());
> {code}
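> Until the documentation exists, a minimal sketch of an unambiguous registration ({{SomeClass}}/{{SerializerB}} are the placeholder names from above): register exactly one default serializer per type, so the hidden precedence never kicks in.
> {code:java}
> StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
> // Only the by-instance variant is used, so there is no competing by-class registration.
> env.addDefaultKryoSerializer(SomeClass.class, new SerializerB());
> {code}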



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-6424) Add basic helper functions for map type

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-6424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-6424:
--
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
 (was: auto-deprioritized-major auto-unassigned stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Add basic helper functions for map type
> ---
>
> Key: FLINK-6424
> URL: https://issues.apache.org/jira/browse/FLINK-6424
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Timo Walther
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned
>
> FLINK-6377 introduced the map type for the Table & SQL API. We still need to 
> implement functions around this type:
> - the value constructor in SQL that constructs a map {{MAP ‘[’ key, value [, 
> key, value ]* ‘]’}}
> - the value constructor in Table API {{map(key, value,...)}} (syntax up for 
> discussion)
> - {{ELEMENT, CARDINALITY}} for SQL API
> - {{.at(), .cardinality(), and .element()}} for Table API in Scala & Java
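> A sketch of what the proposed SQL syntax could look like once implemented (illustrative only; {{tableEnv}} and the {{Rates}} table are assumed to exist, and the current {{sqlQuery}} method is used):
> {code:java}
> Table result = tableEnv.sqlQuery(
>     "SELECT MAP['EUR', eur_rate, 'USD', usd_rate]['EUR'] AS eur, " +
>     "       CARDINALITY(MAP['EUR', eur_rate, 'USD', usd_rate]) AS num_entries " +
>     "FROM Rates");
> {code}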



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-6477) The first time to click Taskmanager cannot get the actual data

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-6477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-6477:
--
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
pull-request-available  (was: auto-deprioritized-major auto-unassigned 
pull-request-available stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> The first time to click Taskmanager cannot get the actual data
> --
>
> Key: FLINK-6477
> URL: https://issues.apache.org/jira/browse/FLINK-6477
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.2.0
>Reporter: zhihao chen
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned, pull-request-available
> Attachments: errDisplay.jpg
>
>
> The first click on Taskmanager in the Flink web UI does not show the actual 
> data when the parameter “jobmanager.web.refresh-interval” is set to a larger 
> value, e.g. 180. If you do not refresh the page manually, you have to wait 
> until that interval expires before the page displays correctly.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-6761) Limitation for maximum state size per key in RocksDB backend

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-6761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-6761:
--
  Labels: auto-deprioritized-critical auto-deprioritized-major 
auto-deprioritized-minor  (was: auto-deprioritized-critical 
auto-deprioritized-major stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Limitation for maximum state size per key in RocksDB backend
> 
>
> Key: FLINK-6761
> URL: https://issues.apache.org/jira/browse/FLINK-6761
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.2.1, 1.3.0
>Reporter: Stefan Richter
>Priority: Not a Priority
>  Labels: auto-deprioritized-critical, auto-deprioritized-major, 
> auto-deprioritized-minor
>
> RocksDB's JNI bridge allows for putting and getting {{byte[]}} as keys and 
> values. 
> States that internally use RocksDB's merge operator, e.g. {{ListState}}, can 
> currently merge multiple {{byte[]}} under one key, which will be internally 
> concatenated to one value in RocksDB. 
> This becomes problematic, as soon as the accumulated state size under one key 
> grows larger than {{Integer.MAX_VALUE}} bytes. Whenever Java code tries to 
> access a state that grew beyond this limit through merging, we will encounter 
> an {{ArrayIndexOutOfBoundsException}} at best and a segfault at worst.
> This behaviour is problematic, because RocksDB silently stores states that 
> exceed this limitation, but on access (e.g. in checkpointing), the code fails 
> unexpectedly.
> I think the only proper solution to this is for RocksDB's JNI bridge to build 
> on {{(Direct)ByteBuffer}} - which can go around the size limitation - as 
> input and output types, instead of simple {{byte[]}}.
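> For illustration, a minimal sketch of user code that can silently run into the limit (class, field and variable names are made up):
> {code:java}
> // Inside a keyed operator, e.g. a RichFlatMapFunction<byte[], Void> applied to a KeyedStream:
> ListState<byte[]> blobs;   // initialized in open() via:
> // blobs = getRuntimeContext().getListState(new ListStateDescriptor<>("blobs", byte[].class));
>
> public void flatMap(byte[] value, Collector<Void> out) throws Exception {
>     // Every add() is a RocksDB merge under the current key; the concatenated value
>     // grows unbounded, and nothing fails until the state is read back (on get() or
>     // during a checkpoint) after crossing Integer.MAX_VALUE bytes.
>     blobs.add(value);
> }
> {code}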



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-6767) Cannot load user class on local environment

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-6767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-6767:
--
  Labels: CollocatedComputation Flink Ignite LocalEvironment 
auto-deprioritized-critical auto-deprioritized-major auto-deprioritized-minor  
(was: CollocatedComputation Flink Ignite LocalEvironment 
auto-deprioritized-critical auto-deprioritized-major stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Cannot load user class on local environment
> ---
>
> Key: FLINK-6767
> URL: https://issues.apache.org/jira/browse/FLINK-6767
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.2.1
> Environment: Flink 1.2.1 running on local environment on a Ignite 2.0 
> node.
>Reporter: Matt
>Priority: Not a Priority
>  Labels: CollocatedComputation, Flink, Ignite, LocalEvironment, 
> auto-deprioritized-critical, auto-deprioritized-major, 
> auto-deprioritized-minor
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> There is a bug in Flink 1.2.1 that results in a "cannot load user class" 
> exception even when the class is available in the current class loader of the 
> thread running the job. The problem arises when you execute a Flink job on a 
> local environment inside an Ignite 2.0 node; it most likely affects other 
> versions of Flink and Ignite as well.
> This bug was discussed in [1], and a fix was proposed in [2].
> In summary, the fix requires replacing line 298 in 
> *BlobLibraryCacheManager.java* [3] for:
> {code:java}
> this.classLoader = new FlinkUserCodeClassLoader(libraryURLs, 
> Thread.currentThread().getContextClassLoader());
> {code}
> A repository with a complete test case reproducing the error is found in [4]. 
>  The idea behind the code is to run Flink jobs in a collocated way 
> (i.e., on the node where the data is stored), minimizing network traffic and 
> thus improving performance.
> The README file contains details on how to run it and the resulting exception:
> {code}
> org.apache.flink.streaming.runtime.tasks.StreamTaskException: Cannot load 
> user class: com.test.Source
> ClassLoader info: URL ClassLoader:
> Class not resolvable through given classloader.
> {code}
> [1] 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/BUG-Cannot-Load-User-Class-on-Local-Environment-td12799.html
> [2] 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/BUG-Cannot-Load-User-Class-on-Local-Environment-tp12799p13376.html
> [3] 
> https://github.com/apache/flink/blob/release-1.2/flink-runtime/src/main/java/org/apache/flink/runtime/execution/librarycache/BlobLibraryCacheManager.java#L298
> [4] https://github.com/Dromit/FlinkTest



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18209: [FLINK-25461][python] Update net.sf.py4j:py4j dependency to 0.10.9.3

2022-01-02 Thread GitBox


flinkbot edited a comment on pull request #18209:
URL: https://github.com/apache/flink/pull/18209#issuecomment-1001578474


   
   ## CI report:
   
   * 611af0ce8e2999f7896a259edab03b8d1103eb1c UNKNOWN
   * b0da7ee5efc7652b302161d8eab7989a29e5160c Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28806)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18203: [FLINK-25460][core] Update slf4j-api dependency to 1.7.32

2022-01-02 Thread GitBox


flinkbot edited a comment on pull request #18203:
URL: https://github.com/apache/flink/pull/18203#issuecomment-1001460756


   
   ## CI report:
   
   * 766445d2c9108cc2cca0b2a0231b172eb76a7b62 UNKNOWN
   * 333b991392c6834570ee50189b6e687bcd07f4c9 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28617)
 
   * ddba49ce6c8b2fbc0f410df8981cd0a1792f18dc UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18209: [FLINK-25461][python] Update net.sf.py4j:py4j dependency to 0.10.9.3

2022-01-02 Thread GitBox


flinkbot edited a comment on pull request #18209:
URL: https://github.com/apache/flink/pull/18209#issuecomment-1001578474


   
   ## CI report:
   
   * 611af0ce8e2999f7896a259edab03b8d1103eb1c UNKNOWN
   * b0da7ee5efc7652b302161d8eab7989a29e5160c Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28806)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18203: [FLINK-25460][core] Update slf4j-api dependency to 1.7.32

2022-01-02 Thread GitBox


flinkbot edited a comment on pull request #18203:
URL: https://github.com/apache/flink/pull/18203#issuecomment-1001460756


   
   ## CI report:
   
   * 766445d2c9108cc2cca0b2a0231b172eb76a7b62 UNKNOWN
   * 333b991392c6834570ee50189b6e687bcd07f4c9 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28617)
 
   * ddba49ce6c8b2fbc0f410df8981cd0a1792f18dc UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] MartijnVisser commented on pull request #18203: [FLINK-25460][core] Update slf4j-api dependency to 1.7.32

2022-01-02 Thread GitBox


MartijnVisser commented on pull request #18203:
URL: https://github.com/apache/flink/pull/18203#issuecomment-1003773372


   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] MartijnVisser commented on pull request #18209: [FLINK-25461][python] Update net.sf.py4j:py4j dependency to 0.10.9.3

2022-01-02 Thread GitBox


MartijnVisser commented on pull request #18209:
URL: https://github.com/apache/flink/pull/18209#issuecomment-1003773308


   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15140: [FLINK-20628][connectors/rabbitmq2] RabbitMQ connector using new connector API

2022-01-02 Thread GitBox


flinkbot edited a comment on pull request #15140:
URL: https://github.com/apache/flink/pull/15140#issuecomment-795409010


   
   ## CI report:
   
   * 9ef2375e64fc96054e741bf6f617664679e599f2 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28841)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15140: [FLINK-20628][connectors/rabbitmq2] RabbitMQ connector using new connector API

2022-01-02 Thread GitBox


flinkbot edited a comment on pull request #15140:
URL: https://github.com/apache/flink/pull/15140#issuecomment-795409010


   
   ## CI report:
   
   * 092da6c48a5e75f87889694d033bd1d4c6667b09 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28622)
 
   * 9ef2375e64fc96054e741bf6f617664679e599f2 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28841)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15140: [FLINK-20628][connectors/rabbitmq2] RabbitMQ connector using new connector API

2022-01-02 Thread GitBox


flinkbot edited a comment on pull request #15140:
URL: https://github.com/apache/flink/pull/15140#issuecomment-795409010


   
   ## CI report:
   
   * 092da6c48a5e75f87889694d033bd1d4c6667b09 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28622)
 
   * 9ef2375e64fc96054e741bf6f617664679e599f2 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18235: Test change default batch configuration

2022-01-02 Thread GitBox


flinkbot edited a comment on pull request #18235:
URL: https://github.com/apache/flink/pull/18235#issuecomment-1002631269


   
   ## CI report:
   
   * 9bac82aa06c27530da9386b7d0ad498ddc36647a Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28840)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18235: Test change default batch configuration

2022-01-02 Thread GitBox


flinkbot edited a comment on pull request #18235:
URL: https://github.com/apache/flink/pull/18235#issuecomment-1002631269


   
   ## CI report:
   
   * d89c0e4052617369212080fbf4a5d47d5de99088 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28831)
 
   * 9bac82aa06c27530da9386b7d0ad498ddc36647a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28840)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18235: Test change default batch configuration

2022-01-02 Thread GitBox


flinkbot edited a comment on pull request #18235:
URL: https://github.com/apache/flink/pull/18235#issuecomment-1002631269


   
   ## CI report:
   
   * d89c0e4052617369212080fbf4a5d47d5de99088 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28831)
 
   * 9bac82aa06c27530da9386b7d0ad498ddc36647a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] gaoyunhaii commented on pull request #18068: [FLINK-25105][checkpoint] Enables final checkpoint by default

2022-01-02 Thread GitBox


gaoyunhaii commented on pull request #18068:
URL: https://github.com/apache/flink/pull/18068#issuecomment-1003721880


   Many thanks @pnowojski for the review! I'll merge the PR now~ 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18243: [hotfix] remove unused LOG reference

2022-01-02 Thread GitBox


flinkbot edited a comment on pull request #18243:
URL: https://github.com/apache/flink/pull/18243#issuecomment-1002852907


   
   ## CI report:
   
   * 612edb08f629881252b9fa46d7061a65ea24b78b Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28837)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-25414) Provide metrics to measure how long task has been blocked

2022-01-02 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski closed FLINK-25414.
--
Fix Version/s: 1.15.0
   Resolution: Fixed

Merged to master as bc22d2b90cb..1166d11f61a

> Provide metrics to measure how long task has been blocked
> -
>
> Key: FLINK-25414
> URL: https://issues.apache.org/jira/browse/FLINK-25414
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Metrics, Runtime / Task
>Affects Versions: 1.14.2
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> Currently the back-pressured/busy metrics tell the user whether a task is 
> blocked/busy and what percentage of the time it is blocked/busy, but they do not 
> tell how long a single blocking event lasts. It can be 1 ms or 1 h and 
> back pressure/busy would still report 100%.
> In order to improve this, we could provide two new metrics:
> # maxSoftBackPressureTime
> # maxHardBackPressureTime
> The max would be reset to 0 periodically or on every access to the metric 
> (via the metric reporter). Soft back pressure would be when the task is back pressured 
> in a non-blocking fashion (the StreamTask detected unavailability of the 
> output). Hard back pressure would measure the time the task is actually blocked.
> In order to calculate those metrics I'm proposing to split the already 
> existing backPressuredTimeMsPerSecond into soft and hard versions as well.
> Unfortunately I don't know how to efficiently provide a similar metric for busy 
> time without impacting max throughput.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] pnowojski merged pull request #18181: [FLINK-25414][metrics] Provide metrics to measure how long task has been blocked

2022-01-02 Thread GitBox


pnowojski merged pull request #18181:
URL: https://github.com/apache/flink/pull/18181


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #18023: [FLINK-25032] Allow to create execution vertices and execution edges lazily

2022-01-02 Thread GitBox


flinkbot edited a comment on pull request #18023:
URL: https://github.com/apache/flink/pull/18023#issuecomment-986704278


   
   ## CI report:
   
   * b0182326e201040a6a7a1e02d2e78ce4728e2820 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28835)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-25483) When FlinkSQL writes ES, it will not write and update the null value field

2022-01-02 Thread Jira


[ 
https://issues.apache.org/jira/browse/FLINK-25483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17467621#comment-17467621
 ] 

陈磊 commented on FLINK-25483:


Hi [~MartijnVisser], glad you replied to my question. In the current 
implementation of Flink SQL writing to ES, all fields of a row are written into 
the ES document. If the written data contains a null field, it also overwrites 
the value of that field in the ES document. However, this is not what many users 
expect: in some real business scenarios the user only wants to write the 
non-null fields, and does not want a null field to overwrite the original field 
value in ES.

For example: the source data has 3 fields, a, b, c
insert into table2
select
a,b,c
from table1

When the b field is null, the user expects only a_value and c_value to 
actually be written into ES.

In fact, what is written to ES is: a_value, null, c_value

> When FlinkSQL writes ES, it will not write and update the null value field
> --
>
> Key: FLINK-25483
> URL: https://issues.apache.org/jira/browse/FLINK-25483
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Ecosystem
>Reporter: 陈磊
>Priority: Minor
>
> When using Flink SQL to consume from Kafka and write to ES, sometimes some 
> fields do not exist, and the fields that do not exist should not be written to 
> ES. How can this situation be handled?
> For example: the source data has 3 fields, a, b, c
> insert into table2
> select
> a,b,c
> from table1
> When b = null, only a and c should be written
> When c = null, only a and b should be written
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-12097) Flink SQL hangs in StreamTableEnvironment.sqlUpdate, keeps executing and seems never stop, finally lead to java.lang.OutOfMemoryError: GC overhead limit exceeded

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-12097:
---
Labels: performance stale-minor  (was: performance)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Flink SQL hangs in StreamTableEnvironment.sqlUpdate, keeps executing and 
> seems never stop, finally lead to java.lang.OutOfMemoryError: GC overhead 
> limit exceeded
> -
>
> Key: FLINK-12097
> URL: https://issues.apache.org/jira/browse/FLINK-12097
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.7.2
>Reporter: xu
>Priority: Minor
>  Labels: performance, stale-minor
> Attachments: DSL.txt
>
>
> Hi Experts,
>  There is a Flink application (version 1.7.2) written in Flink SQL, and the SQL 
> in the application is quite long: it consists of about 30 tables and 1500 lines 
> in total. When executing, I found it hangs in 
> StreamTableEnvironment.sqlUpdate, keeps executing some Calcite code, and the 
> memory usage keeps growing; after several minutes a 
> java.lang.OutOfMemoryError: GC overhead limit exceeded is thrown.
>   
>  I took some thread dumps:
>          at 
> org.apache.calcite.plan.volcano.RuleQueue.popMatch(RuleQueue.java:475)
>          at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:640)
>          at 
> org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:339)
>          at 
> org.apache.flink.table.api.TableEnvironment.runVolcanoPlanner(TableEnvironment.scala:373)
>          at 
> org.apache.flink.table.api.TableEnvironment.optimizeLogicalPlan(TableEnvironment.scala:292)
>          at 
> org.apache.flink.table.api.StreamTableEnvironment.optimize(StreamTableEnvironment.scala:812)
>          at 
> org.apache.flink.table.api.StreamTableEnvironment.translate(StreamTableEnvironment.scala:860)
>          at 
> org.apache.flink.table.api.StreamTableEnvironment.writeToSink(StreamTableEnvironment.scala:344)
>          at 
> org.apache.flink.table.api.TableEnvironment.insertInto(TableEnvironment.scala:879)
>          at 
> org.apache.flink.table.api.TableEnvironment.sqlUpdate(TableEnvironment.scala:817)
>          at 
> org.apache.flink.table.api.TableEnvironment.sqlUpdate(TableEnvironment.scala:777)
>   
>   
>          at java.io.PrintWriter.write(PrintWriter.java:473)
>          at 
> org.apache.calcite.rel.AbstractRelNode$1.explain_(AbstractRelNode.java:415)
>          at 
> org.apache.calcite.rel.externalize.RelWriterImpl.done(RelWriterImpl.java:156)
>          at 
> org.apache.calcite.rel.AbstractRelNode.explain(AbstractRelNode.java:312)
>          at 
> org.apache.calcite.rel.AbstractRelNode.computeDigest(AbstractRelNode.java:420)
>          at 
> org.apache.calcite.rel.AbstractRelNode.recomputeDigest(AbstractRelNode.java:356)
>          at 
> org.apache.calcite.rel.AbstractRelNode.onRegister(AbstractRelNode.java:350)
>          at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1484)
>          at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:859)
>          at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:879)
>          at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:1755)
>          at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.transformTo(VolcanoRuleCall.java:135)
>          at 
> org.apache.calcite.plan.RelOptRuleCall.transformTo(RelOptRuleCall.java:234)
>          at 
> org.apache.calcite.rel.convert.ConverterRule.onMatch(ConverterRule.java:141)
>          at 
> org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:212)
>          at 
> org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:646)
>          at 
> org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:339)
>          at 
> org.apache.flink.table.api.TableEnvironment.runVolcanoPlanner(TableEnvironment.scala:373)
>          at 
> org.apache.flink.table.api.TableEnvironment.optimizeLogicalPlan(TableEnvironment.scala:292)
>          at 
> org.apache.flink.table.api.StreamTableEnvironment.optimize(StreamTableEnvironment.scala:812)
>          at 

[jira] [Updated] (FLINK-6800) PojoSerializer ignores added pojo fields

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-6800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-6800:
--
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
 (was: auto-deprioritized-major auto-unassigned stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> PojoSerializer ignores added pojo fields
> 
>
> Key: FLINK-6800
> URL: https://issues.apache.org/jira/browse/FLINK-6800
> Project: Flink
>  Issue Type: Bug
>  Components: API / Type Serialization System
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Till Rohrmann
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned
>
> The {{PojoSerializer}} contains a list of pojo fields which are represented 
> as {{Field}} instances. Upon serialization the names of these fields are 
> serialized. When being deserialized these names are used to look up the 
> respective {{Fields}} of a dynamically loaded class. If the dynamically 
> loaded class has additional fields (compared to when the serializer was 
> serialized), then these fields will be ignored (for the read and for the 
> write path). While this is necessary to read stored data, it is dangerous 
> when writing new data, because all newly added fields won't be serialized. 
> This subtlety is really hard for the user to detect. Therefore, I think we 
> should eagerly fail if the newly loaded type contains new fields which 
> haven't been present before.
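> For illustration, a hedged sketch of the failure mode (class and field names are made up; V1/V2 stand for the same POJO before and after the change):
> {code:java}
> // V1: shape of the POJO when the state was originally written.
> public class EventV1 {
>     public long id;
>     public String name;
> }
>
> // V2: the field "payload" is added later. With the behaviour described above,
> // a PojoSerializer restored from old state ignores "payload" on read *and*
> // write, so newly written records silently drop it instead of failing fast.
> public class EventV2 {
>     public long id;
>     public String name;
>     public String payload;
> }
> {code}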



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-24349) Support customized Calalogs via JDBC

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-24349:
---
  Labels: auto-deprioritized-major pull-request-available  (was: 
pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Support customized Calalogs via JDBC
> 
>
> Key: FLINK-24349
> URL: https://issues.apache.org/jira/browse/FLINK-24349
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Affects Versions: 1.15.0
>Reporter: Bo Cui
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
>
> Support customized catalogs in flink-connector-jdbc



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-20696) Yarn Session Blob Directory is not deleted.

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-20696:
---
Labels: auto-deprioritized-major stale-minor  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Yarn Session Blob Directory is not deleted.
> ---
>
> Key: FLINK-20696
> URL: https://issues.apache.org/jira/browse/FLINK-20696
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.3, 1.12.0, 1.13.0
>Reporter: Ada Wong
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
> Attachments: image-2020-12-21-16-47-37-278.png
>
>
> This job is finished, but its blob directory is not deleted.
> This problem occurs with a small probability when I submit many jobs.
>   !image-2020-12-21-16-47-37-278.png!



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-14173) ANSI-style JOIN with Temporal Table Function fails

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-14173:
---
Labels: auto-deprioritized-major stale-minor  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> ANSI-style JOIN with Temporal Table Function fails
> --
>
> Key: FLINK-14173
> URL: https://issues.apache.org/jira/browse/FLINK-14173
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0
> Environment: Java 1.8, Scala 2.11, Flink 1.9 (pom.xml file attached)
>Reporter: Benoît Paris
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
> Attachments: flink-test-temporal-tables-1.9.zip
>
>
> The planner fails to generate a plan for ANSI-style joins with Temporal Table 
> Functions. The Blink planner throws a "Missing conversion is 
> LogicalTableFunctionScan[convention: NONE -> LOGICAL]" message (and some very 
> fancy graphviz output). The old planner fails with "This exception indicates that 
> the query uses an unsupported SQL feature."
> This fails:
> {code:java}
>  SELECT 
>o_amount * r_amount AS amount 
>  FROM Orders 
>  JOIN LATERAL TABLE (Rates(o_proctime)) 
>ON r_currency = o_currency {code}
> This works:
> {code:java}
>  SELECT 
>o_amount * r_amount AS amount 
>  FROM Orders 
> , LATERAL TABLE (Rates(o_proctime)) 
>  WHERE r_currency = o_currency{code}
> Reproduction with the attached Java and pom.xml files. Also included: stack 
> traces for both Blink and the old planner.
> I think this is a regression. I remember using ANSI-style joins with a 
> temporal table function successfully in 1.8.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-6969) Add support for deferred computation for group window aggregates

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-6969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-6969:
--
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
pull-request-available  (was: auto-deprioritized-major auto-unassigned 
pull-request-available stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Add support for deferred computation for group window aggregates
> 
>
> Key: FLINK-6969
> URL: https://issues.apache.org/jira/browse/FLINK-6969
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Fabian Hueske
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned, pull-request-available
> Attachments: screenshot-1.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Deferred computation is a strategy to deal with late arriving data and avoid 
> updates of previous results. Instead of computing a result as soon as it is 
> possible (i.e., when a corresponding watermark was received), deferred 
> computation adds a configurable amount of slack time in which late data is 
> accepted before the result is computed. For example, instead of computing a 
> tumbling window of 1 hour at each full hour, we can add a deferred 
> computation interval of 15 minutes to compute the result a quarter past each 
> full hour.
> This approach adds latency but can reduce the number of updates, especially in use 
> cases where the user cannot influence the generation of watermarks. It is 
> also useful if the data is emitted to a system that cannot update results 
> (files or Kafka). The deferred computation interval should be configured via 
> the {{QueryConfig}}.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-21153) yarn-per-job deployment target ignores yarn options

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-21153:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
usability  (was: auto-deprioritized-major auto-unassigned stale-minor usability)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> yarn-per-job deployment target ignores yarn options
> ---
>
> Key: FLINK-21153
> URL: https://issues.apache.org/jira/browse/FLINK-21153
> Project: Flink
>  Issue Type: Bug
>  Components: Command Line Client, Deployment / YARN
>Affects Versions: 1.12.1, 1.13.0
>Reporter: Till Rohrmann
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned, usability
>
> While looking into the problem reported in FLINK-6949, I stumbled across an 
> odd behaviour of Flink. I tried to deploy a Flink cluster on Yarn and ship 
> some files to the cluster. Only the first command successfully shipped the 
> additional files to the cluster:
> 1) {{bin/flink run -p 1 --yarnship ../flink-test-job/cluster -m yarn-cluster 
> ../flink-test-job/target/flink-test-job-1.0-SNAPSHOT.jar}}
> 2) {{bin/flink run -p 1 --yarnship ../flink-test-job/cluster -t yarn-per-job 
> ../flink-test-job/target/flink-test-job-1.0-SNAPSHOT.jar}} 
> The problem seems to be that the second command does not activate the 
> {{FlinkYarnSessionCli}} but uses the {{GenericCLI}}.
> [~kkl0u], [~aljoscha], [~tison] what is the intended behaviour in this case. 
> I always thought that {{-m yarn-cluster}} and {{-t yarn-per-job}} would be 
> equivalent.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-22551) checkpoints: strange behaviour

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22551:
---
Labels: auto-deprioritized-critical auto-deprioritized-major stale-minor  
(was: auto-deprioritized-critical auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> checkpoints: strange behaviour 
> ---
>
> Key: FLINK-22551
> URL: https://issues.apache.org/jira/browse/FLINK-22551
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.13.0
> Environment: {code:java}
>  java -version
> openjdk version "11.0.2" 2019-01-15
> OpenJDK Runtime Environment 18.9 (build 11.0.2+9)
> OpenJDK 64-Bit Server VM 18.9 (build 11.0.2+9, mixed mode)
> {code}
>Reporter: buom
>Priority: Minor
>  Labels: auto-deprioritized-critical, auto-deprioritized-major, 
> stale-minor
>
> * +*Case 1*:+ Work as expected
> {code:java}
> public class Example {
> public static class ExampleSource extends RichSourceFunction
> implements CheckpointedFunction {
> private volatile boolean isRunning = true;
> @Override
> public void open(Configuration parameters) throws Exception {
> System.out.println("[source] invoke open()");
> }
> @Override
> public void close() throws Exception {
> isRunning = false;
> System.out.println("[source] invoke close()");
> }
> @Override
> public void run(SourceContext ctx) throws Exception {
> System.out.println("[source] invoke run()");
> while (isRunning) {
> ctx.collect("Flink");
> Thread.sleep(500);
> }
> }
> @Override
> public void cancel() {
> isRunning = false;
> System.out.println("[source] invoke cancel()");
> }
> @Override
> public void snapshotState(FunctionSnapshotContext context) throws 
> Exception {
> System.out.println("[source] invoke snapshotState()");
> }
> @Override
> public void initializeState(FunctionInitializationContext context) 
> throws Exception {
> System.out.println("[source] invoke initializeState()");
> }
> }
> public static class ExampleSink extends PrintSinkFunction
> implements CheckpointedFunction {
> @Override
> public void snapshotState(FunctionSnapshotContext context) throws 
> Exception {
> System.out.println("[sink] invoke snapshotState()");
> }
> @Override
> public void initializeState(FunctionInitializationContext context) 
> throws Exception {
> System.out.println("[sink] invoke initializeState()");
> }
> }
> public static void main(String[] args) throws Exception {
> final StreamExecutionEnvironment env =
> 
> StreamExecutionEnvironment.getExecutionEnvironment().enableCheckpointing(1000);
> DataStream stream = env.addSource(new ExampleSource());
> stream.addSink(new ExampleSink()).setParallelism(1);
> env.execute();
> }
> }
> {code}
> {code:java}
> $ java -jar ./example.jar
> [sink] invoke initializeState()
> [source] invoke initializeState()
> [source] invoke open()
> [source] invoke run()
> Flink
> [sink] invoke snapshotState()
> [source] invoke snapshotState()
> Flink
> Flink
> [sink] invoke snapshotState()
> [source] invoke snapshotState()
> Flink
> Flink
> [sink] invoke snapshotState()
> [source] invoke snapshotState()
> ^C
> {code}
>  * *+Case 2:+* Run as unexpected (w/ _parallelism = 1_)
> {code:java}
> public class Example {
> public static class ExampleSource extends RichSourceFunction
> implements CheckpointedFunction {
> private volatile boolean isRunning = true;
> @Override
> public void open(Configuration parameters) throws Exception {
> System.out.println("[source] invoke open()");
> }
> @Override
> public void close() throws Exception {
> isRunning = false;
> System.out.println("[source] invoke close()");
> }
> @Override
> public void run(SourceContext ctx) throws Exception {
> System.out.println("[source] invoke run()");
> while (isRunning) {
> ctx.collect("Flink");
> 

[jira] [Updated] (FLINK-22075) Incorrect null outputs in left join

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22075:
---
Labels: auto-deprioritized-critical auto-deprioritized-major 
auto-unassigned stale-minor  (was: auto-deprioritized-critical 
auto-deprioritized-major auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Incorrect null outputs in left join
> ---
>
> Key: FLINK-22075
> URL: https://issues.apache.org/jira/browse/FLINK-22075
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.2
> Environment: 
> https://github.com/jamii/streaming-consistency/blob/4e5d144dacf85e512bdc7afd77d031b5974d733e/pkgs.nix#L25-L46
> ```
> [nix-shell:~/streaming-consistency/flink]$ java -version
> openjdk version "1.8.0_265"
> OpenJDK Runtime Environment (build 1.8.0_265-ga)
> OpenJDK 64-Bit Server VM (build 25.265-bga, mixed mode)
> [nix-shell:~/streaming-consistency/flink]$ flink --version
> Version: 1.12.2, Commit ID: 4dedee0
> [nix-shell:~/streaming-consistency/flink]$ nix-info
> system: "x86_64-linux", multi-user?: yes, version: nix-env (Nix) 2.3.10, 
> channels(jamie): "", channels(root): "nixos-20.09.3554.f8929dce13e", nixpkgs: 
> /nix/var/nix/profiles/per-user/root/channels/nixos
> ```
>Reporter: Jamie Brandon
>Priority: Minor
>  Labels: auto-deprioritized-critical, auto-deprioritized-major, 
> auto-unassigned, stale-minor
>
> I'm left joining a table with itself 
> [here](https://github.com/jamii/streaming-consistency/blob/4e5d144dacf85e512bdc7afd77d031b5974d733e/flink/src/main/java/Demo.java#L55-L66).
>  The output should have no nulls, or at least emit nulls and then retract 
> them. Instead I see:
> ```
> jamie@machine:~/streaming-consistency/flink$ wc -l tmp/outer_join_with_time
> 10 tmp/outer_join_with_time
> jamie@machine:~/streaming-consistency/flink$ grep -c insert 
> tmp/outer_join_with_time
> 10
> jamie@machine:~/streaming-consistency/flink$ grep -c 'null' 
> tmp/outer_join_with_time
> 16943
> ```
> ~1.7% of the outputs are incorrect and never retracted.
> [Full 
> output](https://gist.githubusercontent.com/jamii/983fee41609b1425fe7fa59d3249b249/raw/069b9dcd4faf9f6113114381bc7028c6642ca787/gistfile1.txt)
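
As a rough illustration of the shape of such a query (table and column names are made up, not taken from the linked Demo.java), a left self-join consumed as a retract stream might look like this; null-padded left-join results are expected to be retracted once the matching right-side row arrives:

{code:java}
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

public class LeftSelfJoinSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(
                env, EnvironmentSettings.newInstance().inStreamingMode().build());

        // 'transactions' stands in for the reporter's registered input table.
        Table joined = tEnv.sqlQuery(
                "SELECT l.id, l.other_id, r.other_id "
                        + "FROM transactions AS l "
                        + "LEFT JOIN transactions AS r ON l.other_id = r.id");

        // Retract stream: (true, row) = insert, (false, row) = retraction.
        tEnv.toRetractStream(joined, Row.class).print();
        env.execute("left-self-join-sketch");
    }
}
{code}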



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-18717) reuse MiniCluster in table integration test class ?

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18717:
---
Labels: auto-deprioritized-major stale-minor  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> reuse MiniCluster in table integration test class ? 
> 
>
> Key: FLINK-18717
> URL: https://issues.apache.org/jira/browse/FLINK-18717
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.11.0
>Reporter: godfrey he
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
>
> Before 1.11, the {{MiniCluster}} could be reused across test cases in each integration test class 
> (see TestStreamEnvironment#setAsContext). 
> In 1.11, after we corrected the execution behavior of TableEnvironment, 
> StreamTableEnvironment and BatchTableEnvironment (see 
> [FLINK-16363|https://issues.apache.org/jira/browse/FLINK-16363], 
> [FLINK-17126|https://issues.apache.org/jira/browse/FLINK-17126]), a MiniCluster 
> is created for each test case even within the same test class (see 
> {{org.apache.flink.client.deployment.executors.LocalExecutor}}). It would be better 
> if we could reuse the {{MiniCluster}} as before. One approach is to provide a new 
> kind of MiniCluster factory (such as a SessionMiniClusterFactory) instead of 
> using {{PerJobMiniClusterFactory}}. WDYT?
>   
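
For context, DataStream integration tests usually share one cluster per test class via MiniClusterWithClientResource; a minimal JUnit 4 sketch with illustrative settings (the ticket is about getting the same kind of reuse for jobs submitted through the Table API's LocalExecutor):

{code:java}
import org.apache.flink.runtime.testutils.MiniClusterResourceConfiguration;
import org.apache.flink.test.util.MiniClusterWithClientResource;
import org.junit.ClassRule;

public class MyTableITCase {

    // One MiniCluster shared by all test methods of this class (JUnit 4).
    @ClassRule
    public static final MiniClusterWithClientResource MINI_CLUSTER =
            new MiniClusterWithClientResource(
                    new MiniClusterResourceConfiguration.Builder()
                            .setNumberTaskManagers(1)
                            .setNumberSlotsPerTaskManager(4)
                            .build());

    // Test methods would submit their jobs against this shared cluster.
}
{code}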



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-16320) Can not use sub-queries in the VALUES clause

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-16320:
---
Labels: auto-deprioritized-major stale-minor  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Can not use sub-queries in the VALUES clause 
> -
>
> Key: FLINK-16320
> URL: https://issues.apache.org/jira/browse/FLINK-16320
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.11.0
>Reporter: Dawid Wysakowicz
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
>
> {code}
> StreamExecutionEnvironment sEnv = 
> StreamExecutionEnvironment.getExecutionEnvironment();
> StreamTableEnvironment tableEnvironment = StreamTableEnvironment.create(
>   sEnv,
>   EnvironmentSettings.newInstance().useBlinkPlanner().build());
> Table table = tableEnvironment.sqlQuery("SELECT * FROM (VALUES(1), (SELECT 
> 1))");
> tableEnvironment.toRetractStream(table, Row.class).print();
> System.out.println(tableEnvironment.explain(table));
> {code}
> Produces:
> {code}
> == Optimized Logical Plan ==
> Union(all=[true], union=[EXPR$0])
> :- Calc(select=[CAST(1) AS EXPR$0])
> :  +- Values(type=[RecordType(INTEGER ZERO)], tuples=[[{ 0 }]])
> +- Calc(select=[$f0 AS EXPR$0])
>+- Join(joinType=[LeftOuterJoin], where=[true], select=[ZERO, $f0], 
> leftInputSpec=[NoUniqueKey], rightInputSpec=[JoinKeyContainsUniqueKey])
>   :- Exchange(distribution=[single])
>   :  +- Values(type=[RecordType(INTEGER ZERO)], tuples=[[{ 0 }]], 
> reuse_id=[1])
>   +- Exchange(distribution=[single])
>  +- GroupAggregate(select=[SINGLE_VALUE(EXPR$0) AS $f0])
> +- Exchange(distribution=[single])
>+- Calc(select=[1 AS EXPR$0])
>   +- Reused(reference_id=[1])
> {code}
> which is wrong.
> Legacy planner fails with:
> {code}
> validated type:
> RecordType(INTEGER EXPR$0) NOT NULL
> converted type:
> RecordType(INTEGER NOT NULL EXPR$0) NOT NULL
> rel:
> LogicalProject(EXPR$0=[$0])
>   LogicalUnion(all=[true])
> LogicalProject(EXPR$0=[1])
>   LogicalValues(tuples=[[{ 0 }]])
> LogicalAggregate(group=[{}], agg#0=[SINGLE_VALUE($0)])
>   LogicalProject(EXPR$0=[1])
> LogicalValues(tuples=[[{ 0 }]])
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-22004) Translate Flink Roadmap to Chinese.

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22004:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
pull-request-available  (was: auto-deprioritized-major auto-unassigned 
pull-request-available stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Translate Flink Roadmap to Chinese.
> ---
>
> Key: FLINK-22004
> URL: https://issues.apache.org/jira/browse/FLINK-22004
> Project: Flink
>  Issue Type: New Feature
>  Components: Documentation
>Reporter: Yuan Mei
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned, pull-request-available
> Attachments: Screen Shot 2021-04-11 at 10.24.02 PM.png
>
>
> https://flink.apache.org/roadmap.html



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-10737) FlinkKafkaProducerITCase.testScaleDownBeforeFirstCheckpoint failed on Travis

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-10737:
---
Labels: stale-assigned test-stability  (was: test-stability)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 30 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it.


> FlinkKafkaProducerITCase.testScaleDownBeforeFirstCheckpoint failed on Travis
> 
>
> Key: FLINK-10737
> URL: https://issues.apache.org/jira/browse/FLINK-10737
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.7.0, 1.8.0, 1.12.5, 1.14.1
>Reporter: Till Rohrmann
>Assignee: Fabian Paul
>Priority: Critical
>  Labels: stale-assigned, test-stability
>
> The {{FlinkKafkaProducerITCase.testScaleDownBeforeFirstCheckpoint}} failed on 
> Travis:
> https://api.travis-ci.org/v3/job/448781612/log.txt



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-14297) Temporal Table Function Build Side does not accept a constant key

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-14297:
---
Labels: auto-deprioritized-major stale-minor  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Temporal Table Function Build Side does not accept a constant key
> -
>
> Key: FLINK-14297
> URL: https://issues.apache.org/jira/browse/FLINK-14297
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0
> Environment: Java 1.8, Scala 2.11, Flink 1.9 (pom.xml file attached)
>Reporter: Benoît Paris
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
> Attachments: flink-test-temporal-constant-key-build-side.zip
>
>
> When defining a table that will be used as the build side on a Temporal Table 
> Function, a constant key will not be accepted:
> In:
> {code:java}
> Table ratesHistory = tEnv.sqlQuery(sql);
> TemporalTableFunction rates = 
> ratesHistory.createTemporalTableFunction("r_proctime", "r_currency");
> {code}
>  This crashes: 
> {code:java}
> SELECT 
>  'Eur1' AS r_currency,
>  r_amount, 
>  r_proctime 
> FROM RatesHistory{code}
>  Making a type verification in Calcite fail: 
> RelOptUtil.verifyTypeEquivalence, when trying to join the Lateral Table 
> Function. It seems like this is a corner case in nullability, the error is:  
> {code:java}
> (Blink) 
> Apply rule [LogicalCorrelateToJoinFromTemporalTableFunctionRule] [...]
> (old planner) 
> Apply rule [LogicalCorrelateToTemporalTableJoinRule] [...]
> Exception in thread "main" java.lang.AssertionError: Cannot add expression of 
> different type to set:
> set type is RecordType(
>   [...] VARCHAR(65536) CHARACTER SET "UTF-16LE"  r_currency, 
> [...]) NOT NULL
> expression type is RecordType(
>   [...] CHAR(4)CHARACTER SET "UTF-16LE" NOT NULL r_currency, 
> [...]) NOT NULL{code}
>  (formatting and commenting mine)
> No problem in VARCHAR vs CHAR, as using the following works: 
> {code:java}
> SELECT 
>  COALESCE('Eur1', r_currency) AS r_currency, 
>  r_amount, 
>  r_proctime 
> FROM RatesHistory{code}
>  The problem is coming from nullable vs NOT NULL
> Attached is Java reproduction code, pom.xml, and both blink and old planner 
> logs and stacktraces.
> 
> My speculation is that an earlier transformation infers and 
> normalizes the key type (or maybe gets it from the query side?), but the 
> decorrelation and the special temporal table function case happen later.
> Reordering the rules could help? Maybe that is way too heavy-handed.
> Or do this 
> [rexBuilder.makeInputRef|https://github.com/apache/flink/blob/master/flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/LogicalCorrelateToJoinFromTemporalTableFunctionRule.scala#L145]
>  in a type-compatible way.
> 
> This seems to be related to another issue:
> https://issues.apache.org/jira/browse/FLINK-14173
> Where careful support of the nullability of the build-side key in a LEFT 
> JOIN will play a part in the output.
> 
> This might seem like a useless use case, but a constant key is the only way 
> to access a Temporal Table Function in SQL for a global value (like querying 
> a global current number)
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-15826) Add renameFunction() to Catalog

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-15826:
---
Labels: auto-deprioritized-major auto-deprioritized-minor 
pull-request-available stale-assigned  (was: auto-deprioritized-major 
auto-deprioritized-minor pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 30 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it.


> Add renameFunction() to Catalog
> ---
>
> Key: FLINK-15826
> URL: https://issues.apache.org/jira/browse/FLINK-15826
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.11.0
>Reporter: Fabian Hueske
>Assignee: Shen Zhu
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> pull-request-available, stale-assigned
>
> The {{Catalog}} interface lacks a method to rename a function.
> It is possible to change all properties (via {{alterFunction()}}) but it is 
> not possible to rename a function.
> A {{renameTable()}} method already exists.
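
A minimal sketch of what the missing method might look like, mirroring the existing renameTable() style (the exact signature is an assumption, not an agreed API):

{code:java}
import org.apache.flink.table.catalog.Catalog;
import org.apache.flink.table.catalog.ObjectPath;
import org.apache.flink.table.catalog.exceptions.CatalogException;
import org.apache.flink.table.catalog.exceptions.FunctionAlreadyExistException;
import org.apache.flink.table.catalog.exceptions.FunctionNotExistException;

// Hypothetical extension; the final method would live on Catalog itself.
public interface CatalogWithRenameFunction extends Catalog {

    /** Renames an existing catalog function, mirroring renameTable(). */
    void renameFunction(ObjectPath functionPath, String newFunctionName, boolean ignoreIfNotExists)
            throws FunctionNotExistException, FunctionAlreadyExistException, CatalogException;
}
{code}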



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-22517) Fix pickle compatibility problem in different Python versions

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22517:
---
Labels: auto-deprioritized-major auto-unassigned stale-minor  (was: 
auto-deprioritized-major auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Fix pickle compatibility problem in different Python versions
> -
>
> Key: FLINK-22517
> URL: https://issues.apache.org/jira/browse/FLINK-22517
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.13.0, 1.12.3
>Reporter: Huang Xingbo
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, stale-minor
>
> Since release-1.12, PyFlink has supported Python 3.8. Starting from Python 
> 3.8, the default protocol version used by pickle is 
> pickle5 (https://www.python.org/dev/peps/pep-0574/), which will raise the 
> following exception if the client uses Python 3.8 to compile the program and the 
> cluster node uses Python 3.7 or Python 3.6 to run the Python UDF:
> {code:python}
> ValueError: unsupported pickle protocol: 5
> {code}
> The workaround is to first let the python version used by the client be 3.6 
> or 3.7. For how to specify the client-side python execution environment, 
> please refer to the 
> doc(https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/python/python_config.html#python-client-executable).



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-22190) no guarantee on Flink exactly_once sink to Kafka

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22190:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor  (was: 
auto-deprioritized-major stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> no guarantee on Flink exactly_once sink to Kafka 
> -
>
> Key: FLINK-22190
> URL: https://issues.apache.org/jira/browse/FLINK-22190
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.12.2
> Environment: *flink: 1.12.2*
> *kafka: 2.7.0*
>Reporter: Spongebob
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> When I tried to test Flink's exactly_once sink to Kafka, I found it does not 
> behave as expected. Here's the pipeline of the Flink 
> applications: raw data (flink app0) -> kafka topic1 -> flink app1 -> kafka 
> topic2 -> flink app2. The Flink tasks may hit a "/ by zero" ArithmeticException 
> at random. Below is the code:
> {code:java}
> //代码占位符
> raw data, flink app0:
> class SimpleSource1 extends SourceFunction[String] {
>  var switch = true
>  val students: Array[String] = Array("Tom", "Jerry", "Gory")
>  override def run(sourceContext: SourceFunction.SourceContext[String]): Unit 
> = {
>  var i = 0
>  while (switch) {
>  sourceContext.collect(s"${students(Random.nextInt(students.length))},$i")
>  i += 1
>  Thread.sleep(5000)
>  }
>  }
>  override def cancel(): Unit = switch = false
> }
> val streamEnv = StreamExecutionEnvironment.getExecutionEnvironment
> val dataStream = streamEnv.addSource(new SimpleSource1)
> dataStream.addSink(new FlinkKafkaProducer[String]("xfy:9092", 
> "single-partition-topic-2", new SimpleStringSchema()))
> streamEnv.execute("sink kafka")
>  
> flink-app1:
> val streamEnv = StreamExecutionEnvironment.getExecutionEnvironment
> streamEnv.enableCheckpointing(1000, CheckpointingMode.EXACTLY_ONCE)
> val prop = new Properties()
> prop.setProperty("bootstrap.servers", "xfy:9092")
> prop.setProperty("group.id", "test")
> val dataStream = streamEnv.addSource(new FlinkKafkaConsumer[String](
>  "single-partition-topic-2",
>  new SimpleStringSchema,
>  prop
> ))
> val resultStream = dataStream.map(x => {
>  val data = x.split(",")
>  (data(0), data(1), data(1).toInt / Random.nextInt(5)).toString()
> }
> )
> resultStream.print().setParallelism(1)
> val propProducer = new Properties()
> propProducer.setProperty("bootstrap.servers", "xfy:9092")
> propProducer.setProperty("transaction.timeout.ms", s"${1000 * 60 * 5}")
> resultStream.addSink(new FlinkKafkaProducer[String](
>  "single-partition-topic",
>  new MyKafkaSerializationSchema("single-partition-topic"),
>  propProducer,
>  Semantic.EXACTLY_ONCE))
> streamEnv.execute("sink kafka")
>  
> flink-app2:
> val streamEnv = StreamExecutionEnvironment.getExecutionEnvironment
> val prop = new Properties()
> prop.setProperty("bootstrap.servers", "xfy:9092")
> prop.setProperty("group.id", "test")
> prop.setProperty("isolation_level", "read_committed")
> val dataStream = streamEnv.addSource(new FlinkKafkaConsumer[String](
>  "single-partition-topic",
>  new SimpleStringSchema,
>  prop
> ))
> dataStream.print().setParallelism(1)
> streamEnv.execute("consumer kafka"){code}
>  
> flink app1 will print some duplicate numbers; I expected flink app2 to 
> deduplicate them, but in fact it does not.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-21877) Add E2E test for upsert-kafka connector

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-21877:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
 (was: auto-deprioritized-major auto-unassigned stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Add E2E test for upsert-kafka connector
> ---
>
> Key: FLINK-21877
> URL: https://issues.apache.org/jira/browse/FLINK-21877
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Reporter: Shengkai Fang
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-6857) Add global default Kryo serializer configuration to StreamExecutionEnvironment

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-6857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-6857:
--
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
pull-request-available  (was: auto-deprioritized-major auto-unassigned 
pull-request-available stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Add global default Kryo serializer configuration to StreamExecutionEnvironment
> --
>
> Key: FLINK-6857
> URL: https://issues.apache.org/jira/browse/FLINK-6857
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Type Serialization System, Runtime / Configuration
>Reporter: Tzu-Li (Gordon) Tai
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned, pull-request-available
>
> See ML for original discussion: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/KryoException-Encountered-unregistered-class-ID-td13476.html.
> We should have an additional {{setDefaultKryoSerializer}} method that allows 
> overriding the global default serializer that is not tied to specific classes 
> (out-of-the-box Kryo uses the {{FieldSerializer}} if no matches for default 
> serializer settings can be found for a class). Internally in Flink's 
> {{KryoSerializer}}, this would only be a matter of proxying that configured 
> global default serializer for Kryo by calling 
> {{Kryo.setDefaultSerializer(...)}} on the created Kryo instance.
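
A rough sketch of the proxying mentioned above, using plain Kryo API (Kryo.setDefaultSerializer is standard Kryo; the Flink-side ExecutionConfig hook is the part being proposed and is only assumed here):

{code:java}
import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.serializers.TaggedFieldSerializer;

public class GlobalDefaultSerializerSketch {
    public static void main(String[] args) {
        Kryo kryo = new Kryo();
        // Out of the box Kryo falls back to FieldSerializer for unregistered classes.
        // The proposal is for Flink's KryoSerializer to apply a user-configured class
        // like this on every Kryo instance it creates.
        kryo.setDefaultSerializer(TaggedFieldSerializer.class);
    }
}
{code}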



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-6931) Support custom compression formats for checkpoints (+Upgrade/Compatibility)

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-6931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-6931:
--
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
 (was: auto-deprioritized-major auto-unassigned stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Support custom compression formats for checkpoints (+Upgrade/Compatibility)
> ---
>
> Key: FLINK-6931
> URL: https://issues.apache.org/jira/browse/FLINK-6931
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing
>Reporter: Stefan Richter
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned
>
> With FLINK-6773, we introduced optional snappy compression for keyed state in 
> full checkpoints and savepoints. We should offer users a way to register 
> their own compression formats with the {{ExecutionConfig}}. For this, we 
> should also have a compatibility story, very similar to what 
> {{TypeSerializerConfigSnapshot}} does for type serializers.
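
For reference, the snappy compression from FLINK-6773 is toggled through the ExecutionConfig today; a minimal sketch of that existing switch (the pluggable-format registration API itself is still to be designed):

{code:java}
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SnapshotCompressionToggle {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Toggles the built-in snappy compression for keyed state in full
        // checkpoints/savepoints; a pluggable format would hook in at this level.
        env.getConfig().setUseSnapshotCompression(true);
    }
}
{code}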



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-22128) Window aggregation should have unique keys

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22128:
---
Labels: auto-deprioritized-major auto-unassigned stale-minor  (was: 
auto-deprioritized-major auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Window aggregation should have unique keys
> --
>
> Key: FLINK-22128
> URL: https://issues.apache.org/jira/browse/FLINK-22128
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Jingsong Lee
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, stale-minor
>
> We should add a matching method in {{FlinkRelMdUniqueKeys}} for 
> {{StreamPhysicalWindowAggregate}}.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-24731) Add a blank space for TransitiveClosureNaive

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-24731:
---
Labels: pull-request-available stale-major  (was: pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 60 days. I have gone ahead and added a "stale-major" label to the issue. If this 
ticket is still Major, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Add a blank space for TransitiveClosureNaive
> 
>
> Key: FLINK-24731
> URL: https://issues.apache.org/jira/browse/FLINK-24731
> Project: Flink
>  Issue Type: Improvement
>  Components: Examples
>Affects Versions: 1.13.3
>Reporter: liyunchao
>Priority: Major
>  Labels: pull-request-available, stale-major
>
> In 
> flink-examples/flink-examples-batch/src/main/scala/org/apache/flink/examples/scala/graph/TransitiveClosureNaive.scala
> Before:
> {code:java}
> (left, right) => (left._1,right._2)
> {code}
> After:
> {code:java}
> (left, right) => (left._1, right._2)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-24745) Add support for Oracle OGG json parser in flink-json module

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-24745:
---
Labels: pull-request-available stale-assigned  (was: pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 30 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it.


> Add support for Oracle OGG json parser in flink-json module
> ---
>
> Key: FLINK-24745
> URL: https://issues.apache.org/jira/browse/FLINK-24745
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Steven
>Assignee: Steven
>Priority: Major
>  Labels: pull-request-available, stale-assigned
>
> The data written to [kafka via Oracle OGG 
> CDC|https://docs.oracle.com/en/middleware/goldengate/big-data/19.1/gadbd/using-kafka-handler.html#GUID-2561CA12-9BAC-454B-A2E3-2D36C5C60EE5]
>  is not parsed by the current JSON parsers (json, debezium); add a module that 
> can parse the OGG JSON format.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-17378) KafkaProducerExactlyOnceITCase>KafkaProducerTestBase.testExactlyOnceCustomOperator unstable

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-17378:
---
Labels: auto-deprioritized-major stale-assigned test-stability  (was: 
auto-deprioritized-major test-stability)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 30 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it.


> KafkaProducerExactlyOnceITCase>KafkaProducerTestBase.testExactlyOnceCustomOperator
>  unstable
> ---
>
> Key: FLINK-17378
> URL: https://issues.apache.org/jira/browse/FLINK-17378
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.11.0, 1.12.1
>Reporter: Robert Metzger
>Assignee: Fabian Paul
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-assigned, test-stability
>
> CI run: 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=221=logs=c5f0071e-1851-543e-9a45-9ac140befc32=684b1416-4c17-504e-d5ab-97ee44e08a20
> {code}
> 2020-04-25T00:41:01.4191956Z 00:41:01,418 [Source: Custom Source -> Map -> 
> Sink: Unnamed (1/1)] INFO  
> org.apache.flink.streaming.connectors.kafka.internal.FlinkKafkaInternalProducer
>  [] - Flushing new partitions
> 2020-04-25T00:41:01.4194268Z 00:41:01,418 [FailingIdentityMapper Status 
> Printer] INFO  
> org.apache.flink.streaming.connectors.kafka.testutils.FailingIdentityMapper 
> [] - > Failing mapper  0: count=690, 
> totalCount=1000
> 2020-04-25T00:41:01.4589519Z 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2020-04-25T00:41:01.4590089Z  at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
> 2020-04-25T00:41:01.4590748Z  at 
> org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:659)
> 2020-04-25T00:41:01.4591524Z  at 
> org.apache.flink.streaming.util.TestStreamEnvironment.execute(TestStreamEnvironment.java:77)
> 2020-04-25T00:41:01.4592062Z  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1643)
> 2020-04-25T00:41:01.4592597Z  at 
> org.apache.flink.test.util.TestUtils.tryExecute(TestUtils.java:35)
> 2020-04-25T00:41:01.4593092Z  at 
> org.apache.flink.streaming.connectors.kafka.KafkaProducerTestBase.testExactlyOnce(KafkaProducerTestBase.java:370)
> 2020-04-25T00:41:01.4593680Z  at 
> org.apache.flink.streaming.connectors.kafka.KafkaProducerTestBase.testExactlyOnceCustomOperator(KafkaProducerTestBase.java:317)
> 2020-04-25T00:41:01.4594450Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-04-25T00:41:01.4595076Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-04-25T00:41:01.4595794Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-04-25T00:41:01.4596622Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-04-25T00:41:01.4597501Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-04-25T00:41:01.4598396Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-04-25T00:41:01.460Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-04-25T00:41:01.4603082Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-04-25T00:41:01.4604023Z  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2020-04-25T00:41:01.4604590Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2020-04-25T00:41:01.4605225Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-04-25T00:41:01.4605902Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-04-25T00:41:01.4606591Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-04-25T00:41:01.4607468Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-04-25T00:41:01.4608577Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-04-25T00:41:01.4609030Z  at 
> 

[jira] [Updated] (FLINK-21716)  Support higher precision for Data Type TIME(p)

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-21716:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
 (was: auto-deprioritized-major auto-unassigned stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


>  Support higher precision for Data Type TIME(p)
> ---
>
> Key: FLINK-21716
> URL: https://issues.apache.org/jira/browse/FLINK-21716
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Leonard Xu
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned
>
> For historical reasons, we only support TIME(3) so far; we could support 
> higher precision, e.g. TIME(9).
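
For illustration, a DDL that would rely on the higher precision once supported (table, fields and connector choice are made up; TIME(9) is expected to be rejected until this is implemented):

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class TimePrecisionSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());
        // TIME(3) works today; TIME(9) is the higher precision this ticket asks for.
        tEnv.executeSql(
                "CREATE TABLE events ("
                        + "  id BIGINT,"
                        + "  t3 TIME(3),"
                        + "  t9 TIME(9)"
                        + ") WITH ('connector' = 'datagen')");
    }
}
{code}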



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-24320) Show in the Job / Checkpoints / Configuration if checkpoints are incremental

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-24320:
---
Labels: auto-deprioritized-major beginner-friendly stale-assigned  (was: 
auto-deprioritized-major beginner-friendly)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 30 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it.


> Show in the Job / Checkpoints / Configuration if checkpoints are incremental
> 
>
> Key: FLINK-24320
> URL: https://issues.apache.org/jira/browse/FLINK-24320
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing, Runtime / Web Frontend
>Affects Versions: 1.13.2
>Reporter: Robert Metzger
>Assignee: Hangxiang Yu
>Priority: Minor
>  Labels: auto-deprioritized-major, beginner-friendly, 
> stale-assigned
> Attachments: image-2021-09-17-13-31-02-148.png, 
> image-2021-09-24-10-49-53-657.png
>
>
> It would be nice if the Configuration page would also show if incremental 
> checkpoints are enabled.
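
For context, whether checkpoints are incremental is currently decided when the state backend is configured; a minimal sketch assuming the RocksDB state backend (the only backend that supports incremental checkpoints):

{code:java}
import org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class IncrementalCheckpointsSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(10_000);
        // 'true' = incremental checkpoints; this is the flag the
        // Job / Checkpoints / Configuration tab could surface.
        env.setStateBackend(new EmbeddedRocksDBStateBackend(true));
    }
}
{code}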



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-21583) Allow comments in CSV format without having to ignore parse errors

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-21583:
---
Labels: auto-deprioritized-major auto-deprioritized-minor stale-assigned  
(was: auto-deprioritized-major auto-deprioritized-minor)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 30 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it.


> Allow comments in CSV format without having to ignore parse errors
> --
>
> Key: FLINK-21583
> URL: https://issues.apache.org/jira/browse/FLINK-21583
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / Ecosystem
>Affects Versions: 1.12.1
>Reporter: Nico Kruber
>Assignee: liwei li
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> stale-assigned
>
> Currently, when you pass {{'csv.allow-comments' = 'true'}} to a table 
> definition, you also have to set {{'csv.ignore-parse-errors' = 'true'}} to 
> actually skip the commented-out line (and the docs mention this prominently 
> as well). This, however, may mask actual parsing errors that you want to be 
> notified of.
> I would like to propose that {{allow-comments}} actually also skips the 
> commented-out lines automatically because these shouldn't be used anyway.
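
A minimal sketch of the table definition in question (schema, connector and path are made up), showing the current need to combine both options:

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class CsvCommentsSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inBatchMode().build());
        // Today 'csv.allow-comments' only takes effect together with
        // 'csv.ignore-parse-errors', which can hide genuine parse errors.
        tEnv.executeSql(
                "CREATE TABLE input ("
                        + "  id BIGINT,"
                        + "  name STRING"
                        + ") WITH ("
                        + "  'connector' = 'filesystem',"
                        + "  'path' = '/tmp/input.csv',"
                        + "  'format' = 'csv',"
                        + "  'csv.allow-comments' = 'true',"
                        + "  'csv.ignore-parse-errors' = 'true'"
                        + ")");
    }
}
{code}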



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-22705) SQL Client end-to-end test (Old planner) Elasticsearch (v7.5.1) failed due to fail to download the tar

2022-01-02 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22705:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor test-stability  
(was: auto-deprioritized-major stale-minor test-stability)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> SQL Client end-to-end test (Old planner) Elasticsearch (v7.5.1) failed due to 
> fail to download the tar
> --
>
> Key: FLINK-22705
> URL: https://issues.apache.org/jira/browse/FLINK-22705
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.13.0
>Reporter: Guowei Ma
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18100=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=ff888d9b-cd34-53cc-d90f-3e446d355529=18408
> {code:java}
> May 18 17:24:23 Preparing Elasticsearch (version=7)...
> May 18 17:24:23 Downloading Elasticsearch from 
> https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-oss-7.5.1-no-jdk-linux-x86_64.tar.gz
>  ...
> --2021-05-18 17:24:23--  
> https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-oss-7.5.1-no-jdk-linux-x86_64.tar.gz
> Resolving artifacts.elastic.co (artifacts.elastic.co)... 34.120.127.130, 
> 2600:1901:0:1d7::
> Connecting to artifacts.elastic.co 
> (artifacts.elastic.co)|34.120.127.130|:443... failed: Connection timed out.
> Connecting to artifacts.elastic.co 
> (artifacts.elastic.co)|2600:1901:0:1d7::|:443... failed: Network is 
> unreachable.
>   % Total% Received % Xferd  Average Speed   TimeTime Time  
> Current
>  Dload  Upload   Total   SpentLeft  Speed
>   0 00 00 0  0  0 --:--:-- --:--:-- --:--:-- 
> 0curl: (7) Failed to connect to localhost port 9200: Connection refused
> May 18 17:26:34 [FAIL] Test script contains errors.
> May 18 17:26:34 Checking for errors...
> May 18 17:26:34 No errors in log files.
> May 18 17:26:34 Checking for exceptions...
> May 18 17:26:34 No exceptions in log files.
> May 18 17:26:34 Checking for non-empty .out files...
> grep: /home/vsts/work/_temp/debug_files/flink-logs/*.out: No such file or 
> directory
> May 18 17:26:34 No non-empty .out files.
> May 18 17:26:34 
> May 18 17:26:34 [FAIL] 'SQL Client end-to-end test (Old planner) 
> Elasticsearch (v7.5.1)' failed after 2 minutes and 36 seconds! Test exited 
> with exit code 1
> May 18 17:26:34
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25483) When FlinkSQL writes ES, it will not write and update the null value field

2022-01-02 Thread Jira


 [ 
https://issues.apache.org/jira/browse/FLINK-25483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

陈磊 updated FLINK-25483:
---
Issue Type: New Feature  (was: Improvement)

> When FlinkSQL writes ES, it will not write and update the null value field
> --
>
> Key: FLINK-25483
> URL: https://issues.apache.org/jira/browse/FLINK-25483
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Ecosystem
>Reporter: 陈磊
>Priority: Minor
>
> When using Flink SQL to consume from Kafka and write to ES, some fields are 
> sometimes missing, and the missing fields should not be written to or updated 
> in ES. How should this situation be handled?
> For example: the source data has 3 fields, a, b, c
> insert into table2
> select
> a,b,c
> from table1
> When b is null, only a and c should be written
> When c is null, only a and b should be written
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25502) eval method of Flink ScalerFunction only run one time

2022-01-02 Thread Spongebob (Jira)
Spongebob created FLINK-25502:
-

 Summary: eval method of Flink ScalerFunction only run one time
 Key: FLINK-25502
 URL: https://issues.apache.org/jira/browse/FLINK-25502
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.14.2
Reporter: Spongebob


Assume that there is a ScalarFunction named `id` whose eval method takes no 
arguments and returns an increasing int value on each call. Now I found that 
when I call the `id()` function in Flink SQL on a table that has 3 rows, the eval 
method was only called one time, so I got the same id value for each row. The SQL 
looks like 'SELECT f0, id() FROM T'.

So I decided to define one argument on the `eval` method. When I execute the SQL 
'SELECT f0, id(1) FROM T' I still got the same id value. But when I execute the SQL 
'SELECT f0, id(f0) FROM T' then I get the correct id values, because the 
eval method is now called three times.
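
For illustration, a minimal sketch of the kind of function described above (a reconstruction with assumed names and registration, not the reporter's actual code):

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.functions.ScalarFunction;

public class IdFunctionExample {

    // Reconstruction of the function described above: no arguments,
    // returns an increasing int value on each call of eval().
    public static class IdFunction extends ScalarFunction {
        private int counter = 0;

        public int eval() {
            return ++counter;
        }
    }

    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());
        tEnv.createTemporarySystemFunction("id", IdFunction.class);
        // 'T' stands in for the reporter's three-row table.
        tEnv.executeSql("SELECT f0, id() FROM T").print();
    }
}
{code}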



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #18243: [hotfix] remove unused LOG reference

2022-01-02 Thread GitBox


flinkbot edited a comment on pull request #18243:
URL: https://github.com/apache/flink/pull/18243#issuecomment-1002852907


   
   ## CI report:
   
   * 24633f54f74d939dfe75fb0d75e5b3d56d557fb9 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28729)
 
   * 612edb08f629881252b9fa46d7061a65ea24b78b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=28837)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



