[jira] [Commented] (FLINK-24456) Support bounded offset in the Kafka table connector

2021-11-13 Thread Haohui Mai (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17443250#comment-17443250
 ] 

Haohui Mai commented on FLINK-24456:


It just needs to be on par with the DataStream API.

> Support bounded offset in the Kafka table connector
> ---
>
> Key: FLINK-24456
> URL: https://issues.apache.org/jira/browse/FLINK-24456
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Reporter: Haohui Mai
>Assignee: ZhuoYu Chen
>Priority: Minor
>
> The {{setBounded}} API in the Kafka DataStream connector is particularly 
> useful when writing tests. Unfortunately, the Kafka table connector lacks 
> the same API.
> It would be good to have this API added.
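
For reference, a minimal sketch of the DataStream-side API referred to above (Flink 1.14 {{KafkaSource}}); the servers, topic and group id are placeholders, and the table connector currently has no equivalent option:

{code:java}
// Bounded reading with the DataStream Kafka connector: the source stops at the
// offsets that exist when the job starts, which is what makes tests terminate.
KafkaSource<String> source = KafkaSource.<String>builder()
        .setBootstrapServers("localhost:9092")              // placeholder
        .setTopics("input-topic")                            // placeholder
        .setGroupId("test-group")                            // placeholder
        .setStartingOffsets(OffsetsInitializer.earliest())
        .setValueOnlyDeserializer(new SimpleStringSchema())
        .setBounded(OffsetsInitializer.latest())             // no table connector equivalent yet
        .build();
{code}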



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #17684: [FLINK-23999][table-planner] Support evaluating individual window table-valued function in planner

2021-11-13 Thread GitBox


flinkbot edited a comment on pull request #17684:
URL: https://github.com/apache/flink/pull/17684#issuecomment-961007710


   
   ## CI report:
   
   * 3a3f210519be65ea2e10954a45c87fc9e98a69af Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26489)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17788: [FLINK-15826][Tabel SQL/API] Add renameFunction() to Catalog

2021-11-13 Thread GitBox


flinkbot edited a comment on pull request #17788:
URL: https://github.com/apache/flink/pull/17788#issuecomment-968179643


   
   ## CI report:
   
   * e50aa0dc4cf3a5166f905a60b629765506de9003 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26488)
 
   * 11d9df98c5aa923f807d0664309dac4bd1c90f04 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26490)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17788: [FLINK-15826][Tabel SQL/API] Add renameFunction() to Catalog

2021-11-13 Thread GitBox


flinkbot edited a comment on pull request #17788:
URL: https://github.com/apache/flink/pull/17788#issuecomment-968179643


   
   ## CI report:
   
   * e50aa0dc4cf3a5166f905a60b629765506de9003 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26488)
 
   * 11d9df98c5aa923f807d0664309dac4bd1c90f04 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26490)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17788: [FLINK-15826][Tabel SQL/API] Add renameFunction() to Catalog

2021-11-13 Thread GitBox


flinkbot edited a comment on pull request #17788:
URL: https://github.com/apache/flink/pull/17788#issuecomment-968179643


   
   ## CI report:
   
   * e50aa0dc4cf3a5166f905a60b629765506de9003 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26488)
 
   * 11d9df98c5aa923f807d0664309dac4bd1c90f04 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-23944) PulsarSourceITCase.testTaskManagerFailure is instable

2021-11-13 Thread Yufan Sheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17443228#comment-17443228
 ] 

Yufan Sheng commented on FLINK-23944:
-

Since FLINK-24733 has been fixed, let's keep watching this issue to see if it 
happens again.

> PulsarSourceITCase.testTaskManagerFailure is instable
> -
>
> Key: FLINK-23944
> URL: https://issues.apache.org/jira/browse/FLINK-23944
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Pulsar
>Affects Versions: 1.14.0
>Reporter: Dian Fu
>Assignee: Yufan Sheng
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.15.0, 1.14.1
>
>
> [https://dev.azure.com/dianfu/Flink/_build/results?buildId=430=logs=f3dc9b18-b77a-55c1-591e-264c46fe44d1=2d3cd81e-1c37-5c31-0ee4-f5d5cdb9324d]
> It's from my personal Azure pipeline; however, I'm pretty sure that I have 
> not touched any code related to this. 
> {code:java}
> Aug 24 10:44:13 [ERROR] testTaskManagerFailure{TestEnvironment, ExternalContext, ClusterControllable}[1] Time elapsed: 258.397 s <<< FAILURE!
> Aug 24 10:44:13 java.lang.AssertionError:
> Aug 24 10:44:13
> Aug 24 10:44:13 Expected: Records consumed by Flink should be identical to test data and preserve the order in split
> Aug 24 10:44:13 but: Mismatched record at position 7: Expected '0W6SzacX7MNL4xLL3BZ8C3ljho4iCydbvxIl' but was 'wVi5JaJpNvgkDEOBRC775qHgw0LyRW2HBxwLmfONeEmr'
> Aug 24 10:44:13 at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
> Aug 24 10:44:13 at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8)
> Aug 24 10:44:13 at org.apache.flink.connectors.test.common.testsuites.SourceTestSuiteBase.testTaskManagerFailure(SourceTestSuiteBase.java:271)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #17684: [FLINK-23999][table-planner] Support evaluating individual window table-valued function in planner

2021-11-13 Thread GitBox


flinkbot edited a comment on pull request #17684:
URL: https://github.com/apache/flink/pull/17684#issuecomment-961007710


   
   ## CI report:
   
   * b6f55b35b31c5a1a8bff4c0e47ee65e83984c9b4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26477)
 
   * 3a3f210519be65ea2e10954a45c87fc9e98a69af Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26489)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17684: [FLINK-23999][table-planner] Support evaluating individual window table-valued function in planner

2021-11-13 Thread GitBox


flinkbot edited a comment on pull request #17684:
URL: https://github.com/apache/flink/pull/17684#issuecomment-961007710


   
   ## CI report:
   
   * b6f55b35b31c5a1a8bff4c0e47ee65e83984c9b4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26477)
 
   * 3a3f210519be65ea2e10954a45c87fc9e98a69af UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17788: [FLINK-15826][Tabel SQL/API] Add renameFunction() to Catalog

2021-11-13 Thread GitBox


flinkbot edited a comment on pull request #17788:
URL: https://github.com/apache/flink/pull/17788#issuecomment-968179643


   
   ## CI report:
   
   * e50aa0dc4cf3a5166f905a60b629765506de9003 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26488)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #17788: [FLINK-15826][Tabel SQL/API] Add renameFunction() to Catalog

2021-11-13 Thread GitBox


flinkbot commented on pull request #17788:
URL: https://github.com/apache/flink/pull/17788#issuecomment-968179775


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit e50aa0dc4cf3a5166f905a60b629765506de9003 (Sun Nov 14 
00:16:48 UTC 2021)
   
   **Warnings:**
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-15826).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-15826) Add renameFunction() to Catalog

2021-11-13 Thread Shen Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17443224#comment-17443224
 ] 

Shen Zhu commented on FLINK-15826:
--

Hey Jingsong ([~lzljs3620320]), I created a draft PR for this ticket. If 
possible, would you mind assigning this ticket to me? Thanks!

> Add renameFunction() to Catalog
> ---
>
> Key: FLINK-15826
> URL: https://issues.apache.org/jira/browse/FLINK-15826
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.11.0
>Reporter: Fabian Hueske
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available, 
> stale-minor
>
> The {{Catalog}} interface lacks a method to rename a function.
> It is possible to change all properties (via {{alterFunction()}}) but it is 
> not possible to rename a function.
> A {{renameTable()}} method already exists.
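
For illustration, a minimal sketch (not part of the ticket) showing what the {{Catalog}} API offers today and the hypothetical method this ticket asks for; the UDF class names are placeholders:

{code:java}
import org.apache.flink.table.catalog.Catalog;
import org.apache.flink.table.catalog.CatalogFunctionImpl;
import org.apache.flink.table.catalog.GenericInMemoryCatalog;
import org.apache.flink.table.catalog.ObjectPath;

public class CatalogFunctionRenameDemo {
    public static void main(String[] args) throws Exception {
        Catalog catalog = new GenericInMemoryCatalog("my_catalog", "default");
        ObjectPath path = new ObjectPath("default", "my_udf");

        // Creating and altering a function is already supported.
        catalog.createFunction(path, new CatalogFunctionImpl("com.example.MyUdf"), false);
        catalog.alterFunction(path, new CatalogFunctionImpl("com.example.MyUdfV2"), false);

        // Tables can already be renamed:
        // catalog.renameTable(tablePath, "new_table_name", false);

        // The method this ticket asks for does not exist yet (hypothetical):
        // catalog.renameFunction(path, "my_udf_renamed", false);
    }
}
{code}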



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot commented on pull request #17788: [FLINK-15826][Tabel SQL/API] Add renameFunction() to Catalog

2021-11-13 Thread GitBox


flinkbot commented on pull request #17788:
URL: https://github.com/apache/flink/pull/17788#issuecomment-968179643


   
   ## CI report:
   
   * e50aa0dc4cf3a5166f905a60b629765506de9003 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-15826) Add renameFunction() to Catalog

2021-11-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-15826:
---
Labels: auto-deprioritized-major pull-request-available stale-minor  (was: 
auto-deprioritized-major stale-minor)

> Add renameFunction() to Catalog
> ---
>
> Key: FLINK-15826
> URL: https://issues.apache.org/jira/browse/FLINK-15826
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.11.0
>Reporter: Fabian Hueske
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available, 
> stale-minor
>
> The {{Catalog}} interface lacks a method to rename a function.
> It is possible to change all properties (via {{alterFunction()}}) but it is 
> not possible to rename a function.
> A {{renameTable()}} method already exists.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] shenzhu opened a new pull request #17788: [FLINK-15826][Tabel SQL/API] Add renameFunction() to Catalog

2021-11-13 Thread GitBox


shenzhu opened a new pull request #17788:
URL: https://github.com/apache/flink/pull/17788


   
   
   ## What is the purpose of the change
   
   Add `renameFunction()` to Catalog.
   
   
   ## Brief change log
   
   - Add `renameFunction()` to interface `Catalog.java`
   - Add implementations in `GenericInMemoryCatalog.java`, 
`AbstractJdbcCatalog.java`, `HiveCatalog.java`
   - Add unit tests in `CatalogTests.java`.
   
   
   ## Verifying this change
   This change added tests and can be verified as follows:
   
   Unit tests were added to `CatalogTests.java` to cover the new function.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
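
   For reference, a rough sketch of what the `renameFunction()` addition described in the change log above might look like; the parameter and exception list is an assumption modeled on the existing `renameTable()` and the other function methods, not the PR's actual code:

   ```java
   // Hypothetical signature for the Catalog interface; the real PR may differ.
   void renameFunction(
           ObjectPath functionPath,
           String newFunctionName,
           boolean ignoreIfNotExists)
       throws FunctionNotExistException, FunctionAlreadyExistException, CatalogException;
   ```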


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-13847) Update release scripts to also update docs/_config.yml

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13847:
---
Labels: auto-deprioritized-major stale-minor  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Update release scripts to also update docs/_config.yml
> --
>
> Key: FLINK-13847
> URL: https://issues.apache.org/jira/browse/FLINK-13847
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Release System
>Reporter: Tzu-Li (Gordon) Tai
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
>
> During the 1.9.0 release process, we missed quite a few configuration updates 
> in {{docs/_config.yml}} related to Flink versions. This should be done 
> automatically via the release scripts.
> The settings in that file that need to be touched on every major 
> release include:
> * version
> * version_title
> * github_branch
> * baseurl
> * stable_baseurl
> * javadocs_baseurl
> * pythondocs_baseurl
> * is_stable
> * Add new link to previous_docs
> This can probably be done via the 
> {{tools/releasing/create_release_branch.sh}} script, which is used for every 
> major release.
> We should also update the release guide in the project wiki to cover checking 
> that file as an item in checklists.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-13843) Unify and clean up StreamingFileSink format builders

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13843:
---
Labels: auto-deprioritized-major stale-minor  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Unify and clean up StreamingFileSink format builders
> 
>
> Key: FLINK-13843
> URL: https://issues.apache.org/jira/browse/FLINK-13843
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream, Connectors / FileSystem
>Affects Versions: 1.10.0
>Reporter: Gyula Fora
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
>
> I think the StreamingFileSink contains some problems that will affect us in 
> the long-run if we intend this sink to be the main exactly-once FS sink.
> *1. Code duplication*
> The StreamingFileSink currently has 2 builders for row and bulk formats:
> RowFormatBuilder, BulkFormatBuilder
> They both contain almost exactly the same config settings with a lot of code 
> duplication that should be moved to a common superclass 
> (StreamingFileSink.BucketsBuilder). 
> *2. Inconsistent config options*
> I also noticed some strange/invalid configuration settings for the builders:
>  - RowFormatBuilder#withBucketAssignerAndPolicy : feels like an internal 
> method that is not used anywhere. It also overwrites the bucket factory
> - BulkFormatBuilder#withBucketAssigner : takes an extra type parameter 
> compared to the row format for the bucket ID type
> -  BulkFormatBuilder#withBucketCheckInterval : does not affect behavior as it 
> always uses the OnCheckpointRollingPolicy
> This can probably be solved by fixing the code duplication.
> *3. Fragmented configuration*
> This is not a big problem but only affects the part file config options that 
> were introduced recently. We have added 2 methods: withPartFilePrefix and 
> withPartFileSuffix
> I think we should aim to group configs that belong together -> 
> withPartFileConfig
>  
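
For illustration, a minimal sketch of the overlapping builder configuration described above (Flink 1.9/1.10 StreamingFileSink API; the output path is a placeholder):

{code:java}
// Row format builder: each of these options also exists on the bulk builder.
StreamingFileSink<String> rowSink = StreamingFileSink
        .forRowFormat(new Path("/tmp/out"), new SimpleStringEncoder<String>("UTF-8"))
        .withBucketAssigner(new DateTimeBucketAssigner<>())
        .withBucketCheckInterval(60_000L)
        .build();

// The bulk builder repeats the same configuration surface, but
// withBucketCheckInterval() has no effect there because bulk formats always
// roll on checkpoint (OnCheckpointRollingPolicy):
// StreamingFileSink.forBulkFormat(new Path("/tmp/out"), bulkWriterFactory)
//         .withBucketAssigner(new DateTimeBucketAssigner<>())
//         .withBucketCheckInterval(60_000L)
//         .build();
{code}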



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-13792) source and sink support manual rate limit

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13792:
---
Labels: auto-deprioritized-major stale-minor  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> source and sink support manual rate limit
> -
>
> Key: FLINK-13792
> URL: https://issues.apache.org/jira/browse/FLINK-13792
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Common
>Affects Versions: 1.8.1
>Reporter: zzsmdfj
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
>
> Flink currently implements automatic flow control via back pressure, which is 
> efficient for most scenarios, but in some special scenarios we need 
> fine-grained flow control to avoid impacting other systems. For example: if I 
> have a window spanning days (a lot of data), the ProcessWindowFunction will 
> produce a large burst of data to the sink when it is triggered; if the sink 
> writes to a message queue, this can have a huge impact on that queue, so a 
> sink rate limiter would be friendly to external systems. A source rate 
> limiter is appropriate when a window operator accumulates a large amount of 
> historical data.
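
For illustration only, a minimal sketch of a sink-side rate limiter (not the design proposed in this ticket), assuming Guava is on the classpath; a real implementation would wrap the actual sink and make the rate configurable:

{code:java}
import com.google.common.util.concurrent.RateLimiter;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

public class RateLimitedPrintSink<T> extends RichSinkFunction<T> {

    private final double maxRecordsPerSecond;
    private transient RateLimiter rateLimiter;

    public RateLimitedPrintSink(double maxRecordsPerSecond) {
        this.maxRecordsPerSecond = maxRecordsPerSecond;
    }

    @Override
    public void open(Configuration parameters) {
        // One limiter per parallel subtask, so the effective global rate is
        // maxRecordsPerSecond * parallelism.
        rateLimiter = RateLimiter.create(maxRecordsPerSecond);
    }

    @Override
    public void invoke(T value, Context context) {
        rateLimiter.acquire(); // blocks until a permit is available
        System.out.println(value);
    }
}
{code}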



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-18165) When savingpoint is restored, select the checkpoint directory and stateBackend

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18165:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor  (was: 
auto-deprioritized-major stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> When savingpoint is restored, select the checkpoint directory and stateBackend
> --
>
> Key: FLINK-18165
> URL: https://issues.apache.org/jira/browse/FLINK-18165
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Affects Versions: 1.9.0
> Environment: flink 1.9
>Reporter: Xinyuan Liu
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> If a checkpoint file is used as the initial state when starting from a 
> savepoint, the state backend used before and after must be of the same type. 
> In actual production, however, state keeps growing; when the taskmanager 
> memory is insufficient and the cluster cannot be expanded, the state backend 
> needs to be switched, while still ensuring data consistency. Unfortunately, 
> Flink currently does not provide an elegant way to switch state backends. 
> Could the community consider this proposal?



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-13909) LinkElement does not support different anchors required for localization

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13909:
---
Labels: auto-deprioritized-major stale-minor  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> LinkElement does not support different anchors required for localization
> 
>
> Key: FLINK-13909
> URL: https://issues.apache.org/jira/browse/FLINK-13909
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Till Rohrmann
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
>
> While addressing FLINK-13898 we realized that the {{LinkElement}} does not 
> support multiple anchors which are needed to support localisation. Due to the 
> translation into Chinese the anchors are not the same across Flink's English 
> and Chinese documentation.
> Either we keep anchors the same in both versions or we have a way to support 
> multiple anchors, one for each localisation.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-18243) Flink SQL (1.10.0) From Kafkasource can not get data

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18243:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor  (was: 
auto-deprioritized-major stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Flink SQL (1.10.0) From Kafkasource can not get data
> 
>
> Key: FLINK-18243
> URL: https://issues.apache.org/jira/browse/FLINK-18243
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.10.0
> Environment: flink 1.10.0 elasticsearch 5.6.3
>Reporter: 颖
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> When a Flink 1.10.0 Kafka consumer cannot read source data from a Kafka topic 
> through SQL, the following problems occur:
> 2020-06-11 14:22:13,156 WARN  
> org.apache.flink.kafka.shaded.org.apache.kafka.clients.consumer.ConsumerConfig
>   - The configuration 'zookeeper.connect' was supplied but isn't a known 
> config.
> 2020-06-11 14:22:13,156 WARN  
> org.apache.flink.kafka.shaded.org.apache.kafka.clients.consumer.ConsumerConfig
>   - The configuration 'zookeeper.connect' was supplied but isn't a known 
> config.
> 2020-06-11 14:22:13,158 WARN  
> org.apache.flink.kafka.shaded.org.apache.kafka.common.utils.AppInfoParser  - 
> Error while loading kafka-version.properties: null
> 2020-06-11 14:22:13,158 INFO  
> org.apache.flink.kafka.shaded.org.apache.kafka.common.utils.AppInfoParser  - 
> Kafka version: unknown
> 2020-06-11 14:22:13,158 INFO  
> org.apache.flink.kafka.shaded.org.apache.kafka.common.utils.AppInfoParser  - 
> Kafka commitId: unknown
> 2020-06-11 14:22:13,160 INFO  
> org.apache.flink.kafka.shaded.org.apache.kafka.common.utils.AppInfoParser  - 
> Kafka version: unknown
> 2020-06-11 14:22:13,161 INFO  
> org.apache.flink.kafka.shaded.org.apache.kafka.common.utils.AppInfoParser  - 
> Kafka commitId: unknown
> 2020-06-11 14:22:13,299 INFO  
> org.apache.flink.kafka.shaded.org.apache.kafka.clients.Metadata  - Cluster 
> ID: y-5SdOxGQ6WBxzfG9PxrLw
> 2020-06-11 14:22:13,299 INFO  
> org.apache.flink.kafka.shaded.org.apache.kafka.clients.Metadata  - Cluster 
> ID: y-5SdOxGQ6WBxzfG9PxrLw
> 2020-06-11 14:22:13,306 INFO  
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase  - 
> Consumer subtask 0 will start reading the following 7 partitions from the 
> committed group offsets in Kafka: 
> [KafkaTopicPartition{topic='aliyun_h5log_common', partition=4}, 
> KafkaTopicPartition{topic='aliyun_h5log_common', partition=2}, 
> KafkaTopicPartition{topic='aliyun_h5log_common', partition=0}, 
> KafkaTopicPartition{topic='aliyun_h5log_common', partition=12}, 
> KafkaTopicPartition{topic='aliyun_h5log_common', partition=10}, 
> KafkaTopicPartition{topic='aliyun_h5log_common', partition=8}, 
> KafkaTopicPartition{topic='aliyun_h5log_common', partition=6}]
> 2020-06-11 14:22:13,306 INFO  
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase  - 
> Consumer subtask 1 will start reading the following 7 partitions from the 
> committed group offsets in Kafka: 
> [KafkaTopicPartition{topic='aliyun_h5log_common', partition=5}, 
> KafkaTopicPartition{topic='aliyun_h5log_common', partition=3}, 
> KafkaTopicPartition{topic='aliyun_h5log_common', partition=1}, 
> KafkaTopicPartition{topic='aliyun_h5log_common', partition=13}, 
> KafkaTopicPartition{topic='aliyun_h5log_common', partition=11}, 
> KafkaTopicPartition{topic='aliyun_h5log_common', partition=9}, 
> KafkaTopicPartition{topic='aliyun_h5log_common', partition=7}]
> 2020-06-11 14:22:13,311 INFO  
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase  - 
> Consumer subtask 1 creating fetcher with offsets 
> {KafkaTopicPartition{topic='aliyun_h5log_common', partition=5}=-915623761773, 
> KafkaTopicPartition{topic='aliyun_h5log_common', partition=3}=-915623761773, 
> KafkaTopicPartition{topic='aliyun_h5log_common', partition=1}=-915623761773, 
> KafkaTopicPartition{topic='aliyun_h5log_common', partition=13}=-915623761773, 
> KafkaTopicPartition{topic='aliyun_h5log_common', partition=11}=-915623761773, 
> KafkaTopicPartition{topic='aliyun_h5log_common', partition=9}=-915623761773, 
> KafkaTopicPartition{topic='aliyun_h5log_common', partition=7}=-915623761773}.
> 2020-06-11 14:22:13,311 INFO  
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase  - 
> Consumer subtask 0 creating fetcher with offsets 
> {KafkaTopicPartition{topic='aliyun_h5log_common', 

[jira] [Updated] (FLINK-13858) Add flink-connector-elasticsearch6 Specify the field as the primary key id

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13858:
---
Labels: auto-deprioritized-major stale-minor  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Add flink-connector-elasticsearch6 Specify the field as the primary key id
> --
>
> Key: FLINK-13858
> URL: https://issues.apache.org/jira/browse/FLINK-13858
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.6.3, 1.6.4, 1.7.2, 1.8.0, 1.8.1, 1.9.0
>Reporter: hubin
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
>
> For example, when syncing an order table to an es6 sink where order_id is the 
> primary key:
>  `insert into es6_table select order_id, order_name from source_table`
> However, the primary key in es6 is randomly generated, so the order_id 
> cannot be used to find the record.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-13881) CEP within method should applied in every independent pattern

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13881:
---
Labels: auto-deprioritized-major stale-minor  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> CEP within method should applied in every independent pattern
> -
>
> Key: FLINK-13881
> URL: https://issues.apache.org/jira/browse/FLINK-13881
> Project: Flink
>  Issue Type: Improvement
>  Components: Library / CEP
>Affects Versions: 1.9.0
>Reporter: YufeiLiu
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
>
> When I write a Pattern like this: 
> {code:java}
> Pattern.begin("start").where(...)
>   .followedBy("middle0").where(...).within(Time.seconds(1))
>   .followedBy("middle1").where(...).within(Time.seconds(2))
>   .followedBy("middle2").where(...).within(Time.seconds(3))
> {code}
> the actual within time is the smallest: 1 second.
> I created a TimeCondition that extends IterativeCondition, so I can get the 
> timestamp of the current event and of the previous computation state and 
> compare them in the condition filter. I also made some changes in NFACompiler 
> to translate within into a StateTransition rather than a global "windowTime" 
> property of the NFA.
> It could work, but I don't know whether I should change the implementation of 
> within or create another syntax. 
> [~dawidwys] Is this meaningful? 
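
For illustration, a rough sketch of the per-pattern time check described above; the Event POJO and its getTimestamp() accessor are assumed placeholders, and this is the described workaround idea, not an existing Flink API:

{code:java}
import org.apache.flink.cep.pattern.conditions.IterativeCondition;

public class TimeCondition extends IterativeCondition<Event> {

    private final String previousPatternName;
    private final long maxGapMillis;

    public TimeCondition(String previousPatternName, long maxGapMillis) {
        this.previousPatternName = previousPatternName;
        this.maxGapMillis = maxGapMillis;
    }

    @Override
    public boolean filter(Event event, Context<Event> ctx) throws Exception {
        // Compare the current event with the latest event accepted for the
        // previous pattern and reject it if the gap exceeds the allowed window.
        long previousTimestamp = Long.MIN_VALUE;
        for (Event previous : ctx.getEventsForPattern(previousPatternName)) {
            previousTimestamp = Math.max(previousTimestamp, previous.getTimestamp());
        }
        return previousTimestamp == Long.MIN_VALUE
                || event.getTimestamp() - previousTimestamp <= maxGapMillis;
    }
}
{code}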



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-18180) unify the logic of time attribute derivation for both batch and streaming

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18180:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor  (was: 
auto-deprioritized-major stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> unify the logic of time attribute derivation for both batch and streaming
> -
>
> Key: FLINK-18180
> URL: https://issues.apache.org/jira/browse/FLINK-18180
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Reporter: godfrey he
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> Currently, the logic of time attribute derivation is different for batch and 
> streaming. For batch table source, the rowtime type will not be generated or 
> will be erased as regular time type if the source table has rowtime type. To 
> handle this difference, we have to distinguish batch or streaming via 
> {{isStreamingMode}} flag in many places, such as: {{DatabaseCalciteSchema}}, 
> {{CatalogSchemaTable}}, {{CatalogTableSchemaResolver}}, etc. In fact, batch 
> queries may also need rowtime type, such as supporting rowtime temporal join. 
> So we can unify the logic of time attribute derivation from the source side, 
> and erase the rowtime type if needed in the optimization phase. Then it's 
> easier to push the unified {{TableEnvironment}} and planner forward.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-13899) Add SQL DDL for Elasticsearch 5.X version

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13899:
---
Labels: auto-deprioritized-major stale-minor  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Add SQL DDL for Elasticsearch 5.X version
> -
>
> Key: FLINK-13899
> URL: https://issues.apache.org/jira/browse/FLINK-13899
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch, Table SQL / Ecosystem
>Affects Versions: 1.9.0
>Reporter: limbo
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
>
> Hi, I need an Elasticsearch 5.X version DDL to connect to our old 
> Elasticsearch cluster. Can I contribute this feature?



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-18167) Flink Job hangs there when one vertex is failed and another is cancelled.

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18167:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor  (was: 
auto-deprioritized-major stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Flink Job hangs there when one vertex is failed and another is cancelled. 
> --
>
> Key: FLINK-18167
> URL: https://issues.apache.org/jira/browse/FLINK-18167
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.10.0
>Reporter: Jeff Zhang
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
> Attachments: image-2020-06-06-15-39-35-441.png
>
>
> After I call cancel with savepoint, the cancel operation is failed. The 
> following is what I see in client side. 
> {code:java}
> WARN [2020-06-06 13:45:16,003] ({Thread-1241} JobManager.java[cancelJob]:137) 
> - Fail to cancel job 7e5492f35c1a7f5dad7c805ba943ea52 that is associated with 
> paragraph paragraph_1586733868269_783581378
> java.util.concurrent.ExecutionException: 
> java.util.concurrent.CompletionException: 
> java.util.concurrent.CompletionException: 
> org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint 
> Coordinator is suspending.
>   at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
>   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
>   at org.apache.zeppelin.flink.JobManager.cancelJob(JobManager.java:129)
>   at 
> org.apache.zeppelin.flink.FlinkScalaInterpreter.cancel(FlinkScalaInterpreter.scala:648)
>   at 
> org.apache.zeppelin.flink.FlinkInterpreter.cancel(FlinkInterpreter.java:101)
>   at 
> org.apache.zeppelin.interpreter.LazyOpenInterpreter.cancel(LazyOpenInterpreter.java:119)
>   at 
> org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer.lambda$cancel$1(RemoteInterpreterServer.java:800)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.CompletionException: 
> java.util.concurrent.CompletionException: 
> org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint 
> Coordinator is suspending.
>   at 
> org.apache.flink.runtime.scheduler.SchedulerBase.lambda$stopWithSavepoint$9(SchedulerBase.java:873)
>   at 
> java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836)
>   at 
> java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811)
>   at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:397)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:190)
>   at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
>   at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
>   at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
>   at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
>   at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
>   at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
>   at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
>   at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
>   at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
>   at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
>   at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
>   at akka.actor.ActorCell.invoke(ActorCell.scala:561)
>   at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
>   at akka.dispatch.Mailbox.run(Mailbox.scala:225)
>   at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
>   at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: java.util.concurrent.CompletionException: 
> org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint 
> Coordinator 

[jira] [Updated] (FLINK-13837) Support --files and --libjars arguments in flink run command line

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13837:
---
Labels: auto-deprioritized-major stale-minor  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Support --files and --libjars arguments in flink run command line
> -
>
> Key: FLINK-13837
> URL: https://issues.apache.org/jira/browse/FLINK-13837
> Project: Flink
>  Issue Type: New Feature
>  Components: Command Line Client
>Reporter: Yang Wang
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
>
> Currently we could use the following code to register a cached file and then 
> get it in the task. We hope it could be done more easily via a --files command 
> option, such as --files [file:///tmp/test_data].
>  
> final StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
> env.registerCachedFile(inputFile.toString(), "test_data", false);
>  
> For a jar, we could build a fat jar including our code and all dependencies. 
> It would be better to add a --libjars command option to support transferring 
> dependencies.
>  
> What's the difference between --files/--libjars and -yt?
>  * Option -yt is used when submitting a job to a YARN cluster, and all files 
> will be distributed by the YARN distributed cache. They will be shared by all 
> jobs in the Flink cluster.
>  * Option --libjars is used for a specific Flink job, and all files will be 
> distributed by the blob server. They are only accessible to that job.
>  
> The newly added command options are as follows.
> --files                       Attach custom files for job. Directory
>                                   could not be supported. Use ',' to
>                                   separate multiple files. The files
>                                   could be in local file system or
>                                   distributed file system. Use URI
>                                   schema to specify which file system
>                                   the file belongs. If schema is
>                                   missing, would try to get the file in
>                                   local file system. Use '#' after the
>                                   file path to specify retrieval key in
>                                   runtime. (eg: --file
>                                   file:///tmp/a.txt#file_key,hdfs:///$na
>                                   menode_address/tmp/b.txt)
> --libjars                    Attach custom library jars for job.
>                                   Directory could not be supported. Use
>                                   ',' to separate multiple jars. The
>                                   jars could be in local file system or
>                                   distributed file system. Use URI
>                                   schema to specify which file system
>                                   the jar belongs. If schema is missing,
>                                   would try to get the jars in local
>                                   file system. (eg: --libjars
>                                   file:///tmp/dependency1.jar,hdfs:///$n
>                                   amenode_address/tmp/dependency2.jar)
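
For reference, a minimal sketch of the manual registration that the proposed --files option would replace; the file path is a placeholder:

{code:java}
import java.io.File;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CachedFileDemo {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.registerCachedFile("file:///tmp/test_data", "test_data", false);

        env.fromElements(1, 2, 3)
                .map(new RichMapFunction<Integer, String>() {
                    @Override
                    public String map(Integer value) throws Exception {
                        // Retrieve the registered file on the task side.
                        File cached = getRuntimeContext().getDistributedCache().getFile("test_data");
                        return value + " -> " + cached.getAbsolutePath();
                    }
                })
                .print();

        env.execute("cached-file-demo");
    }
}
{code}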



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-18158) Add a utility to create a DDL statement from avro schema

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18158:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor  (was: 
auto-deprioritized-major stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Add a utility to create a DDL statement from avro schema
> 
>
> Key: FLINK-18158
> URL: https://issues.apache.org/jira/browse/FLINK-18158
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Dawid Wysakowicz
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> A user asked if there is a way to create a TableSchema/Table from an Avro 
> schema. 
> https://lists.apache.org/thread.html/r9bd43449314230fad0b627a170db05284c9727371092fc275fc05b74%40%3Cuser.flink.apache.org%3E
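
For illustration, a rough sketch (not an existing Flink utility) of how such a DDL statement could be derived from an Avro record schema; only a few primitive types are mapped, and unions, logical types, nesting and connector options are left out:

{code:java}
import org.apache.avro.Schema;

public class AvroToDdlSketch {

    public static String toDdl(String tableName, String avroSchemaJson) {
        Schema schema = new Schema.Parser().parse(avroSchemaJson);
        StringBuilder ddl = new StringBuilder("CREATE TABLE " + tableName + " (\n");
        for (Schema.Field field : schema.getFields()) {
            ddl.append("  `").append(field.name()).append("` ")
               .append(toSqlType(field.schema())).append(",\n");
        }
        ddl.setLength(ddl.length() - 2); // drop the trailing comma
        ddl.append("\n) WITH (...)");    // connector options left to the caller
        return ddl.toString();
    }

    private static String toSqlType(Schema schema) {
        switch (schema.getType()) {
            case STRING:  return "STRING";
            case INT:     return "INT";
            case LONG:    return "BIGINT";
            case FLOAT:   return "FLOAT";
            case DOUBLE:  return "DOUBLE";
            case BOOLEAN: return "BOOLEAN";
            case BYTES:   return "BYTES";
            default:      return "STRING"; // placeholder for unmapped types
        }
    }
}
{code}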



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-13879) It seems that there might be some problem with the DataStream.keyBy(xx).maxBy(yy)

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13879:
---
Labels: auto-deprioritized-major stale-minor  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> It seems that there might be some problem with the 
> DataStream.keyBy(xx).maxBy(yy)
> -
>
> Key: FLINK-13879
> URL: https://issues.apache.org/jira/browse/FLINK-13879
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.9.0
>Reporter: liusong
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
> Attachments: image-2019-08-28-09-01-32-356.png, 
> image-2019-08-28-09-02-18-077.png, image-2019-08-28-09-02-47-835.png
>
>
> The following error occurs (see the attached screenshots):
> !image-2019-08-28-09-02-18-077.png!
> !image-2019-08-28-09-02-47-835.png!
> Because 'RowTypeInfo' is a subclass of 'TupleTypeInfoBase', 
> 'typeInfo.isTupleType()' always returns true when the typeInfo is a 
> RowTypeInfo, so execution enters 'else if (typeInfo.isTupleType())'. But why 
> does it then do 'TupleTypeInfo tupleTypeInfo = (TupleTypeInfo) typeInfo'? It 
> works when I try to use 'TupleTypeInfoBase tupleTypeInfo = (TupleTypeInfoBase) 
> typeInfo' instead.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-13736) Support count window with blink planner in batch mode

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13736:
---
Labels: auto-deprioritized-major stale-minor  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Support count window with blink planner in batch mode
> -
>
> Key: FLINK-13736
> URL: https://issues.apache.org/jira/browse/FLINK-13736
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Planner, Table SQL / Runtime
>Reporter: Kurt Young
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-13852) Support storing in-progress/pending files in different directories (StreamingFileSink)

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13852:
---
Labels: auto-deprioritized-major stale-minor  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Support storing in-progress/pending files in different directories 
> (StreamingFileSink)
> --
>
> Key: FLINK-13852
> URL: https://issues.apache.org/jira/browse/FLINK-13852
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / FileSystem
>Reporter: Gyula Fora
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
>
> Currently in-progress and pending files are stored in the same directory as 
> the final output file. This can be problematic depending on the usage of the 
> final output files. One example would be loading the data to hive where we 
> can only load all files in a certain directory.
> I suggest we allow specifying a Pending/Inprogress base path where we create 
> the same bucketing structure as the final files to store only the non-final 
> files.
> To support this we need to extend the RecoverableWriter interface with a new 
> open method for example:
> RecoverableFsDataOutputStream open(Path path, Path tmpPath) throws 
> IOException;



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-18210) The tmpdir will be clean up when stop historyserver

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18210:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor  (was: 
auto-deprioritized-major stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> The tmpdir will be clean up when stop historyserver
> ---
>
> Key: FLINK-18210
> URL: https://issues.apache.org/jira/browse/FLINK-18210
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Affects Versions: 1.9.2, 1.12.0
>Reporter: JieFang.He
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> The tmpdir (configured by historyserver.web.tmpdir) will be cleaned up when 
> the historyserver is stopped. But the directory may be shared, so it is better 
> to delete only the files that were created by the historyserver.
> Part of the code to stop the historyserver is shown below:
> {code:java}
> try {
>LOG.info("Removing web dashboard root cache directory {}", webDir);
>FileUtils.deleteDirectory(webDir);
> } catch (Throwable t) {
>LOG.warn("Error while deleting web root directory {}", webDir, t);
> }
> {code}
>  FileUtils.deleteDirectory:
> {code:java}
> // empty the directory first
> try {
>cleanDirectoryInternal(directory);
> }
> {code}
> cleanDirectoryInternal:
> {code:java}
> // remove all files in the directory
> for (File file : files) {
>if (file != null) {
>   deleteFileOrDirectory(file);
>}
> }
> {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-18245) Support to parse -1 for MemorySize and Duration ConfigOption

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18245:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor  (was: 
auto-deprioritized-major stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Support to parse -1 for MemorySize and Duration ConfigOption
> 
>
> Key: FLINK-18245
> URL: https://issues.apache.org/jira/browse/FLINK-18245
> Project: Flink
>  Issue Type: New Feature
>  Components: API / Core
>Reporter: Jark Wu
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> Currently, the MemorySize and Duration ConfigOptions don't support parsing 
> {{-1}} or {{-1s}}. 
> {code:java}
> java.lang.NumberFormatException: text does not start with a number
>   at 
> org.apache.flink.configuration.MemorySize.parseBytes(MemorySize.java:294)
> {code}
> That means we can't use {{-1}} as a disabled value and have to use {{0}}, 
> which may confuse users in some scenarios. 
> There is some discussion around this topic in:
> https://github.com/apache/flink/pull/12536#discussion_r438019632
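
For context, a small snippet that reproduces the limitation described above (the class name is just for illustration):

{code:java}
import org.apache.flink.configuration.MemorySize;

public class MemorySizeSentinelExample {
    public static void main(String[] args) {
        // Works today:
        System.out.println(MemorySize.parse("10mb").getBytes());

        // Throws java.lang.NumberFormatException: text does not start with a number.
        // This is why -1 cannot currently be used as a "disabled" sentinel value.
        System.out.println(MemorySize.parse("-1b").getBytes());
    }
}
{code}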



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-18249) Cannot create index by time field?

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18249:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor  (was: 
auto-deprioritized-major stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Cannot create index by time field?
> --
>
> Key: FLINK-18249
> URL: https://issues.apache.org/jira/browse/FLINK-18249
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.10.1
>Reporter: zouwenlong
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
> Attachments: image-2020-06-11-16-22-36-912.png
>
>
> When I use "index-\{timestamp|-MM-dd}" to set the index, it does not work. Am I 
> writing it wrong, or does Flink not support it?
> !image-2020-06-11-16-22-36-912.png!



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-18216) "Start time" column should not expand if a job is running

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18216:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor  (was: 
auto-deprioritized-major stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> "Start time" column should not expand if a job is running
> -
>
> Key: FLINK-18216
> URL: https://issues.apache.org/jira/browse/FLINK-18216
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.10.0
>Reporter: Chesnay Schepler
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
> Attachments: jobs_0.png, jobs_1.png
>
>
> On the cluster overview page, the "Start time" column width is different 
> depending on whether none or some jobs are running.
> The weird part is that this column _expands_ when a job is running, which 
> creates a somewhat jarring visual experience when jobs are run in quick 
> succession.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-18255) Add API annotations to RocksDB user-facing classes

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18255:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor  (was: 
auto-deprioritized-major stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Add API annotations to RocksDB user-facing classes
> --
>
> Key: FLINK-18255
> URL: https://issues.apache.org/jira/browse/FLINK-18255
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Affects Versions: 1.11.0
>Reporter: Nico Kruber
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> Several user-facing classes in {{flink-statebackend-rocksdb}} don't have any 
> API annotations, not even {{@PublicEvolving}}. These should be added to 
> clarify their usage.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #17786: [FLINK-24889][table] Flink SQL Client should print correсtly multisets

2021-11-13 Thread GitBox


flinkbot edited a comment on pull request #17786:
URL: https://github.com/apache/flink/pull/17786#issuecomment-967495270


   
   ## CI report:
   
   * 15187283b333ccabbe52773128dd11032e12934e Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26483)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17787: [FLINK-24869][Build System] flink-core should be provided in flink-file-sink-common

2021-11-13 Thread GitBox


flinkbot edited a comment on pull request #17787:
URL: https://github.com/apache/flink/pull/17787#issuecomment-968093205


   
   ## CI report:
   
   * 2051644234d8f215a56b3e1ba17beea7edecf97b Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26482)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] snuyanzin commented on a change in pull request #17771: [FLINK-24813][table-planner]Improve ImplicitTypeConversionITCase

2021-11-13 Thread GitBox


snuyanzin commented on a change in pull request #17771:
URL: https://github.com/apache/flink/pull/17771#discussion_r748753401



##
File path: 
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/functions/BuiltInFunctionTestBase.java
##
@@ -193,7 +193,6 @@ private static void testError(
 } catch (AssertionError e) {
 throw e;
 } catch (Throwable t) {
-assertTrue(t instanceof ValidationException);

Review comment:
   Just wondering, why should we stop checking for ValidationException here?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17786: [FLINK-24889][table] Flink SQL Client should print correсtly multisets

2021-11-13 Thread GitBox


flinkbot edited a comment on pull request #17786:
URL: https://github.com/apache/flink/pull/17786#issuecomment-967495270


   
   ## CI report:
   
   * e87646998051f2ae06d14d0efb16d45ea60d2761 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26461)
 
   * 15187283b333ccabbe52773128dd11032e12934e Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26483)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17786: [FLINK-24889][table] Flink SQL Client should print correсtly multisets

2021-11-13 Thread GitBox


flinkbot edited a comment on pull request #17786:
URL: https://github.com/apache/flink/pull/17786#issuecomment-967495270


   
   ## CI report:
   
   * e87646998051f2ae06d14d0efb16d45ea60d2761 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26461)
 
   * 15187283b333ccabbe52773128dd11032e12934e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] MonsterChenzhuo commented on pull request #17508: [FLINK-24351][docs] Translate "JSON Function" pages into Chinese

2021-11-13 Thread GitBox


MonsterChenzhuo commented on pull request #17508:
URL: https://github.com/apache/flink/pull/17508#issuecomment-968093761


   > 
   > 
   > @MonsterChenzhuo Thanks for your update. Maybe you should rebase your branch onto the latest master branch and resolve the conflicting files before the next review.
   @RocMarshal  I'm really sorry for the recent delay. I will deal with it as 
soon as possible tomorrow
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-24869) flink-core should be provided in flink-file-sink-common

2021-11-13 Thread zhangzhanchang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17443134#comment-17443134
 ] 

zhangzhanchang commented on FLINK-24869:


Hello [~trohrmann] I have opened a PR for this change. Please check whether the 
PR is correct.

> flink-core should be provided in flink-file-sink-common
> ---
>
> Key: FLINK-24869
> URL: https://issues.apache.org/jira/browse/FLINK-24869
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.14.0
>Reporter: Konstantin Gribov
>Priority: Minor
>  Labels: pull-request-available
>
> As example {{flink-connector-files}} brings {{flink-core}} with {{compile}} 
> scope via {{flink-file-sink-common}}.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #17787: [FLINK-24869][Build System] flink-core should be provided in flink-file-sink-common

2021-11-13 Thread GitBox


flinkbot edited a comment on pull request #17787:
URL: https://github.com/apache/flink/pull/17787#issuecomment-968093205


   
   ## CI report:
   
   * 2051644234d8f215a56b3e1ba17beea7edecf97b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26482)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #17787: [FLINK-24869][Build System] flink-core should be provided in flink-file-sink-common

2021-11-13 Thread GitBox


flinkbot commented on pull request #17787:
URL: https://github.com/apache/flink/pull/17787#issuecomment-968093306


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 2051644234d8f215a56b3e1ba17beea7edecf97b (Sat Nov 13 
16:16:54 UTC 2021)
   
   **Warnings:**
* **1 pom.xml files were touched**: Check for build and licensing issues.
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-24869).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #17787: [FLINK-24869][Build System] flink-core should be provided in flink-file-sink-common

2021-11-13 Thread GitBox


flinkbot commented on pull request #17787:
URL: https://github.com/apache/flink/pull/17787#issuecomment-968093205


   
   ## CI report:
   
   * 2051644234d8f215a56b3e1ba17beea7edecf97b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-24869) flink-core should be provided in flink-file-sink-common

2021-11-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-24869:
---
Labels: pull-request-available  (was: )

> flink-core should be provided in flink-file-sink-common
> ---
>
> Key: FLINK-24869
> URL: https://issues.apache.org/jira/browse/FLINK-24869
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.14.0
>Reporter: Konstantin Gribov
>Priority: Minor
>  Labels: pull-request-available
>
> As example {{flink-connector-files}} brings {{flink-core}} with {{compile}} 
> scope via {{flink-file-sink-common}}.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] zzccctv opened a new pull request #17787: [FLINK-24869][Build System] flink-core should be provided in flink-file-sink-common

2021-11-13 Thread GitBox


zzccctv opened a new pull request #17787:
URL: https://github.com/apache/flink/pull/17787


   As example flink-connector-files brings flink-core with compile scope via 
flink-file-sink-common.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-21884) Reduce TaskManager failure detection time

2021-11-13 Thread ZhuoYu Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17443132#comment-17443132
 ] 

ZhuoYu Chen commented on FLINK-21884:
-

[~trohrmann] I wonder if it is possible to establish a node health status 
detection mechanism, inspired by Hadoop HDFS. We could detect predefined 
abnormal events through periodic scripts, and if certain conditions are met, 
the current node could be deregistered directly.

> Reduce TaskManager failure detection time
> -
>
> Key: FLINK-21884
> URL: https://issues.apache.org/jira/browse/FLINK-21884
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.14.0, 1.13.2
>Reporter: Robert Metzger
>Priority: Critical
>  Labels: reactive
> Fix For: 1.15.0
>
> Attachments: image-2021-03-19-20-10-40-324.png
>
>
> In Flink 1.13 (and older versions), TaskManager failures stall the processing 
> for a significant amount of time, even though the system gets indications for 
> the failure almost immediately through network connection losses.
> This is due to a high (default) heartbeat timeout of 50 seconds [1] to 
> accommodate for GC pauses, transient network disruptions or generally slow 
> environments (otherwise, we would unregister a healthy TaskManager).
> Such a high timeout can lead to disruptions in the processing (no processing 
> for certain periods, high latencies, buildup of consumer lag etc.). In 
> Reactive Mode (FLINK-10407), the issue surfaces on scale-down events, where 
> the loss of a TaskManager is immediately visible in the logs, but the job is 
> stuck in "FAILING" for quite a while until the TaskManager is really 
> deregistered. (Note that this issue is not that critical in an autoscaling 
> setup, because Flink can control the scale-down events and trigger them 
> proactively)
> On the attached metrics dashboard, one can see that the job has significant 
> throughput drops / consumer lags during scale down (and also CPU usage spikes 
> on processing the queued events, leading to incorrect scale up events again).
>  !image-2021-03-19-20-10-40-324.png|thumbnail!
> One idea to solve this problem is to:
> - Score TaskManagers based on certain signals (# exceptions reported, 
> exception types (connection losses, akka failures), failure frequencies,  
> ...) and blacklist them accordingly.
> - Introduce a best-effort TaskManager unregistration mechanism: When a 
> TaskManager receives a sigterm, it sends a final message to the JobManager 
> saying "goodbye", and the JobManager can immediately remove the TM from its 
> bookkeeping.
> [1] 
> https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/config/#heartbeat-timeout



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-6573) Flink MongoDB Connector

2021-11-13 Thread ZhuoYu Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-6573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17443126#comment-17443126
 ] 

ZhuoYu Chen commented on FLINK-6573:


[~arvid] I have completed the implementation; you can assign the task to me. I 
will sort out the code and submit a PR.

> Flink MongoDB Connector
> ---
>
> Key: FLINK-6573
> URL: https://issues.apache.org/jira/browse/FLINK-6573
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common
>Affects Versions: 1.2.0
> Environment: Linux Operating System, Mongo DB
>Reporter: Nagamallikarjuna
>Priority: Not a Priority
>  Labels: stale-assigned
>   Original Estimate: 672h
>  Remaining Estimate: 672h
>
> Hi Community,
> We are currently using Flink in our project. We have a huge amount of data to 
> process with Flink, which resides in MongoDB. We have a requirement for 
> parallel data connectivity between Flink and MongoDB for both reads and writes. 
> We are currently planning to create this connector and contribute it to the 
> community.
> I will provide further details once I receive your feedback.
> Please let us know if you have any concerns.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-24456) Support bounded offset in the Kafka table connector

2021-11-13 Thread ZhuoYu Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17443125#comment-17443125
 ] 

ZhuoYu Chen commented on FLINK-24456:
-

[~MartijnVisser] Kafka's bounded offset is used to specify the number of rows to 
extract from Kafka. For example, in SQL, a join against Kafka could specify that 
only 1000 rows are extracted. Is my understanding correct? If so, I will start 
implementing.
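
For reference, a rough sketch of the DataStream-side API that the table connector would need to mirror (assuming the KafkaSource builder from Flink 1.14; the topic and bootstrap server values are placeholders, and how the option would be exposed in SQL is still open):

{code:java}
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;

KafkaSource<String> source = KafkaSource.<String>builder()
        .setBootstrapServers("localhost:9092")
        .setTopics("input-topic")
        .setValueOnlyDeserializer(new SimpleStringSchema())
        // Stop reading once the offsets that were latest at startup are reached,
        // which turns the source into a bounded one (handy for tests).
        .setBounded(OffsetsInitializer.latest())
        .build();
{code}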

> Support bounded offset in the Kafka table connector
> ---
>
> Key: FLINK-24456
> URL: https://issues.apache.org/jira/browse/FLINK-24456
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Reporter: Haohui Mai
>Assignee: ZhuoYu Chen
>Priority: Minor
>
> The {{setBounded}} API in the DataStream connector of Kafka is particularly 
> useful when writing tests. Unfortunately the table connector of Kafka lacks 
> the same API.
> It would be good to have this API added.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #17345: [FLINK-24227][connectors/kinesis] Added Kinesis Data Streams Sink i…

2021-11-13 Thread GitBox


flinkbot edited a comment on pull request #17345:
URL: https://github.com/apache/flink/pull/17345#issuecomment-926109717


   
   ## CI report:
   
   * 26e4701ca24cc8d1c46eebb6c740c406cb9e2e9b Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26479)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17774: [FLINK-24611] Prevent JM from discarding state on checkpoint abortion

2021-11-13 Thread GitBox


flinkbot edited a comment on pull request #17774:
URL: https://github.com/apache/flink/pull/17774#issuecomment-966933536


   
   ## CI report:
   
   * f047a04dfebb1cc10cc98c381e0fe5b271cf0c1b Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26481)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17774: [FLINK-24611] Prevent JM from discarding state on checkpoint abortion

2021-11-13 Thread GitBox


flinkbot edited a comment on pull request #17774:
URL: https://github.com/apache/flink/pull/17774#issuecomment-966933536


   
   ## CI report:
   
   * f3bc7747ce50b3b2da069b805490763071d804d3 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26413)
 
   * f047a04dfebb1cc10cc98c381e0fe5b271cf0c1b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26481)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17774: [FLINK-24611] Prevent JM from discarding state on checkpoint abortion

2021-11-13 Thread GitBox


flinkbot edited a comment on pull request #17774:
URL: https://github.com/apache/flink/pull/17774#issuecomment-966933536


   
   ## CI report:
   
   * f3bc7747ce50b3b2da069b805490763071d804d3 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26413)
 
   * f047a04dfebb1cc10cc98c381e0fe5b271cf0c1b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17666: [FLINK-21327][table-planner-blink] Support window TVF in batch mode

2021-11-13 Thread GitBox


flinkbot edited a comment on pull request #17666:
URL: https://github.com/apache/flink/pull/17666#issuecomment-960514675


   
   ## CI report:
   
   * 9420e4053db0f66aae31958cf0487a413016a5d3 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26478)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17684: [FLINK-23999][table-planner] Support evaluating individual window table-valued function in planner

2021-11-13 Thread GitBox


flinkbot edited a comment on pull request #17684:
URL: https://github.com/apache/flink/pull/17684#issuecomment-961007710


   
   ## CI report:
   
   * b6f55b35b31c5a1a8bff4c0e47ee65e83984c9b4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26477)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-22626) KafkaITCase.testTimestamps fails on Azure

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22626:
---
Labels: auto-deprioritized-major stale-major test-stability  (was: 
auto-deprioritized-major test-stability)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 60 days. I have gone ahead and added a "stale-major" label to the issue. If this 
ticket is Major, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> KafkaITCase.testTimestamps fails on Azure
> -
>
> Key: FLINK-22626
> URL: https://issues.apache.org/jira/browse/FLINK-22626
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.12.3, 1.13.1
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: auto-deprioritized-major, stale-major, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=17819=logs=72d4811f-9f0d-5fd0-014a-0bc26b72b642=c1d93a6a-ba91-515d-3196-2ee8019fbda7=6708
> {code}
> Caused by: org.apache.kafka.common.protocol.types.SchemaException: Error 
> reading field 'api_keys': Error reading array of size 131096, only 50 bytes 
> available
>   at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:110)
>   at 
> org.apache.kafka.common.protocol.ApiKeys.parseResponse(ApiKeys.java:324)
>   at 
> org.apache.kafka.common.protocol.ApiKeys$1.parseResponse(ApiKeys.java:162)
>   at 
> org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:719)
>   at 
> org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:833)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:556)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:262)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:233)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:212)
>   at 
> org.apache.kafka.clients.consumer.internals.Fetcher.getTopicMetadata(Fetcher.java:368)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.partitionsFor(KafkaConsumer.java:1926)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.partitionsFor(KafkaConsumer.java:1894)
>   at 
> org.apache.flink.streaming.connectors.kafka.internals.KafkaPartitionDiscoverer.getAllPartitionsForTopics(KafkaPartitionDiscoverer.java:75)
>   at 
> org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer.discoverPartitions(AbstractPartitionDiscoverer.java:133)
>   at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.open(FlinkKafkaConsumerBase.java:577)
>   at 
> org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:34)
>   at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
>   at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:428)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$2(StreamTask.java:545)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:93)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:535)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:575)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:758)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:573)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-13951) Unable to call limit without sort for batch mode

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13951:
---
Labels: auto-deprioritized-major stale-minor  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Unable to call limit without sort for batch mode
> 
>
> Key: FLINK-13951
> URL: https://issues.apache.org/jira/browse/FLINK-13951
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.9.0
>Reporter: Jeff Zhang
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
>
> Here's the sample code:  tenv.sql("select * from a").fetch(n)
>  
> {code:java}
> Fail to run sql command: select * from a
> org.apache.flink.table.api.ValidationException: A limit operation must be 
> preceded by a sort operation.
>   at 
> org.apache.flink.table.operations.utils.factories.SortOperationFactory.validateAndGetChildSort(SortOperationFactory.java:117)
>   at 
> org.apache.flink.table.operations.utils.factories.SortOperationFactory.createLimitWithFetch(SortOperationFactory.java:102)
>   at 
> org.apache.flink.table.operations.utils.OperationTreeBuilder.limitWithFetch(OperationTreeBuilder.java:388)
>   at org.apache.flink.table.api.internal.TableImpl.fetch(TableImpl.java:406) 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-14030) IS_NULL is optimized to incorrect results

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-14030:
---
Labels: auto-deprioritized-major stale-minor  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> IS_NULL is optimized to incorrect results
> -
>
> Key: FLINK-14030
> URL: https://issues.apache.org/jira/browse/FLINK-14030
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0
>Reporter: Leonard Xu
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
>
> *testAllApis()* unit tests will fail because the planner makes a conversion
>  from *[ifThenElse(isNull(plus(f0, f1)), 'null', 'not null')]*
>  to *[CASE(OR(IS NULL($0), IS NULL($1)), _UTF-16LE'null', _UTF-16LE'not 
> null')]*
>  which is not an equivalence conversion. The result of the expression 'f0 + 'f1 
> should be null when the result overflows, even if both of its operands are 
> non-null.
> It's easy to reproduce as follows:
>  testAllApis(
>  'f0 + 'f1,
>  "f1 + f1",
>  "f1 + f1",
>  "null")// the result should be null because overflow
> override def testData: Row = {
>   val testData = new Row(2)
>   testData.setField(0, BigDecimal("1e10").bigDecimal)
>   testData.setField(1, BigDecimal("0").bigDecimal)
>   testData
> }
> override def typeInfo: RowTypeInfo = {
>   new RowTypeInfo(
>     /* 0 */ fromLogicalTypeToTypeInfo(DECIMAL(38, 10)),
>     /* 1 */ fromLogicalTypeToTypeInfo(DECIMAL(38, 28))
>   )
> }
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-14002) FlinkKafkaProducer constructor that takes KafkaSerializationSchema shouldnt take default topic

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-14002:
---
Labels: auto-deprioritized-major stale-minor  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> FlinkKafkaProducer constructor that takes KafkaSerializationSchema shouldnt 
> take default topic
> --
>
> Key: FLINK-14002
> URL: https://issues.apache.org/jira/browse/FLINK-14002
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Reporter: Gyula Fora
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
>
> When the KafkaSerializationSchema is used, the user always has to provide the 
> topic when they create the ProducerRecord.
> The defaultTopic specified in the constructor (and enforced not to be null) 
> will always be ignored, which is very misleading.
> We should deprecate these constructors and create new ones without 
> defaultTopic.
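
To illustrate why the constructor argument is misleading, here is a hedged sketch (the topic names and the schema class are made up) of a KafkaSerializationSchema that already decides the topic itself, so whatever defaultTopic was passed to the FlinkKafkaProducer constructor never takes effect:

{code:java}
import java.nio.charset.StandardCharsets;
import org.apache.flink.streaming.connectors.kafka.KafkaSerializationSchema;
import org.apache.kafka.clients.producer.ProducerRecord;

class RoutingSerializationSchema implements KafkaSerializationSchema<String> {

    @Override
    public ProducerRecord<byte[], byte[]> serialize(String element, Long timestamp) {
        // The topic is chosen here, per record; the producer's defaultTopic is ignored.
        String topic = element.startsWith("error") ? "errors" : "events";
        return new ProducerRecord<>(topic, element.getBytes(StandardCharsets.UTF_8));
    }
}
{code}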



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-13944) Table.toAppendStream: InvalidProgramException: Table program cannot be compiled.

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-13944:
---
Labels: auto-deprioritized-major stale-minor  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Table.toAppendStream: InvalidProgramException: Table program cannot be 
> compiled.
> 
>
> Key: FLINK-13944
> URL: https://issues.apache.org/jira/browse/FLINK-13944
> Project: Flink
>  Issue Type: Bug
>  Components: API / Scala, Table SQL / API
>Affects Versions: 1.8.1, 1.9.0
> Environment: {code:bash}
> $ java -version
> openjdk version "1.8.0_222"
> OpenJDK Runtime Environment (build 1.8.0_222-8u222-b10-1ubuntu1~18.04.1-b10
> OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode
> {code}
> {{--}}
> {code:bash}
> $ scala -version
> Scala code runner version 2.11.12 -- Copyright 2002-2017, LAMP/EPFL
> {code}
> {{--}}
> {{build.}}{{sbt}}
> [...]
> ThisBuild / scalaVersion := "2.11.12"
> val flinkVersion = "1.9.0"
> val flinkDependencies = Seq(
>  "org.apache.flink" %% "flink-scala" % flinkVersion % "provided",
>  "org.apache.flink" %% "flink-streaming-scala" % flinkVersion % "provided",
>  "org.apache.flink" %% "flink-table-planner" % flinkVersion % "provided")
> [...]
>  
>Reporter: Stefano
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
> Attachments: app.zip
>
>
> (The project in which I face the error is attached.)
> {{Using: Scala streaming API and the StreamTableEnvironment.}}
> {{Given the classes:}}
> {code:scala}
> object EntityType extends Enumeration {
>   type EntityType = Value
>   val ACTIVITY = Value
>  }
> sealed trait Entity extends Serializable
> case class Activity(card_id: Long, date_time: Timestamp, second: Long, 
> station_id: Long, station_name: String, activity_code: Long, amount: Long) 
> extends Entity
> {code}
> What I am trying to do: convert the table, after selection, to an appendStream:
> {code:scala}
> /** activity table **/
> val activityDataStream = partialComputation1
>   .filter(_._1 == EntityType.ACTIVITY)
>   .map(x => x._3.asInstanceOf[Activity])
> tableEnv.registerDataStream("activity", activityDataStream, 'card_id, 
> 'date_time, 'second, 'station_id, 'station_name, 'activity_code, 'amount)
> val selectedTable = tableEnv.scan("activity").select("card_id, second")
> selectedTable.printSchema()
> // root
> //   |-- card_id: BIGINT
> //   |-- second: BIGINT
> // ATTEMPT 1
> //val output = tableEnv.toAppendStream[(Long, Long)](selectedTable)
> //output.print
> // ATTEMPT 2
> //val output = tableEnv.toAppendStream[(java.lang.Long, 
> java.lang.Long)](selectedTable)
> //output.print
> // ATTEMPT 3
> //val output = tableEnv.toAppendStream[Row](selectedTable)
> //output.print
> // ATTEMPT 4
> case class Test(card_id: Long, second: Long) extends Entity
> val output = tableEnv.toAppendStream[Test](selectedTable)
> output.print
> {code}
> In any of the attempts the error I get is always the same:
> {code:bash}
> $ flink run target/scala-2.11/app-assembly-0.1.jar 
> Starting execution of program
> root
>  |-- card_id: BIGINT
>  |-- second: BIGINT
> 
>  The program finished with the following exception:
> org.apache.flink.client.program.ProgramInvocationException: Job failed. 
> (JobID: 9954823e0b55a8140f78be6868c85399)
>   at 
> org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:262)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:338)
>   at 
> org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:60)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1507)
>   at 
> org.apache.flink.streaming.api.scala.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.scala:654)
>   at bds_comparison.flink.metrocard.App$.main(App.scala:141)
>   at bds_comparison.flink.metrocard.App.main(App.scala)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> 

[jira] [Updated] (FLINK-14141) Locate Jobs/SubTask/Vertex Running on TaskManager

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-14141:
---
Labels: auto-deprioritized-major stale-minor  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Locate Jobs/SubTask/Vertex Running on TaskManager
> -
>
> Key: FLINK-14141
> URL: https://issues.apache.org/jira/browse/FLINK-14141
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Reporter: Yadong Xie
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
> Attachments: 屏幕快照 2019-09-20 下午3.20.05.png
>
>
> As we know, the subtasks associated with a job vertex run on a task manager, 
> but in the current Web UI design there is no way to get the associated job, 
> vertex, or subtask information from the task manager.
> We should add a Job List tab on the task manager page, giving a list of the 
> jobs/vertices/subtasks associated with the current task manager. With 
> FLINK-13894, users can link subtasks and task managers together when 
> troubleshooting. Here is 
> a demo below.
> !屏幕快照 2019-09-20 下午3.20.05.png|width=602,height=240!
> REST API needed:
> add /taskmanagers/:taskmanagerid/jobs API to get associated jobs with current 
> taskmanager.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-18354) when use ParquetAvroWriters.forGenericRecord(Schema schema) error java.lang.ClassCastException: org.apache.flink.api.java.tuple.Tuple2 cannot be cast to org.apache.avro

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18354:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor  (was: 
auto-deprioritized-major stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> when use ParquetAvroWriters.forGenericRecord(Schema schema) error  
> java.lang.ClassCastException: org.apache.flink.api.java.tuple.Tuple2 cannot 
> be cast to org.apache.avro.generic.IndexedRecord
> ---
>
> Key: FLINK-18354
> URL: https://issues.apache.org/jira/browse/FLINK-18354
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream, Formats (JSON, Avro, Parquet, ORC, 
> SequenceFile)
>Affects Versions: 1.10.0
>Reporter: Yangyingbo
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> {code:java}
>  {code}
> When I use ParquetAvroWriters.forGenericRecord(Schema schema) to write data to 
> Parquet, the following error occurs:
> My code:
>  
> {code:java}
> // 
> // transform to a DataStream
>  // TupleTypeInfo tupleTypeInfo = new TupleTypeInfo(GenericData.Record.class, 
> BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO);
>  TupleTypeInfo tupleTypeInfo = new 
> TupleTypeInfo(BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO);
>  DataStream testDataStream = flinkTableEnv.toAppendStream(test, 
> tupleTypeInfo);
>  testDataStream.print().setParallelism(1);
> ArrayList fields = new 
> ArrayList();
>  fields.add(new org.apache.avro.Schema.Field("id", 
> org.apache.avro.Schema.create(org.apache.avro.Schema.Type.STRING), "id", 
> JsonProperties.NULL_VALUE));
>  fields.add(new org.apache.avro.Schema.Field("time", 
> org.apache.avro.Schema.create(org.apache.avro.Schema.Type.STRING), "time", 
> JsonProperties.NULL_VALUE));
>  org.apache.avro.Schema parquetSinkSchema = 
> org.apache.avro.Schema.createRecord("pi", "flinkParquetSink", 
> "flink.parquet", true, fields);
>  String fileSinkPath = "./xxx.text/rs6/";
> StreamingFileSink parquetSink = StreamingFileSink.
>  forBulkFormat(new Path(fileSinkPath),
>  ParquetAvroWriters.forGenericRecord(parquetSinkSchema))
>  .withRollingPolicy(OnCheckpointRollingPolicy.build())
>  .build();
>  testDataStream.addSink(parquetSink).setParallelism(1);
>  flinkTableEnv.execute("ReadFromKafkaConnectorWriteToLocalFileJava");
>  {code}
> and this error:
> {code:java}
> // code placeholder
> 09:29:50,283 INFO  org.apache.flink.runtime.taskmanager.Task                  
>    - Sink: Unnamed (1/1) (79505cb6ab2df38886663fd99461315a) switched from 
> RUNNING to FAILED.09:29:50,283 INFO  
> org.apache.flink.runtime.taskmanager.Task                     - Sink: Unnamed 
> (1/1) (79505cb6ab2df38886663fd99461315a) switched from RUNNING to 
> FAILED.java.lang.ClassCastException: org.apache.flink.api.java.tuple.Tuple2 
> cannot be cast to org.apache.avro.generic.IndexedRecord at 
> org.apache.avro.generic.GenericData.getField(GenericData.java:697) at 
> org.apache.parquet.avro.AvroWriteSupport.writeRecordFields(AvroWriteSupport.java:188)
>  at org.apache.parquet.avro.AvroWriteSupport.write(AvroWriteSupport.java:165) 
> at 
> org.apache.parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:128)
>  at org.apache.parquet.hadoop.ParquetWriter.write(ParquetWriter.java:299) at 
> org.apache.flink.formats.parquet.ParquetBulkWriter.addElement(ParquetBulkWriter.java:52)
>  at 
> org.apache.flink.streaming.api.functions.sink.filesystem.BulkPartWriter.write(BulkPartWriter.java:50)
>  at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.write(Bucket.java:214)
>  at 
> org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.onElement(Buckets.java:274)
>  at 
> org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink.invoke(StreamingFileSink.java:445)
>  at 
> org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:56)
>  at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:173)
>  at 
> org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.processElement(StreamTaskNetworkInput.java:151)
>  at 
> org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.emitNext(StreamTaskNetworkInput.java:128)
>  at 
> 

[jira] [Updated] (FLINK-14057) Add Remove Other Timers to TimerService

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-14057:
---
Labels: auto-deprioritized-major stale-minor  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Add Remove Other Timers to TimerService
> ---
>
> Key: FLINK-14057
> URL: https://issues.apache.org/jira/browse/FLINK-14057
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: Jesse Anderson
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
>
> The TimerService service has the ability to add timers with 
> registerProcessingTimeTimer. This method can be called many times and have 
> different timer times.
> If you want to add a new timer and delete other timers, you have to keep 
> track of all previous timer times and call deleteProcessingTimeTimer for each 
> time. This method forces you to keep track of all previous (unexpired) timers 
> for a key.
> Instead, I suggest overloading registerProcessingTimeTimer with a second 
> boolean argument that will remove all previous timers and set the new timer.
> Note: although I'm using registerProcessingTimeTimer, this applies to 
> registerEventTimeTimer as well.
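
A sketch of how the suggested overload might look from the caller's side (the boolean variant is hypothetical, not an existing Flink API; `ctx`, `previouslyRegisteredTimers`, and `newTimer` are placeholders inside a KeyedProcessFunction#processElement):

{code:java}
// Today: the caller must remember every previously registered time and delete each one.
for (long oldTimer : previouslyRegisteredTimers) {   // bookkeeping maintained by the user
    ctx.timerService().deleteProcessingTimeTimer(oldTimer);
}
ctx.timerService().registerProcessingTimeTimer(newTimer);

// Proposed (hypothetical): drop all earlier timers for the current key in one call.
// ctx.timerService().registerProcessingTimeTimer(newTimer, true /* removeOthers */);
{code}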



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-14136) Operator Topology and Metrics Inside Vertex

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-14136:
---
Labels: auto-deprioritized-major stale-minor  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Operator Topology and Metrics Inside Vertex
> ---
>
> Key: FLINK-14136
> URL: https://issues.apache.org/jira/browse/FLINK-14136
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Reporter: Yadong Xie
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
> Attachments: Kapture 2019-09-17 at 14.31.46.gif, screenshot.png, 
> screenshot2.png
>
>
> In the screenshot below, users can get vertex topology data in the job detail 
> page, but the operator topology and metrics inside vertex is missing in the 
> graph.
> !screenshot.png|width=477,height=206!
> There are actually two operators in the first vertex, their names are Source: 
> Custom Source and Timestamps/Watermarks, but users can only see Source: 
> Custom Source -> Timestamps/Watermarks in the vertex level.
> We can already get some metrics at the operator-level such as records-in and 
> records-out from the metrics REST API (in the screenshot below).
> !screenshot2.png|width=475,height=210!
> If we can get the operators’ topology data inside a vertex, users can see the 
> whole operator topology, with records-received and records-sent information, at 
> a glance. We think it would be quite useful for troubleshooting a job's 
> problems while it is running. Here is a demo in the gif below.
> !Kapture 2019-09-17 at 14.31.46.gif|width=563,height=286!
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-18478) AvroDeserializationSchema does not work with types generated by avrohugger

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18478:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor 
pull-request-available  (was: auto-deprioritized-major pull-request-available 
stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> AvroDeserializationSchema does not work with types generated by avrohugger
> --
>
> Key: FLINK-18478
> URL: https://issues.apache.org/jira/browse/FLINK-18478
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Aljoscha Krettek
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> pull-request-available
>
> The main problem is that the code in {{SpecificData.createSchema()}} tries to 
> reflectively read the {{SCHEMA$}} field, that is normally there in Avro 
> generated classes. However, avrohugger generates this field in a companion 
> object, which the reflective Java code will therefore not find.
> This is also described in these ML threads:
>  * 
> [https://lists.apache.org/thread.html/5db58c7d15e4e9aaa515f935be3b342fe036e97d32e1fb0f0d1797ee@%3Cuser.flink.apache.org%3E]
>  * 
> [https://lists.apache.org/thread.html/cf1c5b8fa7f095739438807de9f2497e04ffe55237c5dea83355112d@%3Cuser.flink.apache.org%3E]
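
A small illustration of the failing lookup (class name is hypothetical): for an avrohugger-generated case class, SCHEMA$ lives on the companion object, which is compiled to a separate MyRecord$ class, so the reflective access below throws NoSuchFieldException.

{code:java}
import org.apache.avro.Schema;

public class AvrohuggerSchemaLookup {
    public static void main(String[] args) throws Exception {
        Class<?> clazz = Class.forName("com.example.MyRecord"); // avrohugger case class
        // This is essentially what SpecificData.createSchema() relies on; it fails
        // here because SCHEMA$ is declared on the companion object (MyRecord$),
        // not on the record class itself.
        Schema schema = (Schema) clazz.getDeclaredField("SCHEMA$").get(null);
        System.out.println(schema);
    }
}
{code}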



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-14039) Flink Kinesis consumer: configurable per-shard consumption rate when running in adaptive mode

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-14039:
---
Labels: auto-deprioritized-major stale-minor  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Flink Kinesis consumer: configurable per-shard consumption rate when running 
> in adaptive mode
> -
>
> Key: FLINK-14039
> URL: https://issues.apache.org/jira/browse/FLINK-14039
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kinesis
>Reporter: Ying Xu
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
>
> Currently, Flink kinesis connector has a fixed 
> [2MB|https://github.com/apache/flink/blob/78748ea1aee8f9d0c0499180a2ef455490b32b24/flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/internals/ShardConsumer.java#L59-L61]
> target rate (per-shard) when running in adaptive rate mode. In specific 
> scenarios, users may want a different target rate. For example, when two Kinesis 
> consumers share a common stream, the user may want to de-prioritize one of the 
> consumers so that it runs with a target rate < 2MB. 
> It is relatively straightforward to implement this feature – simply add a 
> per-shard target-rate consumer config with the default set to 2MB. 
>  
>  
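A hedged sketch of how the proposed knob might be wired into the consumer properties; the target-rate key below is hypothetical and does not exist in the connector today:

{code:java}
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer;
import org.apache.flink.streaming.connectors.kinesis.config.ConsumerConfigConstants;

public class KinesisTargetRateSketch {

    public static FlinkKinesisConsumer<String> buildConsumer() {
        Properties props = new Properties();
        props.setProperty(ConsumerConfigConstants.AWS_REGION, "us-west-2");
        // Existing switch for adaptive per-shard reads.
        props.setProperty(ConsumerConfigConstants.SHARD_USE_ADAPTIVE_READS, "true");
        // Hypothetical key sketching the proposed per-shard target rate in bytes/sec
        // (here 1 MB instead of the hard-coded 2 MB).
        props.setProperty("flink.shard.adaptivereads.targetratebytespersec", "1048576");
        return new FlinkKinesisConsumer<>("my-stream", new SimpleStringSchema(), props);
    }
}
{code}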



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-18357) ContinuousFileReaderOperator checkpoint timeout when files and directories get larger

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18357:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor  (was: 
auto-deprioritized-major stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> ContinuousFileReaderOperator checkpoint timeout when files and directories 
> get larger
> -
>
> Key: FLINK-18357
> URL: https://issues.apache.org/jira/browse/FLINK-18357
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Reporter: Jark Wu
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> This was reported on user-zh; I translated it into this issue: 
> http://apache-flink.147419.n8.nabble.com/env-readFile-td4017.html
> {{env.readFile(format,path, FileProcessingMode.PROCESS_CONTINUOUSLY, 6)}}
> It monitors a directory A; we generate a new sub-directory under A every 
> day, such as:
> A/20200101/
> A/20200102/
> A/20200103/
> ...
> ...
> However, as time goes on, by June it has to monitor 200 directories with 500 
> files in each directory. Then every checkpoint has to synchronize the offsets of 
> 200*500 files, which is very large, and the checkpoint times out. 
> 
> My thought is: could there be a configuration to mark a sub-directory as idle 
> after an inactivity interval, so that we don't need to checkpoint all the file 
> offsets under it? 
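For context, a minimal sketch of the kind of setup described in the report (paths and scan interval are made up):

{code:java}
import org.apache.flink.api.java.io.TextInputFormat;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.FileProcessingMode;

public class ContinuousReadSketch {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // "/data/A" stands in for the monitored root directory; dated sub-directories
        // (A/20200101/, A/20200102/, ...) keep appearing under it over time.
        TextInputFormat format = new TextInputFormat(new Path("/data/A"));
        format.setNestedFileEnumeration(true);

        // Every file offset under /data/A ends up in checkpoint state, which is what
        // grows unboundedly in the reported scenario.
        env.readFile(format, "/data/A", FileProcessingMode.PROCESS_CONTINUOUSLY, 60_000)
                .print();

        env.execute("continuous-file-read-sketch");
    }
}
{code}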



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-14142) Add more metrics to task manager list

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-14142:
---
Labels: auto-deprioritized-major stale-minor  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Add more metrics to task manager list
> -
>
> Key: FLINK-14142
> URL: https://issues.apache.org/jira/browse/FLINK-14142
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Reporter: Yadong Xie
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
> Attachments: 屏幕快照 2019-09-20 下午3.31.58.png
>
>
> On the task manager list page, besides the free slots and all slots, we could 
> add memory and CPU usage metrics for each TaskManager; these metrics are 
> already available in Blink.
> !屏幕快照 2019-09-20 下午3.31.58.png|width=619,height=266!
>  
> REST API needed:
> add CPU and memory usage metrics in the /taskmanagers API



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-18489) java.lang.ArrayIndexOutOfBoundsException from DataOutputSerializer.write

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18489:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor  (was: 
auto-deprioritized-major stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> java.lang.ArrayIndexOutOfBoundsException from DataOutputSerializer.write
> 
>
> Key: FLINK-18489
> URL: https://issues.apache.org/jira/browse/FLINK-18489
> Project: Flink
>  Issue Type: Bug
>  Components: API / Scala
>Affects Versions: 1.10.0
> Environment: {code:java}
> OS current user: yarn
> Current Hadoop/Kerberos user: hadoop
> JVM: Java HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.141-b15
> Maximum heap size: 28960 MiBytes
> JAVA_HOME: /usr/java/jdk1.8.0_141/jre
> Hadoop version: 2.8.5-amzn-6
> JVM Options:
>-Xmx30360049728
>-Xms30360049728
>-XX:MaxDirectMemorySize=4429185024
>-XX:MaxMetaspaceSize=1073741824
>-XX:+UseG1GC
>-XX:+UnlockDiagnosticVMOptions
>-XX:+G1SummarizeConcMark
>-verbose:gc
>-XX:+PrintGCDetails
>-XX:+PrintGCDateStamps
>-XX:+UnlockCommercialFeatures
>-XX:+FlightRecorder
>-XX:+DebugNonSafepoints
>
> -XX:FlightRecorderOptions=defaultrecording=true,settings=/home/hadoop/heap.jfc,dumponexit=true,dumponexitpath=/var/lib/hadoop-yarn/recording.jfr,loglevel=info
>
> -Dlog.file=/var/log/hadoop-yarn/containers/application_1593935560662_0002/container_1593935560662_0002_01_02/taskmanager.log
>-Dlog4j.configuration=file:./log4j.properties
> Program Arguments:
>-Dtaskmanager.memory.framework.off-heap.size=134217728b
>-Dtaskmanager.memory.network.max=1073741824b
>-Dtaskmanager.memory.network.min=1073741824b
>-Dtaskmanager.memory.framework.heap.size=134217728b
>-Dtaskmanager.memory.managed.size=23192823744b
>-Dtaskmanager.cpu.cores=7.0
>-Dtaskmanager.memory.task.heap.size=30225832000b
>-Dtaskmanager.memory.task.off-heap.size=3221225472b
>--configDir.
>
> -Djobmanager.rpc.address=ip-10-180-30-250.us-west-2.compute.internal-Dweb.port=0
>-Dweb.tmpdir=/tmp/flink-web-64f613cf-bf04-4a09-8c14-75c31b619574
>-Djobmanager.rpc.port=33739
>-Drest.address=ip-10-180-30-250.us-west-2.compute.internal
> {code}
>Reporter: Ori Popowski
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> Getting {{java.lang.ArrayIndexOutOfBoundsException}} with the following 
> stacktrace:
> {code:java}
> 2020-07-05 18:25:04
> org.apache.flink.runtime.JobException: Recovery is suppressed by 
> NoRestartBackoffTimeStrategy
>   at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:110)
>   at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:76)
>   at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:192)
>   at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:186)
>   at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:180)
>   at 
> org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:484)
>   at 
> org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:380)
>   at sun.reflect.GeneratedMethodAccessor85.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:279)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:194)
>   at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
>   at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
>   at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
>   at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
>   at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
>   at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
>   at 

[jira] [Updated] (FLINK-18418) document example error

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18418:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor  (was: 
auto-deprioritized-major stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> document example error
> --
>
> Key: FLINK-18418
> URL: https://issues.apache.org/jira/browse/FLINK-18418
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataSet, Documentation
>Reporter: appleyuchi
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> OuterJoin with Flat-Join Function
> [https://ci.apache.org/projects/flink/flink-docs-stable/dev/batch/dataset_transformations.html]
> Please change
>  
> *public void join(Tuple2 movie, Rating rating*
> to
> *public void join(Tuple2 movie, Rating rating,*
> since the parameter list in the example is incomplete.
>  
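For clarity, a sketch of what the corrected flat-join example could look like; {{Rating}} here is a placeholder POJO standing in for the type used in the documentation:

{code:java}
import org.apache.flink.api.common.functions.FlatJoinFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.util.Collector;

public class MovieRatingJoin
        implements FlatJoinFunction<Tuple2<String, String>, MovieRatingJoin.Rating, Tuple2<String, String>> {

    // Placeholder POJO; the docs use their own Rating type.
    public static class Rating {
        public String name;
        public int points;
    }

    @Override
    public void join(Tuple2<String, String> movie, Rating rating, Collector<Tuple2<String, String>> out) {
        // The trailing comma requested in the ticket hints at this third Collector
        // parameter, which a flat-join function requires.
        out.collect(new Tuple2<>(movie.f0, rating == null ? "none" : String.valueOf(rating.points)));
    }
}
{code}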



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-14138) Show Pending Slots in Job Detail

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-14138:
---
Labels: auto-deprioritized-major stale-minor  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Show Pending Slots in Job Detail
> 
>
> Key: FLINK-14138
> URL: https://issues.apache.org/jira/browse/FLINK-14138
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST, Runtime / Web Frontend
>Reporter: Yadong Xie
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
> Attachments: 屏幕快照 2019-09-20 下午12.04.00.png, 屏幕快照 2019-09-20 
> 下午12.04.05.png
>
>
> It is hard to troubleshoot when, after a user submits a job, all subtasks stay 
> in the SCHEDULED status (just like the screenshot below).
> !屏幕快照 2019-09-20 下午12.04.00.png|width=494,height=258!
> The most common reason for this problem is that a vertex has requested more 
> resources than the cluster has. A pending-slots tab could help users check 
> which vertex or subtask is blocked.
> !屏幕快照 2019-09-20 下午12.04.05.png|width=576,height=163!
>  
> REST API needed:
> add /jobs/:jobid/pending-slots API to get pending slots data.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-21325) NoResourceAvailableException while cancelling then resubmitting jobs or after running two days

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-21325:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor  (was: 
auto-deprioritized-major stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> NoResourceAvailableException while cancelling then resubmitting jobs or after 
> running two days
> ---
>
> Key: FLINK-21325
> URL: https://issues.apache.org/jira/browse/FLINK-21325
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes, Runtime / Coordination
> Environment: FLINK 1.12  with 
> [flink-kubernetes_2.11-1.12-SNAPSHOT.jar] in libs directory to fix FLINK 
> restart problem on k8s HA session mode.
>Reporter: hayden zhou
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
> Attachments: clear.log
>
>
> I have five streaming jobs and wanted to clear all state in them, so I canceled 
> all those jobs and then resubmitted them one by one. As a result, two jobs are in 
> RUNNING status, while three jobs are stuck in CREATED status with the error 
> "NoResourceAvailableException: Slot request bulk is not fulfillable! Could 
> not allocate the required slot within slot request timeout".
> I am sure my slots are sufficient.
> Also, after running normally for almost two days, it always becomes impossible 
> to submit a new batch of streaming jobs. This problem was fixed by restarting the 
> k8s JM and TM pods.
> below is the error logs:
> {code:java}
> ava.util.concurrent.CompletionException: 
> org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: 
> Slot request bulk is not fulfillable! Could not allocate the required slot 
> within slot request timeout
>  at 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
>  at 
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
>  at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:607)
>  at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
>  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>  at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990)
>  at 
> org.apache.flink.runtime.scheduler.SharedSlot.cancelLogicalSlotRequest(SharedSlot.java:195)
>  at 
> org.apache.flink.runtime.scheduler.SlotSharingExecutionSlotAllocator.cancelLogicalSlotRequest(SlotSharingExecutionSlotAllocator.java:147)
>  at 
> org.apache.flink.runtime.scheduler.SharingPhysicalSlotRequestBulk.cancel(SharingPhysicalSlotRequestBulk.java:84)
>  at 
> org.apache.flink.runtime.jobmaster.slotpool.PhysicalSlotRequestBulkWithTimestamp.cancel(PhysicalSlotRequestBulkWithTimestamp.java:66)
>  at 
> org.apache.flink.runtime.jobmaster.slotpool.PhysicalSlotRequestBulkCheckerImpl.lambda$schedulePendingRequestBulkWithTimestampCheck$0(PhysicalSlotRequestBulkCheckerImpl.java:87)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:404)
>  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:197)
>  at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
>  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:154)
>  at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
>  at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
>  at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
>  at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
>  at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
>  at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
>  at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
>  at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
>  at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
>  at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
>  at akka.actor.ActorCell.invoke(ActorCell.scala:561)
>  at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
>  at akka.dispatch.Mailbox.run(Mailbox.scala:225)
>  at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
>  at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>  at 
> 

[jira] [Updated] (FLINK-18523) Advance watermark if there is no data in all of the partitions after some time

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18523:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor  (was: 
auto-deprioritized-major stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Advance watermark if there is no data in all of the partitions after some time
> --
>
> Key: FLINK-18523
> URL: https://issues.apache.org/jira/browse/FLINK-18523
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Affects Versions: 1.11.0
>Reporter: chen yong
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> In window calculations with event time, the watermark cannot advance when the 
> source has no data for some reason, so the last windows cannot trigger their 
> calculations.
> The existing parameter {{table.exec.source.idle-timeout}} only lets watermark 
> alignment ignore parallel instances that have become idle. But when no parallel 
> instance produces a watermark at all, the watermark still cannot advance.
> Is it possible to add a lock-timeout parameter (which should be larger than 
> maxOutOfOrderness, with a default of "-1 ms"), so that if the watermark has not 
> been updated for longer than this time (i.e., there is no data), the current 
> time is taken and sent downstream as the watermark?
>  
> thanks!
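For reference, the closest existing DataStream-level mechanism, sketched with an assumed event type; note that {{withIdleness}} only helps when some partitions still carry data, which is exactly the gap described above:

{code:java}
import java.time.Duration;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;

public class IdlenessSketch {

    // Placeholder event type for the sketch.
    public static class MyEvent {
        public long timestamp;
    }

    // Marks a partition idle after 30s without data so downstream operators can
    // advance on the remaining active partitions. It does not advance the watermark
    // when every partition is idle, which is what this ticket asks for.
    public static WatermarkStrategy<MyEvent> strategy() {
        return WatermarkStrategy
                .<MyEvent>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                .withTimestampAssigner((event, ts) -> event.timestamp)
                .withIdleness(Duration.ofSeconds(30));
    }
}
{code}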



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-18445) Short circuit join condition for lookup join

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18445:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor  (was: 
auto-deprioritized-major stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Short circuit join condition for lookup join
> 
>
> Key: FLINK-18445
> URL: https://issues.apache.org/jira/browse/FLINK-18445
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Reporter: Rui Li
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> Consider the following query:
> {code}
> select *
> from probe
> left join
> build for system_time as of probe.ts
> on probe.key=build.key and probe.col is not null
> {code}
> In current implementation, we lookup each probe.key in build to decide 
> whether a match is found. A possible optimization is to skip the lookup for 
> rows whose {{col}} is null.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-18476) PythonEnvUtilsTest#testStartPythonProcess fails locally

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18476:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor test-stability  
(was: auto-deprioritized-major stale-minor test-stability)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> PythonEnvUtilsTest#testStartPythonProcess fails locally 
> 
>
> Key: FLINK-18476
> URL: https://issues.apache.org/jira/browse/FLINK-18476
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Tests
>Affects Versions: 1.11.0
>Reporter: Dawid Wysakowicz
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> test-stability
>
> The 
> {{org.apache.flink.client.python.PythonEnvUtilsTest#testStartPythonProcess}} 
> failed in my local environment as it assumes the environment has 
> {{/usr/bin/python}}. 
> I don't know exactly how I got Python on Ubuntu 20.04, but I only have an 
> alias for {{python = python3}}. Therefore the test fails.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-14147) Reduce REST API Request/Response redundancy

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-14147:
---
Labels: auto-deprioritized-major stale-minor  (was: 
auto-deprioritized-major)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


>  Reduce REST API Request/Response redundancy
> 
>
> Key: FLINK-14147
> URL: https://issues.apache.org/jira/browse/FLINK-14147
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Reporter: Yadong Xie
>Priority: Minor
>  Labels: auto-deprioritized-major, stale-minor
> Attachments: 16_17_07__09_20_2019.jpg
>
>
> 1. Redundant Response
> In the response of the /jobs/:jobid API, the id and name in both the plan and the 
> vertices data are exactly the same; this wastes a lot of network bandwidth 
> if the vertex graph is very big (1000+ vertices in a job).
> !16_17_07__09_20_2019.jpg|width=427,height=279!
> 2. Redundant Requests
> In the current Web UI design, we have to send one watermark query per vertex to 
> display the low watermarks in the job graph. If the vertex count is very 
> large (sometimes 1000+), the Web UI sends 1000+ requests to the REST API; since 
> the number of concurrent HTTP requests in a browser is limited, this causes a 
> long delay for users. In our tests, once there are more than 165 watermark calls, 
> the requests get redirected to /bad-request and then return 404.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-18379) Introduce asynchronous UDF/UDTF

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18379:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor  (was: 
auto-deprioritized-major stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Introduce asynchronous UDF/UDTF
> ---
>
> Key: FLINK-18379
> URL: https://issues.apache.org/jira/browse/FLINK-18379
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Benchao Li
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> Currently we have Async I/O in the DataStream API, as well as async temporal 
> table lookups. There are cases where we want async capability for UDF/UDTF in 
> order to do external IO.
> And based on that async capability, users can build their own mini-batching for 
> UDF/UDTF.
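A minimal sketch of the existing DataStream Async I/O that this ticket wants mirrored for SQL UDF/UDTF; the external lookup below is a stand-in for whatever IO the function would perform:

{code:java}
import java.util.Collections;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

import org.apache.flink.streaming.api.datastream.AsyncDataStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

public class AsyncLookupSketch extends RichAsyncFunction<String, String> {

    @Override
    public void asyncInvoke(String key, ResultFuture<String> resultFuture) {
        // externalLookup() stands in for the external IO an async UDF/UDTF would do.
        CompletableFuture
                .supplyAsync(() -> externalLookup(key))
                .thenAccept(value -> resultFuture.complete(Collections.singleton(value)));
    }

    private String externalLookup(String key) {
        return key + "-enriched";
    }

    // Wiring into a pipeline: at most 100 in-flight requests, 1s timeout per element.
    public static DataStream<String> enrich(DataStream<String> input) {
        return AsyncDataStream.unorderedWait(input, new AsyncLookupSketch(), 1, TimeUnit.SECONDS, 100);
    }
}
{code}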



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-24186) Disable single rowtime column check for collect/print

2021-11-13 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-24186:
---
Labels: pull-request-available stale-assigned  (was: pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 30 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it.


> Disable single rowtime column check for collect/print
> -
>
> Key: FLINK-24186
> URL: https://issues.apache.org/jira/browse/FLINK-24186
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: Timo Walther
>Assignee: Timo Walther
>Priority: Major
>  Labels: pull-request-available, stale-assigned
>
> As seen in FLINK-23751, the single rowtime column check can also be triggered 
> during collecting and printing, where it is not relevant because watermarks are 
> not used.
> The exception is also misleading as it references a {{DataStream}}:
> {code:java}
> [ERROR] Could not execute SQL statement. Reason:
> org.apache.flink.table.api.TableException: Found more than one rowtime field: 
> [bidtime, window_time] in the query when insert into 
> 'default_catalog.default_database.Unregistered_Collect_Sink_8'.
> Please select the rowtime field that should be used as event-time timestamp 
> for the DataStream by casting all other fields to TIMESTAMP.
> {code}
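A sketch of the workaround the exception message points at, assuming an unregistered {{bids}} table with the columns from the error above; the ticket argues this cast should not be necessary for collect()/print():

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class RowtimeCastSketch {

    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.newInstance().inStreamingMode().build());

        // "bids" (with rowtime columns bidtime and window_time) is assumed to be
        // defined elsewhere; casting one of them to TIMESTAMP works around the
        // single-rowtime-column check when printing.
        tEnv.executeSql(
                        "SELECT bidtime, CAST(window_time AS TIMESTAMP(3)) AS window_time, price FROM bids")
                .print();
    }
}
{code}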



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #17345: [FLINK-24227][connectors/kinesis] Added Kinesis Data Streams Sink i…

2021-11-13 Thread GitBox


flinkbot edited a comment on pull request #17345:
URL: https://github.com/apache/flink/pull/17345#issuecomment-926109717


   
   ## CI report:
   
   * 7d400bdab50aeecfc400cdd193633e2b50c4babc Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26458)
 
   * 26e4701ca24cc8d1c46eebb6c740c406cb9e2e9b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26479)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17345: [FLINK-24227][connectors/kinesis] Added Kinesis Data Streams Sink i…

2021-11-13 Thread GitBox


flinkbot edited a comment on pull request #17345:
URL: https://github.com/apache/flink/pull/17345#issuecomment-926109717


   
   ## CI report:
   
   * 7d400bdab50aeecfc400cdd193633e2b50c4babc Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26458)
 
   * 26e4701ca24cc8d1c46eebb6c740c406cb9e2e9b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] CrynetLogistics commented on pull request #17345: [FLINK-24227][connectors/kinesis] Added Kinesis Data Streams Sink i…

2021-11-13 Thread GitBox


CrynetLogistics commented on pull request #17345:
URL: https://github.com/apache/flink/pull/17345#issuecomment-968009671


   @dannycranmer Please let me know.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] CrynetLogistics commented on pull request #17345: [FLINK-24227][connectors/kinesis] Added Kinesis Data Streams Sink i…

2021-11-13 Thread GitBox


CrynetLogistics commented on pull request #17345:
URL: https://github.com/apache/flink/pull/17345#issuecomment-968009252


   Squashed & Rebased. Did some more refactoring to remove 1 of the 3 v1 
dependencies (the sts one).


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-24887) Retrying savepoints may cause early cluster shutdown

2021-11-13 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-24887.

Resolution: Fixed

master:
6b9c1ac9c6d4d89f961612672ece326e8b9cb02d
a66a876126b2f702fa224be534aca4c729dd6f8a

> Retrying savepoints may cause early cluster shutdown
> 
>
> Key: FLINK-24887
> URL: https://issues.apache.org/jira/browse/FLINK-24887
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / REST
>Affects Versions: 1.15.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> If an operation is retried, we potentially access the result of a previous 
> attempt to see if it has already failed and eagerly fail the trigger request. 
> If that attempt is already complete then this may lead to an unexpected 
> shutdown of the cluster.
> Beyond this issue, the eager checking of previous attempts makes error 
> handling more complicated, because you have to cover all cases for both the 
> trigger and status-retrieval operations.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] zentol merged pull request #17778: [FLINK-24887][rest] Triggers do not check previous result

2021-11-13 Thread GitBox


zentol merged pull request #17778:
URL: https://github.com/apache/flink/pull/17778


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17777: [FLINK-24886][core] TimeUtils supports the form of m.

2021-11-13 Thread GitBox


flinkbot edited a comment on pull request #1:
URL: https://github.com/apache/flink/pull/1#issuecomment-966971402


   
   ## CI report:
   
   * 405416d6b36b90cdfebc094af887eecd0a6707fb Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26475)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17698: [FLINK-24689][runtime-web] Add log's last modify time in log list view

2021-11-13 Thread GitBox


flinkbot edited a comment on pull request #17698:
URL: https://github.com/apache/flink/pull/17698#issuecomment-961944542


   
   ## CI report:
   
   * 06aa54e09e094c146ed036647fdcd82497623a35 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26474)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org