[jira] [Commented] (FLINK-18641) "Failure to finalize checkpoint" error in MasterTriggerRestoreHook

2020-07-22 Thread Jiangjie Qin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163267#comment-17163267
 ] 

Jiangjie Qin commented on FLINK-18641:
--

[~pnowojski] [~SleePy] There are actually more problems than the issue 
reported by this ticket. In checkpointing there are two orderings:
 # the checkpoint taking order;
 # the checkpoint acknowledge handling order.

This ticket reports an unexpected checkpoint acknowledge handling order. While 
the acknowledge handling order is not super important, the checkpoint taking 
order is critical because violating it may cause correctness problems.

By design, a checkpoint should always take the snapshot of the master hooks 
and the OperatorCoordinators before taking the checkpoint on the tasks.

Prior to FLIP-27, this ordering was always guaranteed because the checkpoint on 
the tasks was triggered in one of the following ways:
 # The CheckpointCoordinator takes a snapshot of the master hooks, waits for it 
to finish, and then triggers the snapshot on the tasks. (The strict snapshot 
order is guaranteed by the CheckpointCoordinator.)
 # The CheckpointCoordinator takes a snapshot of the master hooks. A master 
hook triggers the checkpoint on the tasks via an ExternallyInducedSource. The 
CheckpointCoordinator waits until the master hooks finish their snapshots. (In 
this case, the master hooks have to guarantee consistency between the master 
state and the task states. The tasks may ack the checkpoint before the master 
hooks complete, but those acks won't be handled until the master hook acks are 
handled.)

After FLIP-27, with the introduction of the OperatorCoordinator, we need to 
make sure that the task checkpoint cannot be triggered before the 
OperatorCoordinator checkpoint. Given that the task checkpoint can be triggered 
by either the CheckpointCoordinator itself or the master hooks, we should first 
checkpoint the OperatorCoordinators, then the master hooks, and lastly trigger 
the task checkpoint.

The current code does not do this correctly, which means the master hooks can 
trigger the snapshot of the tasks before the OperatorCoordinators are 
checkpointed.

Regarding the fix, I am thinking of the following:
 # Change the checkpoint order to checkpoint the OperatorCoordinators first, 
i.e. before checkpointing the master hooks.
 # Keep the master hooks async, but invoke them after checkpointing the 
OperatorCoordinators. This is fine because, according to its Javadoc, 
{{MasterTriggerRestoreHook#triggerCheckpoint()}} can choose to be synchronous 
if it wants to; otherwise the CheckpointCoordinator treats it as asynchronous.
 # Change the logic to only call 
{{CheckpointCoordinator#completePendingCheckpoint()}} after all the checkpoints 
are acked, instead of only looking at the task checkpoint acks. This is 
because the acknowledge handling order may vary given the current async 
checkpoint pattern.

I agree that in the long run the OperatorCoordinator can actually supersede 
the master hooks, so we can probably mark the master hooks as deprecated.
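The proposed trigger order can be sketched with plain JDK futures. This is a 
hypothetical simplification with made-up method names, not the actual 
CheckpointCoordinator code: each phase is chained so it starts only after the 
previous phase has completed.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class CheckpointOrderSketch {
    static final List<String> order = new ArrayList<>();

    // Hypothetical stand-ins for the three snapshot phases.
    static CompletableFuture<Void> snapshotOperatorCoordinators() {
        return CompletableFuture.runAsync(() -> order.add("coordinators"));
    }

    static CompletableFuture<Void> snapshotMasterHooks() {
        return CompletableFuture.runAsync(() -> order.add("masterHooks"));
    }

    static void triggerTaskCheckpoints() {
        order.add("tasks");
    }

    public static void main(String[] args) {
        // Chain the phases: OperatorCoordinators first, then master hooks,
        // and only then the task checkpoints. thenCompose/thenRun guarantee
        // the previous stage completed before the next one starts.
        snapshotOperatorCoordinators()
                .thenCompose(ignored -> snapshotMasterHooks())
                .thenRun(CheckpointOrderSketch::triggerTaskCheckpoints)
                .join();
        System.out.println(order); // prints [coordinators, masterHooks, tasks]
    }
}
```

The master hooks stay asynchronous here; only their completion, not their 
execution thread, gates the task trigger.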

> "Failure to finalize checkpoint" error in MasterTriggerRestoreHook
> --
>
> Key: FLINK-18641
> URL: https://issues.apache.org/jira/browse/FLINK-18641
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.11.0
>Reporter: Brian Zhou
>Priority: Major
>
> https://github.com/pravega/flink-connectors is a Pravega connector for Flink. 
> The ReaderCheckpointHook[1] class uses the Flink `MasterTriggerRestoreHook` 
> interface to trigger the Pravega checkpoint during Flink checkpoints to 
> ensure data recovery. The checkpoint recovery tests run fine in Flink 1.10, 
> but they hit the issue below in Flink 1.11, causing the tests to time out. We 
> suspect it is related to the checkpoint coordinator thread model changes in 
> Flink 1.11.
> Error stacktrace:
> {code}
> 2020-07-09 15:39:39,999 30945 [jobmanager-future-thread-5] WARN  
> o.a.f.runtime.jobmaster.JobMaster - Error while processing checkpoint 
> acknowledgement message
> org.apache.flink.runtime.checkpoint.CheckpointException: Could not finalize 
> the pending checkpoint 3. Failure reason: Failure to finalize checkpoint.
>  at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.completePendingCheckpoint(CheckpointCoordinator.java:1033)
>  at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.receiveAcknowledgeMessage(CheckpointCoordinator.java:948)
>  at 
> org.apache.flink.runtime.scheduler.SchedulerBase.lambda$acknowledgeCheckpoint$4(SchedulerBase.java:802)
>  at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 

[jira] [Commented] (FLINK-18262) PyFlink end-to-end test stalls

2020-07-22 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163265#comment-17163265
 ] 

Robert Metzger commented on FLINK-18262:


Thanks a lot for your comment. I agree that we should only spend more effort on 
this issue if we see more failure cases.

Regarding the broken Azure links: By default, the retention of failed builds is 
configured to be quite short in Azure. Somewhere in the settings, you can 
increase the retention time, so that old builds are stored longer.

> PyFlink end-to-end test stalls
> --
>
> Key: FLINK-18262
> URL: https://issues.apache.org/jira/browse/FLINK-18262
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Tests
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3299&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5
> {code}
> 2020-06-11T17:25:37.2919869Z We now have 2 NodeManagers up.
> 2020-06-11T18:22:11.3398295Z ##[error]The operation was canceled.
> 2020-06-11T18:22:11.3411819Z ##[section]Finishing: Run e2e tests
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-16085) Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

2020-07-22 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger closed FLINK-16085.
--
Fix Version/s: 1.12.0
   Resolution: Fixed

Merged in 
https://github.com/apache/flink/commit/7883c78ea0df54216c2490bb67badb2d8611949c

Thanks a lot for your help [~Authuir]!

> Translate "Joins in Continuous Queries" page of "Streaming Concepts" into 
> Chinese 
> --
>
> Key: FLINK-16085
> URL: https://issues.apache.org/jira/browse/FLINK-16085
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: Authuir
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/streaming/joins.html
> The markdown file is located in {{flink/docs/dev/table/streaming/joins.zh.md}}





[GitHub] [flink] rmetzger closed pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

2020-07-22 Thread GitBox


rmetzger closed pull request #12420:
URL: https://github.com/apache/flink/pull/12420


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] JingsongLi merged pull request #12721: [hotfix] Fix the table document errors

2020-07-22 Thread GitBox


JingsongLi merged pull request #12721:
URL: https://github.com/apache/flink/pull/12721


   







[GitHub] [flink] rmetzger commented on pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

2020-07-22 Thread GitBox


rmetzger commented on pull request #12420:
URL: https://github.com/apache/flink/pull/12420#issuecomment-662843164


   Thanks for the review & contribution. I will merge the PR now.







[GitHub] [flink] flinkbot commented on pull request #12965: [FLINK-18552][tests] Update migration tests in master to cover migration till release-1.11

2020-07-22 Thread GitBox


flinkbot commented on pull request #12965:
URL: https://github.com/apache/flink/pull/12965#issuecomment-662841389


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 5366de911c86f37b837e8ebf2367d6d9ae5e5905 (Thu Jul 23 
06:32:57 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
   The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Updated] (FLINK-18552) Update migration tests in master to cover migration from release-1.11

2020-07-22 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-18552:

Description: 
We should update the following tests to cover migration from release-1.11:
 * {{CEPMigrationTest}}
 * {{BucketingSinkMigrationTest}}
 * {{FlinkKafkaConsumerBaseMigrationTest}}
 * {{ContinuousFileProcessingMigrationTest}}
 * {{WindowOperatorMigrationTest}}
 * {{StatefulJobSavepointMigrationITCase.scala}}
 * {{StatefulJobWBroadcastStateMigrationITCase.scala}}

 

Referring to https://issues.apache.org/jira/browse/FLINK-13613, there are more 
migration tests that require updates:
 
* FlinkKafkaProducer011MigrationTest
* FlinkKafkaProducerMigrationOperatorTest
* FlinkKafkaProducerMigrationTest
* StatefulJobSavepointMigrationITCase
* StatefulJobWBroadcastStateMigrationITCase
* TypeSerializerSnapshotMigrationITCase
* AbstractKeyedOperatorRestoreTestBase
* AbstractNonKeyedOperatorRestoreTestBase
* FlinkKinesisConsumerMigrationTest 
 


 

 

  was:
We should update the following tests to cover migration from release-1.11:
 * {{CEPMigrationTest}}
 * {{BucketingSinkMigrationTest}}
 * {{FlinkKafkaConsumerBaseMigrationTest}}
 * {{ContinuousFileProcessingMigrationTest}}
 * {{WindowOperatorMigrationTest}}
 * {{StatefulJobSavepointMigrationITCase}}
 * {{StatefulJobWBroadcastStateMigrationITCase}}


> Update migration tests in master to cover migration from release-1.11
> -
>
> Key: FLINK-18552
> URL: https://issues.apache.org/jira/browse/FLINK-18552
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Reporter: Zhijiang
>Assignee: Yun Gao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> We should update the following tests to cover migration from release-1.11:
>  * {{CEPMigrationTest}}
>  * {{BucketingSinkMigrationTest}}
>  * {{FlinkKafkaConsumerBaseMigrationTest}}
>  * {{ContinuousFileProcessingMigrationTest}}
>  * {{WindowOperatorMigrationTest}}
>  * {{StatefulJobSavepointMigrationITCase.scala}}
>  * {{StatefulJobWBroadcastStateMigrationITCase.scala}}
>  
> Referring to https://issues.apache.org/jira/browse/FLINK-13613, there are more 
> migration tests that require updates:
>  
> * FlinkKafkaProducer011MigrationTest
> * FlinkKafkaProducerMigrationOperatorTest
> * FlinkKafkaProducerMigrationTest
> * StatefulJobSavepointMigrationITCase
> * StatefulJobWBroadcastStateMigrationITCase
> * TypeSerializerSnapshotMigrationITCase
> * AbstractKeyedOperatorRestoreTestBase
> * AbstractNonKeyedOperatorRestoreTestBase
> * FlinkKinesisConsumerMigrationTest 
>  
>  
>  





[jira] [Reopened] (FLINK-18552) Update migration tests in master to cover migration from release-1.11

2020-07-22 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao reopened FLINK-18552:
-

> Update migration tests in master to cover migration from release-1.11
> -
>
> Key: FLINK-18552
> URL: https://issues.apache.org/jira/browse/FLINK-18552
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Reporter: Zhijiang
>Assignee: Yun Gao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> We should update the following tests to cover migration from release-1.11:
>  * {{CEPMigrationTest}}
>  * {{BucketingSinkMigrationTest}}
>  * {{FlinkKafkaConsumerBaseMigrationTest}}
>  * {{ContinuousFileProcessingMigrationTest}}
>  * {{WindowOperatorMigrationTest}}
>  * {{StatefulJobSavepointMigrationITCase}}
>  * {{StatefulJobWBroadcastStateMigrationITCase}}





[GitHub] [flink] gaoyunhaii opened a new pull request #12965: [FLINK-18552][tests] Update migration tests in master to cover migration till release-1.11

2020-07-22 Thread GitBox


gaoyunhaii opened a new pull request #12965:
URL: https://github.com/apache/flink/pull/12965


   **This PR cherry-picks #12876 to release-1.11**
   
   ## What is the purpose of the change
   
   This PR updates the following tests to cover migration till release-1.11 
(including release-1.10 and release-1.11):
   
   - `CEPMigrationTest`
   - `BucketingSinkMigrationTest`
   - `FlinkKafkaConsumerBaseMigrationTest`
   - `ContinuousFileProcessingMigrationTest`
   - `WindowOperatorMigrationTest`
   - `StatefulJobSavepointMigrationITCase.scala`
   - `StatefulJobWBroadcastStateMigrationITCase.scala`
   
   
   ## Brief change log
   
 - c768117e95a5806c0caafcb567ddee9d6cd1ffa3 fixes for `CEPMigrationTest`
 - aaf50401feaca88f7e992105369d025b07695937 fixes for 
`BucketingSinkMigrationTest`
 - 7d2b7cf74574a50f098915fb34daa863fd97ee3c fixes for 
`FlinkKafkaConsumerBaseMigrationTest`
 - a1f07bd541f551d55a627ea75efcd6106cb63c17 fixes for 
`ContinuousFileProcessingMigrationTest`
 - 2be0da4cef2ddc556bcbbce5657abb375f672162 fixes for 
`WindowOperatorMigrationTest`
 - 96e67ee5b0e8c1341bfe7354bdc50702a96f3c65 fixes for 
`StatefulJobSavepointMigrationITCase`
 - f9457359a291e5df59f3448cda5f1816e7288c7f fixes for 
`StatefulJobWBroadcastStateMigrationITCase`
   
   For each test, we:
 - Create the corresponding savepoint/checkpoint files by running the 
corresponding `write*snapshots()` methods on the **corresponding branch**.
 - Add the 1.10 and 1.11 versions to the migration test version list.
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): **no**
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: **no**
 - The serializers: **no**
 - The runtime per-record code paths (performance sensitive): **no**
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: **no**
 - The S3 file system connector: **no**
   
   ## Documentation
   
 - Does this pull request introduce a new feature? **no**
 - If yes, how is the feature documented? **not applicable**







[jira] [Commented] (FLINK-10515) Improve tokenisation of program args passed as string

2020-07-22 Thread Andrey Zagrebin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-10515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163245#comment-17163245
 ] 

Andrey Zagrebin commented on FLINK-10515:
-

I think the problem is that the Web UI has a single field for space-separated 
arguments. It is historical and usually more natural for users. Therefore, we 
still need the tokenisation somewhere, and we might want to keep it on the 
Java server side if not in the Web UI.

Alternatively, we could add a switch to a list of arguments in the Web UI as 
well, e.g. a multiline textbox for newline-separated arguments, or a separate 
textbox for each argument.
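As an illustration of what unix-style tokenisation would involve, here is a 
minimal sketch (not the actual Flink REST code) of a splitter that honors 
double quotes and backslash escapes:

```java
import java.util.ArrayList;
import java.util.List;

public class ArgsTokenizer {
    /**
     * Splits a program-args string on spaces, honoring unix-style
     * backslash escapes and double quotes. Quoted or escaped spaces
     * stay inside the current token.
     */
    static List<String> tokenize(String input) {
        List<String> tokens = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        boolean inQuotes = false, escaped = false, started = false;
        for (char c : input.toCharArray()) {
            if (escaped) {               // char after a backslash is literal
                current.append(c);
                escaped = false;
                started = true;
            } else if (c == '\\') {
                escaped = true;
                started = true;
            } else if (c == '"') {       // toggle quoting, emit no char
                inQuotes = !inQuotes;
                started = true;
            } else if (c == ' ' && !inQuotes) {
                if (started) {           // token boundary
                    tokens.add(current.toString());
                    current.setLength(0);
                    started = false;
                }
            } else {
                current.append(c);
                started = true;
            }
        }
        if (started) {
            tokens.add(current.toString());
        }
        return tokens;
    }

    public static void main(String[] args) {
        System.out.println(tokenize("--input \"my file.txt\" --flag a\\ b"));
        // prints [--input, my file.txt, --flag, a b]
    }
}
```

Whether this lives on the Java server side or in the Web UI, the escaping 
rules would need to be documented so users know how to pass arguments 
containing spaces.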

> Improve tokenisation of program args passed as string
> -
>
> Key: FLINK-10515
> URL: https://issues.apache.org/jira/browse/FLINK-10515
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST
>Affects Versions: 1.7.0
>Reporter: Andrey Zagrebin
>Assignee: Andrey Zagrebin
>Priority: Major
>
> At the moment tokenisation of program args does not respect escape 
> characters. It can be improved to support at least program args separated by 
> spaces with unix style escaping.





[jira] [Commented] (FLINK-15649) Support mounting volumes

2020-07-22 Thread Canbin Zheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163240#comment-17163240
 ] 

Canbin Zheng commented on FLINK-15649:
--

[~DaDaShen] Previously I didn't think volume mounting was a basic 
functionality, and since the pod template can mount volumes, I did not push 
this feature forward. Recently we found that writing to the overlay filesystem 
could seriously affect performance, so maybe you are right that volume mounting 
is necessary for users to improve overall performance.

BTW, please don't submit PRs before consensus is reached and a committer 
assigns the issue to you.

> Support mounting volumes 
> -
>
> Key: FLINK-15649
> URL: https://issues.apache.org/jira/browse/FLINK-15649
> Project: Flink
>  Issue Type: Sub-task
>  Components: Deployment / Kubernetes
>Reporter: Canbin Zheng
>Priority: Major
>  Labels: pull-request-available
>
> Add support for mounting K8S volumes, including emptydir, hostpath, pv etc.





[jira] [Resolved] (FLINK-18559) Refactor tests for datastream / table collect

2020-07-22 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng resolved FLINK-18559.
-
Resolution: Done

> Refactor tests for datastream / table collect
> -
>
> Key: FLINK-18559
> URL: https://issues.apache.org/jira/browse/FLINK-18559
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream, Table SQL / API
>Reporter: Caizhi Weng
>Priority: Major
> Fix For: 1.12.0
>
>
> Tests for datastream / table collect are a bit messy. The testing methods are 
> too long and it is hard to add new tests. We should refactor them before 
> adding new collecting iterators.





[jira] [Resolved] (FLINK-18560) Introduce collect iterator with at least once semantics and exactly once semantics without fault tolerance

2020-07-22 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng resolved FLINK-18560.
-
Resolution: Implemented

> Introduce collect iterator with at least once semantics and exactly once 
> semantics without fault tolerance
> --
>
> Key: FLINK-18560
> URL: https://issues.apache.org/jira/browse/FLINK-18560
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream, Table SQL / API
>Reporter: Caizhi Weng
>Priority: Major
> Fix For: 1.12.0
>
>
> See FLINK-18558 for more information.





[GitHub] [flink] flinkbot edited a comment on pull request #12955: [FLINK-18632][table-planner-blink] Assign row kind from input to outp…

2020-07-22 Thread GitBox


flinkbot edited a comment on pull request #12955:
URL: https://github.com/apache/flink/pull/12955#issuecomment-662294282


   
   ## CI report:
   
   * cb1a5e6814cfe93cc0f075d5b11859dbe7d18c11 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4714)
 
   * 72d503c5afb9de060729cb9eab7cc9da2c4f2da1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4753)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-18641) "Failure to finalize checkpoint" error in MasterTriggerRestoreHook

2020-07-22 Thread Jiangjie Qin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163239#comment-17163239
 ] 

Jiangjie Qin commented on FLINK-18641:
--

[~pnowojski] In Flink 1.10, the checkpoint thread blocked on the future 
returned by the master hook, so that thread would not handle the acks from the 
tasks until the master hook had completed.

[https://github.com/apache/flink/blob/bfe6c2eddedaf3b8067c973ba82a06b36d5095f9/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java#L627]

In Flink 1.11, the checkpoint thread no longer blocks on that future; it uses 
handleAsync instead.

[https://github.com/apache/flink/blob/791e276c8346a49130cb096bafa128d7f1231236/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java#L708]

[https://github.com/apache/flink/blob/791e276c8346a49130cb096bafa128d7f1231236/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java#L544]

So that is what I meant by
{quote}In Flink 1.10, the checkpoint thread blocks on the future returned by 
the master hook. This becomes asynchronous in Flink 1.11.
{quote}
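The behavioral difference can be illustrated with plain JDK futures. This is a 
simplified sketch, not the actual coordinator code; the future stands in for 
the one returned by the master hook:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class BlockingVsAsyncSketch {
    public static void main(String[] args) throws Exception {
        // Stand-in for the future returned by the master hook's snapshot.
        CompletableFuture<String> hookSnapshot =
                CompletableFuture.supplyAsync(() -> "hook-state");

        // 1.10-style: the checkpoint thread blocks here, so any task acks
        // queued behind it cannot be handled until the hook completes.
        String blockingResult = hookSnapshot.get(10, TimeUnit.SECONDS);

        // 1.11-style: register a continuation instead. The calling thread
        // returns immediately and is free to handle task acks, so the ack
        // handling order is no longer tied to the hook's completion.
        CompletableFuture<String> asyncResult =
                hookSnapshot.handleAsync((state, error) ->
                        error == null ? state : "snapshot-failed");

        System.out.println(blockingResult + " / " + asyncResult.join());
    }
}
```

With the blocking variant, the ordering falls out of the single-threaded wait; 
with handleAsync, nothing prevents task acks from being processed before the 
hook's continuation runs, which is the race this ticket describes.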
 

> "Failure to finalize checkpoint" error in MasterTriggerRestoreHook
> --
>
> Key: FLINK-18641
> URL: https://issues.apache.org/jira/browse/FLINK-18641
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.11.0
>Reporter: Brian Zhou
>Priority: Major
>
> https://github.com/pravega/flink-connectors is a Pravega connector for Flink. 
> The ReaderCheckpointHook[1] class uses the Flink `MasterTriggerRestoreHook` 
> interface to trigger the Pravega checkpoint during Flink checkpoints to 
> ensure data recovery. The checkpoint recovery tests run fine in Flink 1.10, 
> but they hit the issue below in Flink 1.11, causing the tests to time out. We 
> suspect it is related to the checkpoint coordinator thread model changes in 
> Flink 1.11.
> Error stacktrace:
> {code}
> 2020-07-09 15:39:39,999 30945 [jobmanager-future-thread-5] WARN  
> o.a.f.runtime.jobmaster.JobMaster - Error while processing checkpoint 
> acknowledgement message
> org.apache.flink.runtime.checkpoint.CheckpointException: Could not finalize 
> the pending checkpoint 3. Failure reason: Failure to finalize checkpoint.
>  at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.completePendingCheckpoint(CheckpointCoordinator.java:1033)
>  at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.receiveAcknowledgeMessage(CheckpointCoordinator.java:948)
>  at 
> org.apache.flink.runtime.scheduler.SchedulerBase.lambda$acknowledgeCheckpoint$4(SchedulerBase.java:802)
>  at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.flink.util.SerializedThrowable: Pending checkpoint has 
> not been fully acknowledged yet
>  at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:195)
>  at 
> org.apache.flink.runtime.checkpoint.PendingCheckpoint.finalizeCheckpoint(PendingCheckpoint.java:298)
>  at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.completePendingCheckpoint(CheckpointCoordinator.java:1021)
>  ... 9 common frames omitted
> {code}
> More detail in this mailing thread: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Pravega-connector-cannot-recover-from-the-checkpoint-due-to-quot-Failure-to-finalize-checkpoint-quot-td36652.html
> Also in https://github.com/pravega/flink-connectors/issues/387





[jira] [Closed] (FLINK-18621) Simplify the methods of Executor interface in sql client

2020-07-22 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-18621.

Resolution: Fixed

master: c895737c43814a501888e2e3929f0123edc2af70

> Simplify the methods of Executor interface in sql client
> 
>
> Key: FLINK-18621
> URL: https://issues.apache.org/jira/browse/FLINK-18621
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Client
>Reporter: godfrey he
>Assignee: godfrey he
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> After {{TableEnvironment#executeSql}} is introduced, many methods in 
> {{Executor}} interface can be replaced with {{TableEnvironment#executeSql}}. 
> Those methods include:
> listCatalogs, listDatabases, createTable, dropTable, listTables, 
> listFunctions, useCatalog, useDatabase, getTableSchema (use {{DESCRIBE xx}})





[GitHub] [flink] JingsongLi merged pull request #12923: [FLINK-18621][sql-client] Simplify the methods of Executor interface in sql client

2020-07-22 Thread GitBox


JingsongLi merged pull request #12923:
URL: https://github.com/apache/flink/pull/12923


   







[jira] [Commented] (FLINK-18637) Key group is not in KeyGroupRange

2020-07-22 Thread Ori Popowski (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163232#comment-17163232
 ] 

Ori Popowski commented on FLINK-18637:
--

Great, thanks

> Key group is not in KeyGroupRange
> -
>
> Key: FLINK-18637
> URL: https://issues.apache.org/jira/browse/FLINK-18637
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
> Environment: Version: 1.10.0, Rev:, Date:
> OS current user: yarn
>  Current Hadoop/Kerberos user: hadoop
>  JVM: Java HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.141-b15
>  Maximum heap size: 28960 MiBytes
>  JAVA_HOME: /usr/java/jdk1.8.0_141/jre
>  Hadoop version: 2.8.5-amzn-6
>  JVM Options:
>  -Xmx30360049728
>  -Xms30360049728
>  -XX:MaxDirectMemorySize=4429185024
>  -XX:MaxMetaspaceSize=1073741824
>  -XX:+UseG1GC
>  -XX:+UnlockDiagnosticVMOptions
>  -XX:+G1SummarizeConcMark
>  -verbose:gc
>  -XX:+PrintGCDetails
>  -XX:+PrintGCDateStamps
>  -XX:+UnlockCommercialFeatures
>  -XX:+FlightRecorder
>  -XX:+DebugNonSafepoints
>  
> -XX:FlightRecorderOptions=defaultrecording=true,settings=/home/hadoop/heap.jfc,dumponexit=true,dumponexitpath=/var/lib/hadoop-yarn/recording.jfr,loglevel=info
>  
> -Dlog.file=/var/log/hadoop-yarn/containers/application_1593935560662_0002/container_1593935560662_0002_01_02/taskmanager.log
>  -Dlog4j.configuration=[file:./log4j.properties|file:///log4j.properties]
>  Program Arguments:
>  -Dtaskmanager.memory.framework.off-heap.size=134217728b
>  -Dtaskmanager.memory.network.max=1073741824b
>  -Dtaskmanager.memory.network.min=1073741824b
>  -Dtaskmanager.memory.framework.heap.size=134217728b
>  -Dtaskmanager.memory.managed.size=23192823744b
>  -Dtaskmanager.cpu.cores=7.0
>  -Dtaskmanager.memory.task.heap.size=30225832000b
>  -Dtaskmanager.memory.task.off-heap.size=3221225472b
>  --configDir.
>  
> -Djobmanager.rpc.address=ip-10-180-30-250.us-west-2.compute.internal-Dweb.port=0
>  -Dweb.tmpdir=/tmp/flink-web-64f613cf-bf04-4a09-8c14-75c31b619574
>  -Djobmanager.rpc.port=33739
>  -Drest.address=ip-10-180-30-250.us-west-2.compute.internal
>Reporter: Ori Popowski
>Priority: Major
>
> I'm getting this error when creating a savepoint. I've read in 
> https://issues.apache.org/jira/browse/FLINK-16193 that it's caused by 
> unstable hashcode or equals on the key, or improper use of 
> {{reinterpretAsKeyedStream}}.
>   
>  My key is a string and I don't use {{reinterpretAsKeyedStream}}.
>  
> {code:java}
> senv
>   .addSource(source)
>   .flatMap(…)
>   .filterWith { case (metadata, _, _) => … }
>   .assignTimestampsAndWatermarks(new 
> BoundedOutOfOrdernessTimestampExtractor(…))
>   .keyingBy { case (meta, _) => meta.toPathString }
>   .process(new TruncateLargeSessions(config.sessionSizeLimit))
>   .keyingBy { case (meta, _) => meta.toPathString }
>   .window(EventTimeSessionWindows.withGap(Time.of(…)))
>   .process(new ProcessSession(sessionPlayback, config))
>   .addSink(sink){code}
>  
> {code:java}
> org.apache.flink.util.FlinkException: Triggering a savepoint for the job 
> 962fc8e984e7ca1ed65a038aa62ce124 failed.
>   at 
> org.apache.flink.client.cli.CliFrontend.triggerSavepoint(CliFrontend.java:633)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$savepoint$9(CliFrontend.java:611)
>   at 
> org.apache.flink.client.cli.CliFrontend.runClusterAction(CliFrontend.java:843)
>   at 
> org.apache.flink.client.cli.CliFrontend.savepoint(CliFrontend.java:608)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:910)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:968)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844)
>   at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:968)
> Caused by: java.util.concurrent.CompletionException: 
> java.util.concurrent.CompletionException: 
> org.apache.flink.runtime.checkpoint.CheckpointException: The job has failed.
>   at 
> org.apache.flink.runtime.scheduler.SchedulerBase.lambda$triggerSavepoint$3(SchedulerBase.java:744)
>   at 
> java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822)
>   at 
> java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797)
>   at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:397)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcA

[jira] [Closed] (FLINK-18558) Introduce collect iterator with at least once semantics and exactly once semantics without fault tolerance

2020-07-22 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-18558.

  Assignee: Caizhi Weng
Resolution: Fixed

master: 791e276c8346a49130cb096bafa128d7f1231236

> Introduce collect iterator with at least once semantics and exactly once 
> semantics without fault tolerance
> --
>
> Key: FLINK-18558
> URL: https://issues.apache.org/jira/browse/FLINK-18558
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream, Table SQL / API
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Currently {{TableResult#collect}} and {{DataStreamUtils#collect}} can only 
> produce results if users explicitly enable checkpointing for infinite 
> streaming jobs. It would be strange to require users to do so if they just 
> want to take a look at their data.
> We should introduce a collect iterator with at-least-once semantics and an 
> exactly-once iterator without fault tolerance. When calling the {{collect}} 
> method, we should automatically pick an iterator for the user:
> * If the user does not explicitly enable checkpointing, we use the 
> exactly-once iterator without fault tolerance. That is to say, the iterator 
> will throw an exception once the job restarts.
> * If the user explicitly enables exactly-once checkpointing, we use the 
> current implementation of the collect iterator.
> * If the user explicitly enables at-least-once checkpointing, we use the 
> at-least-once iterator. That is to say, the iterator ignores both checkpoint 
> information and job restarts.
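The selection logic described above can be sketched in plain Java. This is a minimal illustration only; the enum, class, and method names are assumptions for the sketch, not Flink's actual internal API.

```java
// Minimal sketch of the proposed iterator selection. The names here are
// illustrative; they are not Flink's actual internals.
public class CollectIteratorSelection {
    enum CheckpointingMode { NONE, AT_LEAST_ONCE, EXACTLY_ONCE }

    static String pickIterator(CheckpointingMode mode) {
        switch (mode) {
            case NONE:
                // No checkpointing configured: exactly-once without fault
                // tolerance, i.e. the iterator fails once the job restarts.
                return "exactly-once, no fault tolerance";
            case AT_LEAST_ONCE:
                // At-least-once checkpointing: the iterator ignores checkpoint
                // information and job restarts, so results may repeat.
                return "at-least-once";
            case EXACTLY_ONCE:
                // Exactly-once checkpointing: the current checkpoint-aligned
                // collect iterator.
                return "exactly-once";
        }
        throw new IllegalArgumentException("unknown mode: " + mode);
    }

    public static void main(String[] args) {
        for (CheckpointingMode m : CheckpointingMode.values()) {
            System.out.println(m + " -> " + pickIterator(m));
        }
    }
}
```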



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi merged pull request #12867: [FLINK-18558][streaming] Introduce collect iterator with at least once semantics and exactly once semantics without fault tolerance

2020-07-22 Thread GitBox


JingsongLi merged pull request #12867:
URL: https://github.com/apache/flink/pull/12867


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12955: [FLINK-18632][table-planner-blink] Assign row kind from input to outp…

2020-07-22 Thread GitBox


flinkbot edited a comment on pull request #12955:
URL: https://github.com/apache/flink/pull/12955#issuecomment-662294282


   
   ## CI report:
   
   * cb1a5e6814cfe93cc0f075d5b11859dbe7d18c11 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4714)
 
   * 72d503c5afb9de060729cb9eab7cc9da2c4f2da1 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-18665) Filesystem connector should use TableSchema exclude computed columns

2020-07-22 Thread Congxian Qiu(klion26) (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163229#comment-17163229
 ] 

Congxian Qiu(klion26) commented on FLINK-18665:
---

[~Leonard Xu] thanks for the update :)

> Filesystem connector should use TableSchema exclude computed columns
> 
>
> Key: FLINK-18665
> URL: https://issues.apache.org/jira/browse/FLINK-18665
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem, Table SQL / Ecosystem
>Affects Versions: 1.11.1
>Reporter: Jark Wu
>Assignee: Leonard Xu
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.11.2
>
>
> This is reported in 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/How-to-use-a-nested-column-for-CREATE-TABLE-PARTITIONED-BY-td36796.html
> {code}
> create table navi (
>   a STRING,
>   location ROW<lastUpdateTime BIGINT, transId STRING>
> ) with (
>   'connector' = 'filesystem',
>   'path' = 'east-out',
>   'format' = 'json'
> )
> CREATE TABLE output (
>   `partition` AS location.transId
> ) PARTITIONED BY (`partition`)
> WITH (
>   'connector' = 'filesystem',
>   'path' = 'east-out',
>   'format' = 'json'
> ) LIKE navi (EXCLUDING ALL)
> tEnv.sqlQuery("SELECT type, location FROM navi").executeInsert("output")
> {code}
> It throws the following exception 
> {code}
> Exception in thread "main" org.apache.flink.table.api.ValidationException: 
> The field count of logical schema of the table does not match with the field 
> count of physical schema
> . The logical schema: [STRING,ROW<`lastUpdateTime` BIGINT, `transId` STRING>]
> The physical schema: [STRING,ROW<`lastUpdateTime` BIGINT, `transId` 
> STRING>,STRING].
> {code}
> The reason is that {{FileSystemTableFactory#createTableSource}} should use the 
> schema excluding computed columns, not the original catalog table schema.
> [1]: 
> https://github.com/apache/flink/blob/master/flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/filesystem/FileSystemTableFactory.java#L78
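The fix idea above can be sketched as follows: derive the physical (source) schema by dropping computed columns from the catalog schema before handing it to the filesystem source. The class and field names here are illustrative, not Flink's actual types.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Sketch of the fix idea: computed columns exist only in the logical schema,
// so the source should see the catalog schema with them filtered out.
public class SchemaFilter {
    static class Column {
        final String name;
        final boolean computed;
        Column(String name, boolean computed) {
            this.name = name;
            this.computed = computed;
        }
    }

    static List<String> physicalFieldNames(List<Column> catalogSchema) {
        return catalogSchema.stream()
                .filter(c -> !c.computed)       // computed columns have no physical storage
                .map(c -> c.name)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Column> schema = Arrays.asList(
                new Column("a", false),
                new Column("location", false),
                new Column("partition", true)); // `partition` AS location.transId
        System.out.println(physicalFieldNames(schema)); // [a, location]
    }
}
```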





[jira] [Created] (FLINK-18679) It doesn't support to join two tables containing the same field names in Table API

2020-07-22 Thread Dian Fu (Jira)
Dian Fu created FLINK-18679:
---

 Summary: It doesn't support to join two tables containing the same 
field names in Table API
 Key: FLINK-18679
 URL: https://issues.apache.org/jira/browse/FLINK-18679
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API
Reporter: Dian Fu


For example, if we have two tables which have the same field named "int1" and 
want to join the two tables on this field:
{code:java}
val t1 = util.addTable[(Int, Long, String)]('int1, 'long1, 'string1)
val t2 = util.addTable[(Int, Long, String)]('int1, 'long2, 'string2)
{code}
In SQL, we could do it as follows:
{code:java}
SELECT xxx
FROM t1 JOIN t2 ON t1.int1 = t2.int1
{code}
However, this is not possible in Table API. It lacks a way to specify a column 
from one table. We have to rename the field name from one table to make sure 
that all the field names are unique before joining them. This is very 
inconvenient.
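The rename workaround mentioned above can be illustrated in plain Java, with maps standing in for Table API rows (the row representation and method names are illustrative, not Flink's API): rename the clashing field in one table first, then every field name in the joined output is unique.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Plain-Java illustration of the workaround: rename the clashing field in one
// table before joining, so the joined row has no duplicate field names.
public class RenameBeforeJoin {
    static Map<String, Object> renameKey(Map<String, Object> row, String from, String to) {
        Map<String, Object> out = new HashMap<>(row);
        out.put(to, out.remove(from));
        return out;
    }

    static List<Map<String, Object>> join(List<Map<String, Object>> left, String leftKey,
                                          List<Map<String, Object>> right, String rightKey) {
        List<Map<String, Object>> result = new ArrayList<>();
        for (Map<String, Object> l : left) {
            for (Map<String, Object> r : right) {
                if (l.get(leftKey).equals(r.get(rightKey))) {
                    Map<String, Object> joined = new HashMap<>(l);
                    joined.putAll(r);           // no clash after the rename
                    result.add(joined);
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Object> t1Row = new HashMap<>();
        t1Row.put("int1", 1);
        t1Row.put("string1", "a");
        Map<String, Object> t2Row = new HashMap<>();
        t2Row.put("int1", 1);
        t2Row.put("string2", "b");

        // rename the clashing key on the right side first
        List<Map<String, Object>> joined = join(
                List.of(t1Row), "int1",
                List.of(renameKey(t2Row, "int1", "int1_r")), "int1_r");
        System.out.println(joined.size()); // 1
    }
}
```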





[jira] [Updated] (FLINK-18679) It doesn't support to join two tables containing the same field names in Table API

2020-07-22 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-18679:

Description: 
For example, if we have two tables which have the same field name "int1" and 
want to join the two tables on this field:
{code:java}
val t1 = util.addTable[(Int, Long, String)]('int1, 'long1, 'string1)
val t2 = util.addTable[(Int, Long, String)]('int1, 'long2, 'string2)
{code}
In SQL, we could do it as follows:
{code:java}
SELECT xxx
FROM t1 JOIN t2 ON t1.int1 = t2.int1
{code}
However, this is not possible in Table API. It lacks a way to specify a column 
from one table. We have to rename the field name from one table to make sure 
that all the field names are unique before joining them. This is very 
inconvenient.

  was:
For example, if we have two tables which have the same field named "int1" and 
want to join the two tables on this field:
{code:java}
val t1 = util.addTable[(Int, Long, String)]('int1, 'long1, 'string1)
val t2 = util.addTable[(Int, Long, String)]('int1, 'long2, 'string2)
{code}
In SQL, we could do it as follows:
{code:java}
SELECT xxx
FROM t1 JOIN t2 ON t1.int1 = t2.int1
{code}
However, this is not possible in Table API. It lacks a way to specify a column 
from one table. We have to rename the field name from one table to make sure 
that all the field names are unique before joining them. This is very 
inconvenient.


> It doesn't support to join two tables containing the same field names in 
> Table API
> --
>
> Key: FLINK-18679
> URL: https://issues.apache.org/jira/browse/FLINK-18679
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Dian Fu
>Priority: Major
>
> For example, if we have two tables which have the same field name "int1" and 
> want to join the two tables on this field:
> {code:java}
> val t1 = util.addTable[(Int, Long, String)]('int1, 'long1, 'string1)
> val t2 = util.addTable[(Int, Long, String)]('int1, 'long2, 'string2)
> {code}
> In SQL, we could do it as follows:
> {code:java}
> SELECT xxx
> FROM t1 JOIN t2 ON t1.int1 = t2.int1
> {code}
> However, this is not possible in Table API. It lacks a way to specify a 
> column from one table. We have to rename the field name from one table to 
> make sure that all the field names are unique before joining them. This is 
> very inconvenient.





[GitHub] [flink] klion26 commented on a change in pull request #12952: [Hotfix] fix file event_driven.zh.md did't adding title anchor.

2020-07-22 Thread GitBox


klion26 commented on a change in pull request #12952:
URL: https://github.com/apache/flink/pull/12952#discussion_r459216884



##
File path: docs/learn-flink/event_driven.zh.md
##
@@ -120,7 +120,7 @@ public static class PseudoWindow extends
   除此之外,`processElement` 和 `onTimer` 都提供了一个上下文对象,该对象可用于与 `TimerService` 交互。
   这两个回调还传递了一个可用于发出结果的 `Collector`。
 
-
+

Review comment:
   I suggest changing this to `the-open-method`; the [link in the English 
version](http://localhost:4000/learn-flink/event_driven.html#the-open-method) uses the same anchor, so it is best to keep this consistent with the English docs.

##
File path: docs/learn-flink/event_driven.zh.md
##
@@ -183,7 +183,7 @@ public void processElement(
 * 本例使用一个 `MapState`,其中 keys 是时间戳(timestamp),并为同一时间戳设置一个 Timer。
   这是一种常见的模式;它使得在 Timer 触发时查找相关信息变得简单高效。
 
-
+

Review comment:
   I suggest changing this to `the-ontimer-method`, for the same reason as above.

##
File path: docs/learn-flink/event_driven.zh.md
##
@@ -183,7 +183,7 @@ public void processElement(
 * 本例使用一个 `MapState`,其中 keys 是时间戳(timestamp),并为同一时间戳设置一个 Timer。
   这是一种常见的模式;它使得在 Timer 触发时查找相关信息变得简单高效。
 
-
+
  `onTimer()` 方法

Review comment:
   Also, please change `name="Example"` on line 239 to `name=example-1`, so that 
1) it stays in sync with the English version, and 2) the page does not end up with two `Example` anchors.

##
File path: docs/learn-flink/event_driven.zh.md
##
@@ -142,7 +142,7 @@ public void open(Configuration conf) {
 实际上,如果 Watermark 延迟比窗口长度长得多,则可能有多个窗口同时打开,而不仅仅是两个。
 此实现通过使用 `MapState` 来支持处理这一点,该 `MapState` 将每个窗口的结束时间戳映射到该窗口的小费总和。
 
-
+

Review comment:
   I suggest changing this to `the-processelement-method`, for the same reason as above.









[GitHub] [flink] flinkbot edited a comment on pull request #12958: [FLINK-18625] [ Runtime / Coordination] Maintain redundant taskmanagers to speed up failover

2020-07-22 Thread GitBox


flinkbot edited a comment on pull request #12958:
URL: https://github.com/apache/flink/pull/12958#issuecomment-662378507


   
   ## CI report:
   
   * ca4d714f273259e0c1eb42238d455d331ad93996 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4747)
 
   
   







[GitHub] [flink] lzy3261944 commented on pull request #12955: [FLINK-18632][table-planner-blink] Assign row kind from input to outp…

2020-07-22 Thread GitBox


lzy3261944 commented on pull request #12955:
URL: https://github.com/apache/flink/pull/12955#issuecomment-662820303


   > I think you can add tests in `StreamTableEnvironmentITCase` and reuse the 
POJO classes there.
   
   OK. I have already added a test case.







[GitHub] [flink] TsReaper commented on pull request #12867: [FLINK-18558][streaming] Introduce collect iterator with at least once semantics and exactly once semantics without fault tolerance

2020-07-22 Thread GitBox


TsReaper commented on pull request #12867:
URL: https://github.com/apache/flink/pull/12867#issuecomment-662810921


   Azure passed in 
https://dev.azure.com/tsreaper96/Flink/_build/results?buildId=51







[jira] [Commented] (FLINK-18678) Hive connector fails to create vector orc reader if user specifies incorrect hive version

2020-07-22 Thread Rui Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163199#comment-17163199
 ] 

Rui Li commented on FLINK-18678:


I think we should investigate whether some orc shim methods can be consolidated 
in order to be more tolerant. If that's not possible, we should at least update 
our docs to clarify when and how users should set the Hive version.
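As a rough illustration of why the declared version matters: the shim picked for the declared Hive version decides which ORC reader API the connector uses. A sketch of such version-based selection follows; `OrcShimV200` appears in the stack trace below, but the other shim name and the exact version boundary are assumptions, not Flink's actual mapping.

```java
// Sketch of version-based ORC shim selection. Only OrcShimV200 is confirmed
// by the stack trace; the boundary and the newer shim name are assumptions.
public class ShimSelection {
    static String shimFor(int major, int minor) {
        // Assumption: Hive 2.3+ reads ORC through the standalone
        // org.apache.orc API; older declared versions use the legacy path.
        if (major > 2 || (major == 2 && minor >= 3)) {
            return "OrcShimV230";
        }
        return "OrcShimV200";
    }

    public static void main(String[] args) {
        // Declaring Hive 2.1.1 (as the user did) selects the old shim, even
        // though the bundled connector jar was built for Hive 2.2.
        System.out.println(shimFor(2, 1)); // OrcShimV200
    }
}
```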

> Hive connector fails to create vector orc reader if user specifies incorrect 
> hive version
> -
>
> Key: FLINK-18678
> URL: https://issues.apache.org/jira/browse/FLINK-18678
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive, Formats (JSON, Avro, Parquet, ORC, 
> SequenceFile)
>Affects Versions: 1.11.1
>Reporter: Rui Li
>Priority: Major
>
> Issue reported by a user. The user's Hive deployment is 2.1.1 and uses 
> {{flink-sql-connector-hive-2.2.0_2.11-1.11.0.jar}} in the Flink lib directory. 
> If the user specifies the Hive version as 2.1.1, creating the vectorized ORC 
> reader fails with this exception:
> {noformat}
> java.lang.ClassCastException: org.apache.hadoop.hive.ql.io.orc.ReaderImpl 
> cannot be cast to org.apache.orc.Reader
>   at org.apache.flink.orc.shim.OrcShimV200.createReader(OrcShimV200.java:63) 
> ~[flink-sql-connector-hive-2.2.0_2.11-1.11.0.jar:1.11.0]
>   at 
> org.apache.flink.orc.shim.OrcShimV200.createRecordReader(OrcShimV200.java:98) 
> ~[flink-sql-connector-hive-2.2.0_2.11-1.11.0.jar:1.11.0]
>   at org.apache.flink.orc.OrcSplitReader.<init>(OrcSplitReader.java:73) 
> ~[flink-sql-connector-hive-2.2.0_2.11-1.11.0.jar:1.11.0]
>   at 
> org.apache.flink.orc.OrcColumnarRowSplitReader.<init>(OrcColumnarRowSplitReader.java:54)
>  ~[flink-sql-connector-hive-2.2.0_2.11-1.11.0.jar:1.11.0]
>   at 
> org.apache.flink.orc.OrcSplitReaderUtil.genPartColumnarRowReader(OrcSplitReaderUtil.java:91)
>  ~[flink-sql-connector-hive-2.2.0_2.11-1.11.0.jar:1.11.0]
> ..
> {noformat}





[jira] [Updated] (FLINK-18678) Hive connector fails to create vector orc reader if user specifies incorrect hive version

2020-07-22 Thread Rui Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated FLINK-18678:
---
Affects Version/s: 1.11.1

> Hive connector fails to create vector orc reader if user specifies incorrect 
> hive version
> -
>
> Key: FLINK-18678
> URL: https://issues.apache.org/jira/browse/FLINK-18678
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive, Formats (JSON, Avro, Parquet, ORC, 
> SequenceFile)
>Affects Versions: 1.11.1
>Reporter: Rui Li
>Priority: Major
>
> Issue reported by a user. The user's Hive deployment is 2.1.1 and uses 
> {{flink-sql-connector-hive-2.2.0_2.11-1.11.0.jar}} in the Flink lib directory. 
> If the user specifies the Hive version as 2.1.1, creating the vectorized ORC 
> reader fails with this exception:
> {noformat}
> java.lang.ClassCastException: org.apache.hadoop.hive.ql.io.orc.ReaderImpl 
> cannot be cast to org.apache.orc.Reader
>   at org.apache.flink.orc.shim.OrcShimV200.createReader(OrcShimV200.java:63) 
> ~[flink-sql-connector-hive-2.2.0_2.11-1.11.0.jar:1.11.0]
>   at 
> org.apache.flink.orc.shim.OrcShimV200.createRecordReader(OrcShimV200.java:98) 
> ~[flink-sql-connector-hive-2.2.0_2.11-1.11.0.jar:1.11.0]
>   at org.apache.flink.orc.OrcSplitReader.<init>(OrcSplitReader.java:73) 
> ~[flink-sql-connector-hive-2.2.0_2.11-1.11.0.jar:1.11.0]
>   at 
> org.apache.flink.orc.OrcColumnarRowSplitReader.<init>(OrcColumnarRowSplitReader.java:54)
>  ~[flink-sql-connector-hive-2.2.0_2.11-1.11.0.jar:1.11.0]
>   at 
> org.apache.flink.orc.OrcSplitReaderUtil.genPartColumnarRowReader(OrcSplitReaderUtil.java:91)
>  ~[flink-sql-connector-hive-2.2.0_2.11-1.11.0.jar:1.11.0]
> ..
> {noformat}





[jira] [Updated] (FLINK-18678) Hive connector fails to create vector orc reader if user specifies incorrect hive version

2020-07-22 Thread Rui Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated FLINK-18678:
---
Description: 
Issue reported by a user. The user's Hive deployment is 2.1.1 and uses 
{{flink-sql-connector-hive-2.2.0_2.11-1.11.0.jar}} in the Flink lib directory. 
If the user specifies the Hive version as 2.1.1, creating the vectorized ORC 
reader fails with this exception:
{noformat}
java.lang.ClassCastException: org.apache.hadoop.hive.ql.io.orc.ReaderImpl 
cannot be cast to org.apache.orc.Reader
  at org.apache.flink.orc.shim.OrcShimV200.createReader(OrcShimV200.java:63) 
~[flink-sql-connector-hive-2.2.0_2.11-1.11.0.jar:1.11.0]
  at 
org.apache.flink.orc.shim.OrcShimV200.createRecordReader(OrcShimV200.java:98) 
~[flink-sql-connector-hive-2.2.0_2.11-1.11.0.jar:1.11.0]
  at org.apache.flink.orc.OrcSplitReader.<init>(OrcSplitReader.java:73) 
~[flink-sql-connector-hive-2.2.0_2.11-1.11.0.jar:1.11.0]
  at 
org.apache.flink.orc.OrcColumnarRowSplitReader.<init>(OrcColumnarRowSplitReader.java:54)
 ~[flink-sql-connector-hive-2.2.0_2.11-1.11.0.jar:1.11.0]
  at 
org.apache.flink.orc.OrcSplitReaderUtil.genPartColumnarRowReader(OrcSplitReaderUtil.java:91)
 ~[flink-sql-connector-hive-2.2.0_2.11-1.11.0.jar:1.11.0]
..
{noformat}

> Hive connector fails to create vector orc reader if user specifies incorrect 
> hive version
> -
>
> Key: FLINK-18678
> URL: https://issues.apache.org/jira/browse/FLINK-18678
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive, Formats (JSON, Avro, Parquet, ORC, 
> SequenceFile)
>Reporter: Rui Li
>Priority: Major
>
> Issue reported by a user. The user's Hive deployment is 2.1.1 and uses 
> {{flink-sql-connector-hive-2.2.0_2.11-1.11.0.jar}} in the Flink lib directory. 
> If the user specifies the Hive version as 2.1.1, creating the vectorized ORC 
> reader fails with this exception:
> {noformat}
> java.lang.ClassCastException: org.apache.hadoop.hive.ql.io.orc.ReaderImpl 
> cannot be cast to org.apache.orc.Reader
>   at org.apache.flink.orc.shim.OrcShimV200.createReader(OrcShimV200.java:63) 
> ~[flink-sql-connector-hive-2.2.0_2.11-1.11.0.jar:1.11.0]
>   at 
> org.apache.flink.orc.shim.OrcShimV200.createRecordReader(OrcShimV200.java:98) 
> ~[flink-sql-connector-hive-2.2.0_2.11-1.11.0.jar:1.11.0]
>   at org.apache.flink.orc.OrcSplitReader.<init>(OrcSplitReader.java:73) 
> ~[flink-sql-connector-hive-2.2.0_2.11-1.11.0.jar:1.11.0]
>   at 
> org.apache.flink.orc.OrcColumnarRowSplitReader.<init>(OrcColumnarRowSplitReader.java:54)
>  ~[flink-sql-connector-hive-2.2.0_2.11-1.11.0.jar:1.11.0]
>   at 
> org.apache.flink.orc.OrcSplitReaderUtil.genPartColumnarRowReader(OrcSplitReaderUtil.java:91)
>  ~[flink-sql-connector-hive-2.2.0_2.11-1.11.0.jar:1.11.0]
> ..
> {noformat}





[jira] [Created] (FLINK-18678) Hive connector fails to create vector orc reader if user specifies incorrect hive version

2020-07-22 Thread Rui Li (Jira)
Rui Li created FLINK-18678:
--

 Summary: Hive connector fails to create vector orc reader if user 
specifies incorrect hive version
 Key: FLINK-18678
 URL: https://issues.apache.org/jira/browse/FLINK-18678
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Hive, Formats (JSON, Avro, Parquet, ORC, 
SequenceFile)
Reporter: Rui Li








[GitHub] [flink] flinkbot edited a comment on pull request #12957: [FLINK-18500][table] Make the legacy planner exception more clear whe…

2020-07-22 Thread GitBox


flinkbot edited a comment on pull request #12957:
URL: https://github.com/apache/flink/pull/12957#issuecomment-662341125


   
   ## CI report:
   
   * 491363e1ed8d85b680cff74a8412fe4e878342b8 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4746)
 
   * 6142f8bf5f9e4e146da83e7bea9977359fe7f60f Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4752)
 
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #12957: [FLINK-18500][table] Make the legacy planner exception more clear whe…

2020-07-22 Thread GitBox


flinkbot edited a comment on pull request #12957:
URL: https://github.com/apache/flink/pull/12957#issuecomment-662341125


   
   ## CI report:
   
   * 491363e1ed8d85b680cff74a8412fe4e878342b8 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4746)
 
   * 6142f8bf5f9e4e146da83e7bea9977359fe7f60f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12964: [FLINK-17426][blink plannger] Dynamic Source supportsLimit pushdown

2020-07-22 Thread GitBox


flinkbot edited a comment on pull request #12964:
URL: https://github.com/apache/flink/pull/12964#issuecomment-662793989


   
   ## CI report:
   
   * 984b8a182e5a5cc64a9e1957710e9b9fd4768a56 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4751)
 
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #12867: [FLINK-18558][streaming] Introduce collect iterator with at least once semantics and exactly once semantics without fault tolerance

2020-07-22 Thread GitBox


flinkbot edited a comment on pull request #12867:
URL: https://github.com/apache/flink/pull/12867#issuecomment-656590407


   
   ## CI report:
   
   * aa18ae06c64ff88beba4bfe780f7de03d0dc77f1 UNKNOWN
   * a1ea81a9dd67cab4c8b483dfe3eed51d7ab4a9e2 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4748)
 
   
   







[jira] [Commented] (FLINK-18262) PyFlink end-to-end test stalls

2020-07-22 Thread Wei Zhong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163187#comment-17163187
 ] 

Wei Zhong commented on FLINK-18262:
---

Hi [~chesnay], [~rmetzger], thanks for your reply. It may be too arbitrary to 
say that this is a "tar" command bug, but the script does hang at the tar 
command.

I did add "set -x" and the verbose option of the "tar" command to the script, 
see 
[https://github.com/WeiZhong94/flink/commit/8c6c25e0b833a1d13db28aa2ef28b92e476ae801]
 and 
[https://github.com/WeiZhong94/flink/commit/95bbfa2112881a0cc2b39639f36803e5631137a7],
 and found that the script hangs while extracting the Flink tarball.

After rethinking this issue, I agree with you that it may be caused by other 
environmental issues, such as overly large files. Unfortunately, the links in 
my last comment seem broken, and newly pushed builds of those old commits now 
always seem to fail.

I have rebased those commits onto the latest master branch and tried to 
reproduce this issue, but it seems our e2e tests are much more stable than a 
month ago (only 2 failures in 41 builds in my own Azure pipelines, neither 
caused by this issue). I'll keep an eye on this issue and follow up promptly 
the next time it appears.

> PyFlink end-to-end test stalls
> --
>
> Key: FLINK-18262
> URL: https://issues.apache.org/jira/browse/FLINK-18262
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Tests
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3299&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5
> {code}
> 2020-06-11T17:25:37.2919869Z We now have 2 NodeManagers up.
> 2020-06-11T18:22:11.3398295Z ##[error]The operation was canceled.
> 2020-06-11T18:22:11.3411819Z ##[section]Finishing: Run e2e tests
> {code}





[jira] [Commented] (FLINK-18637) Key group is not in KeyGroupRange

2020-07-22 Thread Yun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163186#comment-17163186
 ] 

Yun Tang commented on FLINK-18637:
--

You could cache the value from the value state in a separate field in the 
#processElement function, and have the metric gauge read only from that field. 
By doing this, there would be no concurrent access from another thread.
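A minimal sketch of that suggestion, with illustrative class and field names: cache the latest value in a plain field when `processElement` updates the state, and let the gauge read only that field, so the metrics thread never touches the keyed state.

```java
// Sketch of the suggestion above: the gauge reads a cached field instead of
// the keyed value state, so no state access happens on the metrics thread.
public class CachedGaugeSketch {
    static class MyFunction {
        private volatile long lastValue;      // read by the metrics thread

        void processElement(long value) {
            // ... valueState.update(value) would happen here in Flink ...
            lastValue = value;                // cache for the gauge
        }

        long gaugeValue() {                   // what Gauge#getValue would return
            return lastValue;
        }
    }

    public static void main(String[] args) {
        MyFunction fn = new MyFunction();
        fn.processElement(42L);
        System.out.println(fn.gaugeValue()); // 42
    }
}
```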

> Key group is not in KeyGroupRange
> -
>
> Key: FLINK-18637
> URL: https://issues.apache.org/jira/browse/FLINK-18637
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
> Environment: Version: 1.10.0, Rev:, Date:
> OS current user: yarn
>  Current Hadoop/Kerberos user: hadoop
>  JVM: Java HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.141-b15
>  Maximum heap size: 28960 MiBytes
>  JAVA_HOME: /usr/java/jdk1.8.0_141/jre
>  Hadoop version: 2.8.5-amzn-6
>  JVM Options:
>  -Xmx30360049728
>  -Xms30360049728
>  -XX:MaxDirectMemorySize=4429185024
>  -XX:MaxMetaspaceSize=1073741824
>  -XX:+UseG1GC
>  -XX:+UnlockDiagnosticVMOptions
>  -XX:+G1SummarizeConcMark
>  -verbose:gc
>  -XX:+PrintGCDetails
>  -XX:+PrintGCDateStamps
>  -XX:+UnlockCommercialFeatures
>  -XX:+FlightRecorder
>  -XX:+DebugNonSafepoints
>  
> -XX:FlightRecorderOptions=defaultrecording=true,settings=/home/hadoop/heap.jfc,dumponexit=true,dumponexitpath=/var/lib/hadoop-yarn/recording.jfr,loglevel=info
>  
> -Dlog.file=/var/log/hadoop-yarn/containers/application_1593935560662_0002/container_1593935560662_0002_01_02/taskmanager.log
>  -Dlog4j.configuration=file:./log4j.properties
>  Program Arguments:
>  -Dtaskmanager.memory.framework.off-heap.size=134217728b
>  -Dtaskmanager.memory.network.max=1073741824b
>  -Dtaskmanager.memory.network.min=1073741824b
>  -Dtaskmanager.memory.framework.heap.size=134217728b
>  -Dtaskmanager.memory.managed.size=23192823744b
>  -Dtaskmanager.cpu.cores=7.0
>  -Dtaskmanager.memory.task.heap.size=30225832000b
>  -Dtaskmanager.memory.task.off-heap.size=3221225472b
>  --configDir .
>  
> -Djobmanager.rpc.address=ip-10-180-30-250.us-west-2.compute.internal
>  -Dweb.port=0
>  -Dweb.tmpdir=/tmp/flink-web-64f613cf-bf04-4a09-8c14-75c31b619574
>  -Djobmanager.rpc.port=33739
>  -Drest.address=ip-10-180-30-250.us-west-2.compute.internal
>Reporter: Ori Popowski
>Priority: Major
>
> I'm getting this error when creating a savepoint. I've read in 
> https://issues.apache.org/jira/browse/FLINK-16193 that it's caused by 
> unstable hashcode or equals on the key, or improper use of 
> {{reinterpretAsKeyedStream}}.
>   
>  My key is a string and I don't use {{reinterpretAsKeyedStream}}.
>  
> {code:java}
> senv
>   .addSource(source)
>   .flatMap(…)
>   .filterWith { case (metadata, _, _) => … }
>   .assignTimestampsAndWatermarks(new 
> BoundedOutOfOrdernessTimestampExtractor(…))
>   .keyingBy { case (meta, _) => meta.toPathString }
>   .process(new TruncateLargeSessions(config.sessionSizeLimit))
>   .keyingBy { case (meta, _) => meta.toPathString }
>   .window(EventTimeSessionWindows.withGap(Time.of(…)))
>   .process(new ProcessSession(sessionPlayback, config))
>   .addSink(sink){code}
>  
> {code:java}
> org.apache.flink.util.FlinkException: Triggering a savepoint for the job 
> 962fc8e984e7ca1ed65a038aa62ce124 failed.
>   at 
> org.apache.flink.client.cli.CliFrontend.triggerSavepoint(CliFrontend.java:633)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$savepoint$9(CliFrontend.java:611)
>   at 
> org.apache.flink.client.cli.CliFrontend.runClusterAction(CliFrontend.java:843)
>   at 
> org.apache.flink.client.cli.CliFrontend.savepoint(CliFrontend.java:608)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:910)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:968)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844)
>   at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:968)
> Caused by: java.util.concurrent.CompletionException: 
> java.util.concurrent.CompletionException: 
> org.apache.flink.runtime.checkpoint.CheckpointException: The job has failed.
>   at 
> org.apache.flink.runtime.scheduler.SchedulerBase.lambda$triggerSavepoint$3(SchedulerBase.java:744)
>   at 
> java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822)
>   at 
> java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797)
>   at 
> java.util.concurrent.CompletableFuture
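For readers who hit the same error: the "unstable hashCode/equals on the key" failure mode from FLINK-16193 is easy to see outside Flink. Flink routes records to key groups via the key's hashCode, so two logically equal keys that hash differently corrupt keyed-state routing and savepoints. A self-contained Java sketch (class names are hypothetical; note that a String key like the reporter's `meta.toPathString` above is already stable):

```java
import java.util.Objects;

public class KeyStability {
    // BAD: inherits Object.equals/hashCode, so the hash depends on object
    // identity, not value -- two equal paths get different hashes.
    static final class UnstableKey {
        final String path;
        UnstableKey(String path) { this.path = path; }
    }

    // GOOD: value-based equals/hashCode -- deterministic for the same path.
    static final class StableKey {
        final String path;
        StableKey(String path) { this.path = path; }
        @Override public boolean equals(Object o) {
            return o instanceof StableKey && ((StableKey) o).path.equals(path);
        }
        @Override public int hashCode() { return Objects.hash(path); }
    }

    public static void main(String[] args) {
        UnstableKey u1 = new UnstableKey("a/b"), u2 = new UnstableKey("a/b");
        StableKey s1 = new StableKey("a/b"), s2 = new StableKey("a/b");
        // Identity hashes of u1/u2 almost certainly differ; stable hashes never do.
        System.out.println("unstable equal hashes: " + (u1.hashCode() == u2.hashCode()));
        System.out.println("stable equal hashes:   " + (s1.hashCode() == s2.hashCode()));
    }
}
```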

[GitHub] [flink] flinkbot edited a comment on pull request #12957: [FLINK-18500][table] Make the legacy planner exception more clear whe…

2020-07-22 Thread GitBox


flinkbot edited a comment on pull request #12957:
URL: https://github.com/apache/flink/pull/12957#issuecomment-662341125


   
   ## CI report:
   
   * 491363e1ed8d85b680cff74a8412fe4e878342b8 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4746)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18335) NotifyCheckpointAbortedITCase.testNotifyCheckpointAborted time outs

2020-07-22 Thread Yun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163183#comment-17163183
 ] 

Yun Tang commented on FLINK-18335:
--

[~rmetzger] I cannot reproduce this locally, and I have also triggered the CI 
more than 40 times ( 
https://dev.azure.com/myasuka/flink/_build/results?buildId=183&view=results and 
more); however, I still cannot reproduce it.

Since the notify-checkpoint-aborted message is sent asynchronously, a timeout 
is always possible. I think we could create another PR to increase the timeout 
and add the necessary logs, so that we know what went wrong the next time this 
error occurs. What do you think?

>  NotifyCheckpointAbortedITCase.testNotifyCheckpointAborted time outs
> 
>
> Key: FLINK-18335
> URL: https://issues.apache.org/jira/browse/FLINK-18335
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Tests
>Affects Versions: 1.12.0
>Reporter: Piotr Nowojski
>Assignee: Yun Tang
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3582&view=logs&j=5c8e7682-d68f-54d1-16a2-a09310218a49&t=f508e270-48d6-5f1e-3138-42a17e0714f0
> {noformat}
> [ERROR] Errors: 
> [ERROR]   
> NotifyCheckpointAbortedITCase.testNotifyCheckpointAborted:182->verifyAllOperatorsNotifyAborted:195->Object.wait:502->Object.wait:-2
>  » TestTimedOut
> {noformat}
> CC [~yunta]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #12964: [FLINK-17426][blink plannger] Dynamic Source supportsLimit pushdown

2020-07-22 Thread GitBox


flinkbot commented on pull request #12964:
URL: https://github.com/apache/flink/pull/12964#issuecomment-662793989


   
   ## CI report:
   
   * 984b8a182e5a5cc64a9e1957710e9b9fd4768a56 UNKNOWN
   
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #12867: [FLINK-18558][streaming] Introduce collect iterator with at least once semantics and exactly once semantics without fault tolerance

2020-07-22 Thread GitBox


flinkbot edited a comment on pull request #12867:
URL: https://github.com/apache/flink/pull/12867#issuecomment-656590407


   
   ## CI report:
   
   * aa18ae06c64ff88beba4bfe780f7de03d0dc77f1 UNKNOWN
   * fb16ba0168cae5742f27c87acae3c477dd42150b Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4717)
 
   * a1ea81a9dd67cab4c8b483dfe3eed51d7ab4a9e2 UNKNOWN
   
   
   







[jira] [Commented] (FLINK-17260) StreamingKafkaITCase failure on Azure

2020-07-22 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163181#comment-17163181
 ] 

Dian Fu commented on FLINK-17260:
-

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4743&view=logs&j=08866332-78f7-59e4-4f7e-49a56faa3179&t=7f606211-1454-543c-70ab-c7a028a1ce8c]
{code}
2020-07-22T23:58:46.0860584Z [ERROR] testKafka[0: 
kafka-version:0.10.2.2](org.apache.flink.tests.util.kafka.StreamingKafkaITCase) 
Time elapsed: 46.849 s <<< FAILURE! 2020-07-22T23:58:46.0861582Z 
java.lang.AssertionError: Messages from Kafka 0.10.2.2: [elephant,41,64213, 
giraffe,9,6, bee,31,65647, squirrel,86,66413] 
expected:<[elephant,27,64213]> but was:<[elephant,41,64213]>
{code}

> StreamingKafkaITCase failure on Azure
> -
>
> Key: FLINK-17260
> URL: https://issues.apache.org/jira/browse/FLINK-17260
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Roman Khachatryan
>Assignee: Chesnay Schepler
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.0
>
>
> [https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_apis/build/builds/7544/logs/165]
>  
> {code:java}
> 2020-04-16T00:12:32.2848429Z [INFO] Running 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase
> 2020-04-16T00:14:47.9100927Z [ERROR] Tests run: 3, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 135.621 s <<< FAILURE! - in 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase
> 2020-04-16T00:14:47.9103036Z [ERROR] testKafka[0: 
> kafka-version:0.10.2.0](org.apache.flink.tests.util.kafka.StreamingKafkaITCase)
>   Time elapsed: 46.222 s  <<<  FAILURE!
> 2020-04-16T00:14:47.9104033Z java.lang.AssertionError: 
> expected:<[elephant,27,64213]> but was:<[]>
> 2020-04-16T00:14:47.9104638Zat org.junit.Assert.fail(Assert.java:88)
> 2020-04-16T00:14:47.9105148Zat 
> org.junit.Assert.failNotEquals(Assert.java:834)
> 2020-04-16T00:14:47.9105701Zat 
> org.junit.Assert.assertEquals(Assert.java:118)
> 2020-04-16T00:14:47.9106239Zat 
> org.junit.Assert.assertEquals(Assert.java:144)
> 2020-04-16T00:14:47.9107177Zat 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase.testKafka(StreamingKafkaITCase.java:162)
> 2020-04-16T00:14:47.9107845Zat 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-04-16T00:14:47.9108434Zat 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-04-16T00:14:47.9109318Zat 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-04-16T00:14:47.9109914Zat 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-04-16T00:14:47.9110434Zat 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-04-16T00:14:47.9110985Zat 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-04-16T00:14:47.9111548Zat 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-04-16T00:14:47.9112083Zat 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-04-16T00:14:47.9112629Zat 
> org.apache.flink.util.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2020-04-16T00:14:47.9113145Zat 
> org.apache.flink.util.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2020-04-16T00:14:47.9113637Zat 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2020-04-16T00:14:47.9114072Zat 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2020-04-16T00:14:47.9114490Zat 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-04-16T00:14:47.9115256Zat 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-04-16T00:14:47.9115791Zat 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-04-16T00:14:47.9116292Zat 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-04-16T00:14:47.9116736Zat 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-04-16T00:14:47.9117779Zat 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-04-16T00:14:47.9118274Zat 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-04-16T00:14:47.9118766Zat 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-04-16T00:14:47.9119204Zat 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-04-16T00:14:47.9119625Zat 
> org.junit.runners.Suite.runChild(Suite.java:128)
> 2020-04-16T00:14:47.9120005Zat 
> org.junit.runners.Suite.ru

[GitHub] [flink] danny0405 commented on pull request #12919: [FLINK-16048][avro] Support read/write confluent schema registry avro…

2020-07-22 Thread GitBox


danny0405 commented on pull request #12919:
URL: https://github.com/apache/flink/pull/12919#issuecomment-662792122


   > Would `flink-avro-confluent` be a better module name than 
`flink-avro-confluent-registry`? IMO `registry` has nothing to do with the 
format itself.
   
   It depends on how you look at it; the Confluent Avro format is designed 
mainly for use with the Schema Registry.
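For background on why the format is tied to the registry: Confluent's wire format prefixes every Avro record with a magic byte (0x0) and a 4-byte big-endian schema-registry id, so a reader can fetch the writer schema by id before decoding the payload. A minimal self-contained Java sketch of just the framing (class and method names are hypothetical, not Flink or Confluent API):

```java
import java.nio.ByteBuffer;

public class ConfluentWireFormat {
    // Frame an already-serialized Avro payload: magic byte + schema id + payload.
    static byte[] frame(int schemaId, byte[] avroPayload) {
        ByteBuffer buf = ByteBuffer.allocate(5 + avroPayload.length);
        buf.put((byte) 0x0);      // magic byte
        buf.putInt(schemaId);     // 4-byte big-endian schema id
        buf.put(avroPayload);
        return buf.array();
    }

    // Read back the schema id a consumer would use to look up the writer schema.
    static int schemaId(byte[] framed) {
        ByteBuffer buf = ByteBuffer.wrap(framed);
        if (buf.get() != 0x0) {
            throw new IllegalArgumentException("unknown magic byte");
        }
        return buf.getInt();
    }

    public static void main(String[] args) {
        byte[] framed = frame(42, new byte[] {1, 2, 3});
        System.out.println(schemaId(framed)); // prints 42
    }
}
```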







[jira] [Updated] (FLINK-18341) Building Flink Walkthrough Table Java 0.1 COMPILATION ERROR

2020-07-22 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-18341:

Fix Version/s: 1.11.2

> Building Flink Walkthrough Table Java 0.1 COMPILATION ERROR
> ---
>
> Key: FLINK-18341
> URL: https://issues.apache.org/jira/browse/FLINK-18341
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API, Tests
>Affects Versions: 1.12.0, 1.11.1
>Reporter: Piotr Nowojski
>Assignee: Seth Wiesman
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.11.2
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3652&view=logs&j=08866332-78f7-59e4-4f7e-49a56faa3179&t=931b3127-d6ee-5f94-e204-48d51cd1c334
> {noformat}
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-22294375765/flink-walkthrough-table-java/src/main/java/org/apache/flink/walkthrough/SpendReport.java:[23,46]
>  cannot access org.apache.flink.table.api.bridge.java.BatchTableEnvironment
>   bad class file: 
> /home/vsts/work/1/.m2/repository/org/apache/flink/flink-table-api-java-bridge_2.11/1.12-SNAPSHOT/flink-table-api-java-bridge_2.11-1.12-SNAPSHOT.jar(org/apache/flink/table/api/bridge/java/BatchTableEnvironment.class)
> class file has wrong version 55.0, should be 52.0
> Please remove or make sure it appears in the correct subdirectory of the 
> classpath.
> (...)
> [FAIL] 'Walkthrough Table Java nightly end-to-end test' failed after 0 
> minutes and 4 seconds! Test exited with exit code 1
> {noformat}





[jira] [Commented] (FLINK-18341) Building Flink Walkthrough Table Java 0.1 COMPILATION ERROR

2020-07-22 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163178#comment-17163178
 ] 

Dian Fu commented on FLINK-18341:
-

I reopen it for now. We can close it after fixing it in 1.11.

> Building Flink Walkthrough Table Java 0.1 COMPILATION ERROR
> ---
>
> Key: FLINK-18341
> URL: https://issues.apache.org/jira/browse/FLINK-18341
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API, Tests
>Affects Versions: 1.12.0, 1.11.1
>Reporter: Piotr Nowojski
>Assignee: Seth Wiesman
>Priority: Critical
>  Labels: test-stability
>





[jira] [Updated] (FLINK-18341) Building Flink Walkthrough Table Java 0.1 COMPILATION ERROR

2020-07-22 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-18341:

Affects Version/s: 1.11.1

> Building Flink Walkthrough Table Java 0.1 COMPILATION ERROR
> ---
>
> Key: FLINK-18341
> URL: https://issues.apache.org/jira/browse/FLINK-18341
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API, Tests
>Affects Versions: 1.12.0, 1.11.1
>Reporter: Piotr Nowojski
>Assignee: Seth Wiesman
>Priority: Critical
>  Labels: test-stability
>





[jira] [Reopened] (FLINK-18341) Building Flink Walkthrough Table Java 0.1 COMPILATION ERROR

2020-07-22 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu reopened FLINK-18341:
-

> Building Flink Walkthrough Table Java 0.1 COMPILATION ERROR
> ---
>
> Key: FLINK-18341
> URL: https://issues.apache.org/jira/browse/FLINK-18341
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API, Tests
>Affects Versions: 1.12.0
>Reporter: Piotr Nowojski
>Assignee: Seth Wiesman
>Priority: Critical
>  Labels: test-stability
>





[jira] [Comment Edited] (FLINK-18341) Building Flink Walkthrough Table Java 0.1 COMPILATION ERROR

2020-07-22 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163176#comment-17163176
 ] 

Dian Fu edited comment on FLINK-18341 at 7/23/20, 2:37 AM:
---

Hi [~sjwiesman], I took a quick look at this issue. I think it fails on 1.11 
because the following commit (there are two commits in the original 
[PR|https://github.com/apache/flink/pull/12592]) was not cherry-picked to the 
1.11 branch: 
https://github.com/apache/flink/commit/57267f277339f7ba3790c8e0110cdb77e9593073

Could you confirm?


was (Author: dian.fu):
Hi [~sjwiesman] , I take a quick look at this issue and think that the reason 
it fails on 1.11 is because you forgot to cherry-pick the following 
commit(there are two commits in the original PR) to 1.11 branch: 
https://github.com/apache/flink/commit/57267f277339f7ba3790c8e0110cdb77e9593073

Could you confirm that?

> Building Flink Walkthrough Table Java 0.1 COMPILATION ERROR
> ---
>
> Key: FLINK-18341
> URL: https://issues.apache.org/jira/browse/FLINK-18341
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API, Tests
>Affects Versions: 1.12.0
>Reporter: Piotr Nowojski
>Assignee: Seth Wiesman
>Priority: Critical
>  Labels: test-stability
>





[jira] [Commented] (FLINK-18341) Building Flink Walkthrough Table Java 0.1 COMPILATION ERROR

2020-07-22 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163176#comment-17163176
 ] 

Dian Fu commented on FLINK-18341:
-

Hi [~sjwiesman], I took a quick look at this issue. I think it fails on 1.11 
because the following commit (there are two commits in the original PR) was 
not cherry-picked to the 1.11 branch: 
https://github.com/apache/flink/commit/57267f277339f7ba3790c8e0110cdb77e9593073

Could you confirm?

> Building Flink Walkthrough Table Java 0.1 COMPILATION ERROR
> ---
>
> Key: FLINK-18341
> URL: https://issues.apache.org/jira/browse/FLINK-18341
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API, Tests
>Affects Versions: 1.12.0
>Reporter: Piotr Nowojski
>Assignee: Seth Wiesman
>Priority: Critical
>  Labels: test-stability
>





[jira] [Commented] (FLINK-18341) Building Flink Walkthrough Table Java 0.1 COMPILATION ERROR

2020-07-22 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163174#comment-17163174
 ] 

Dian Fu commented on FLINK-18341:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4744&view=logs&j=08866332-78f7-59e4-4f7e-49a56faa3179&t=3e8647c1-5a28-5917-dd93-bf78594ea994

> Building Flink Walkthrough Table Java 0.1 COMPILATION ERROR
> ---
>
> Key: FLINK-18341
> URL: https://issues.apache.org/jira/browse/FLINK-18341
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API, Tests
>Affects Versions: 1.12.0
>Reporter: Piotr Nowojski
>Assignee: Seth Wiesman
>Priority: Critical
>  Labels: test-stability
>





[GitHub] [flink] liuyongvs closed pull request #12860: [FLINK-17426][blink plannger] Dynamic Source supportsLimit pushdown

2020-07-22 Thread GitBox


liuyongvs closed pull request #12860:
URL: https://github.com/apache/flink/pull/12860


   







[GitHub] [flink] flinkbot commented on pull request #12964: [FLINK-17426][blink plannger] Dynamic Source supportsLimit pushdown

2020-07-22 Thread GitBox


flinkbot commented on pull request #12964:
URL: https://github.com/apache/flink/pull/12964#issuecomment-662788621


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 984b8a182e5a5cc64a9e1957710e9b9fd4768a56 (Thu Jul 23 
02:22:57 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[GitHub] [flink] liuyongvs opened a new pull request #12964: [FLINK-17426][blink plannger] Dynamic Source supportsLimit pushdown

2020-07-22 Thread GitBox


liuyongvs opened a new pull request #12964:
URL: https://github.com/apache/flink/pull/12964


   ## What is the purpose of the change
   
   - Make DynamicSource support the LimitPushDown rule
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
   - Added LimitTest to verify the plan
   - Extended LimitITCase (batch only) to verify the limited results
   - Made TestValueSource support LimitPushDown
   
   ## Does this pull request potentially affect one of the following parts:
   
   - Dependencies (does it add or upgrade a dependency): ( no)
   - The public API, i.e., is any changed class annotated with 
@Public(Evolving): (no)
   - The serializers: ( no)
   - The runtime per-record code paths (performance sensitive): (no)
   - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
   - The S3 file system connector: (no)
   
   ## Documentation
   
   - Does this pull request introduce a new feature? (yes)
   - If yes, how is the feature documented? (JavaDocs)
   







[jira] [Commented] (FLINK-18663) Fix Flink On YARN AM not exit

2020-07-22 Thread tartarus (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163172#comment-17163172
 ] 

tartarus commented on FLINK-18663:
--

Requests received by Akka appear to be processed single-threaded. Since this 
Flink job has two tasks, we suspect the timeout exception is related to that 
request handling.

> Fix Flink On YARN AM not exit
> -
>
> Key: FLINK-18663
> URL: https://issues.apache.org/jira/browse/FLINK-18663
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / REST
>Affects Versions: 1.10.0, 1.10.1, 1.11.0
>Reporter: tartarus
>Priority: Major
>  Labels: pull-request-available
> Attachments: 110.png, 111.png, 
> C49A7310-F932-451B-A203-6D17F3140C0D.png, e18e00dd6664485c2ff55284fe969474.png
>
>
> AbstractHandler throws an NPE because FlinkHttpObjectAggregator is null.
> When the REST handler throws an exception, it executes this code:
> {code:java}
> private CompletableFuture<Void> handleException(Throwable throwable, 
> ChannelHandlerContext ctx, HttpRequest httpRequest) {
>   FlinkHttpObjectAggregator flinkHttpObjectAggregator = 
> ctx.pipeline().get(FlinkHttpObjectAggregator.class);
>   int maxLength = flinkHttpObjectAggregator.maxContentLength() - 
> OTHER_RESP_PAYLOAD_OVERHEAD;
>   if (throwable instanceof RestHandlerException) {
>   RestHandlerException rhe = (RestHandlerException) throwable;
>   String stackTrace = ExceptionUtils.stringifyException(rhe);
>   String truncatedStackTrace = Ascii.truncate(stackTrace, 
> maxLength, "...");
>   if (log.isDebugEnabled()) {
>   log.error("Exception occurred in REST handler.", rhe);
>   } else {
>   log.error("Exception occurred in REST handler: {}", 
> rhe.getMessage());
>   }
>   return HandlerUtils.sendErrorResponse(
>   ctx,
>   httpRequest,
>   new ErrorResponseBody(truncatedStackTrace),
>   rhe.getHttpResponseStatus(),
>   responseHeaders);
>   } else {
>   log.error("Unhandled exception.", throwable);
>   String stackTrace = String.format("<Exception on server side:%n%s%nEnd of exception on server side>",
>   ExceptionUtils.stringifyException(throwable));
>   String truncatedStackTrace = Ascii.truncate(stackTrace, 
> maxLength, "...");
>   return HandlerUtils.sendErrorResponse(
>   ctx,
>   httpRequest,
>   new ErrorResponseBody(Arrays.asList("Internal server 
> error.", truncatedStackTrace)),
>   HttpResponseStatus.INTERNAL_SERVER_ERROR,
>   responseHeaders);
>   }
> }
> {code}
> but flinkHttpObjectAggregator is null in some cases, so this throws an NPE.
> This method is called by AbstractHandler#respondAsLeader:
> {code:java}
> requestProcessingFuture
> 	.whenComplete((Void ignored, Throwable throwable) -> {
> 		if (throwable != null) {
> 			handleException(ExceptionUtils.stripCompletionException(throwable), ctx, httpRequest)
> 				.whenComplete((Void ignored2, Throwable throwable2) -> finalizeRequestProcessing(finalUploadedFiles));
> 		} else {
> 			finalizeRequestProcessing(finalUploadedFiles);
> 		}
> 	});
> {code}
>  The result is that the InFlightRequestTracker cannot be cleared,
> so the CompletableFuture returned by the handler's closeAsync() never completes.
> !C49A7310-F932-451B-A203-6D17F3140C0D.png!
> !e18e00dd6664485c2ff55284fe969474.png!
>  
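A minimal sketch of a defensive fix for the NPE described above (hypothetical class and constant values, not necessarily the actual Flink patch): fall back to a default maximum content length when the aggregator is missing from the pipeline, so the truncation budget can always be computed.

```java
// Hypothetical sketch: null-safe computation of the error-response
// truncation budget. The constants are assumptions, not Flink's values.
class ErrorResponseTruncation {
    static final int DEFAULT_MAX_CONTENT_LENGTH = 104_857_600; // assumed fallback
    static final int OTHER_RESP_PAYLOAD_OVERHEAD = 1024;       // assumed overhead

    /** Truncation budget; tolerates a missing (null) aggregator max length. */
    static int maxErrorLength(Integer aggregatorMaxContentLength) {
        int max = aggregatorMaxContentLength != null
                ? aggregatorMaxContentLength
                : DEFAULT_MAX_CONTENT_LENGTH;
        return max - OTHER_RESP_PAYLOAD_OVERHEAD;
    }

    /** Ascii.truncate-like helper: cap s at maxLength, ending with "..." if cut. */
    static String truncate(String s, int maxLength) {
        if (s.length() <= maxLength) {
            return s;
        }
        return s.substring(0, Math.max(0, maxLength - 3)) + "...";
    }
}
```

With such a guard, handleException can still send a (truncated) error response even when FlinkHttpObjectAggregator is absent, and finalizeRequestProcessing runs, letting closeAsync complete.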



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18665) Filesystem connector should use TableSchema exclude computed columns

2020-07-22 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-18665:
-
Affects Version/s: 1.11.1

> Filesystem connector should use TableSchema exclude computed columns
> 
>
> Key: FLINK-18665
> URL: https://issues.apache.org/jira/browse/FLINK-18665
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem, Table SQL / Ecosystem
>Affects Versions: 1.11.1
>Reporter: Jark Wu
>Assignee: Leonard Xu
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.11.2
>
>
> This is reported in 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/How-to-use-a-nested-column-for-CREATE-TABLE-PARTITIONED-BY-td36796.html
> {code}
> create table navi (
>   a STRING,
>   location ROW<lastUpdateTime BIGINT, transId STRING>
> ) with (
>   'connector' = 'filesystem',
>   'path' = 'east-out',
>   'format' = 'json'
> )
> CREATE TABLE output (
>   `partition` AS location.transId
> ) PARTITIONED BY (`partition`)
> WITH (
>   'connector' = 'filesystem',
>   'path' = 'east-out',
>   'format' = 'json'
> ) LIKE navi (EXCLUDING ALL)
> tEnv.sqlQuery("SELECT type, location FROM navi").executeInsert("output")
> {code}
> It throws the following exception 
> {code}
> Exception in thread "main" org.apache.flink.table.api.ValidationException: The field count of logical schema of the table does not match with the field count of physical schema.
> The logical schema: [STRING,ROW<`lastUpdateTime` BIGINT, `transId` STRING>]
> The physical schema: [STRING,ROW<`lastUpdateTime` BIGINT, `transId` STRING>,STRING].
> {code}
> The reason is that {{FileSystemTableFactory#createTableSource}} should use the
> schema excluding computed columns, not the original catalog table schema.
> [1]: 
> https://github.com/apache/flink/blob/master/flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/filesystem/FileSystemTableFactory.java#L78
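The fix direction can be sketched as follows (illustrative types, not Flink's actual TableSchema API): derive the physical schema by filtering out computed columns, so the logical and physical field counts match.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch with hypothetical types: a "physical" schema is the
// table schema with computed (generated) columns removed.
class SchemaUtil {
    static final class Column {
        final String name;
        final boolean computed; // true for columns like `partition` AS location.transId
        Column(String name, boolean computed) {
            this.name = name;
            this.computed = computed;
        }
    }

    /** Returns only the physical (non-computed) column names of a schema. */
    static List<String> physicalColumnNames(List<Column> schema) {
        List<String> physical = new ArrayList<>();
        for (Column c : schema) {
            if (!c.computed) {
                physical.add(c.name);
            }
        }
        return physical;
    }
}
```

In the reported case, dropping the computed `partition` column from the output table's schema leaves exactly the fields the filesystem source physically reads, which is what the validation expects.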



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-18665) Filesystem connector should use TableSchema exclude computed columns

2020-07-22 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-18665.

Resolution: Fixed

master: 675ace47d71ff482a38de66f05e98c53deaef47a

release-1.11: 4e8e542ba2999fc65bae913cfa8141b4d9de9c1b

> Filesystem connector should use TableSchema exclude computed columns
> 
>
> Key: FLINK-18665
> URL: https://issues.apache.org/jira/browse/FLINK-18665
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem, Table SQL / Ecosystem
>Reporter: Jark Wu
>Assignee: Leonard Xu
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.11.2
>
>
> This is reported in 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/How-to-use-a-nested-column-for-CREATE-TABLE-PARTITIONED-BY-td36796.html
> {code}
> create table navi (
>   a STRING,
>   location ROW<lastUpdateTime BIGINT, transId STRING>
> ) with (
>   'connector' = 'filesystem',
>   'path' = 'east-out',
>   'format' = 'json'
> )
> CREATE TABLE output (
>   `partition` AS location.transId
> ) PARTITIONED BY (`partition`)
> WITH (
>   'connector' = 'filesystem',
>   'path' = 'east-out',
>   'format' = 'json'
> ) LIKE navi (EXCLUDING ALL)
> tEnv.sqlQuery("SELECT type, location FROM navi").executeInsert("output")
> {code}
> It throws the following exception 
> {code}
> Exception in thread "main" org.apache.flink.table.api.ValidationException: The field count of logical schema of the table does not match with the field count of physical schema.
> The logical schema: [STRING,ROW<`lastUpdateTime` BIGINT, `transId` STRING>]
> The physical schema: [STRING,ROW<`lastUpdateTime` BIGINT, `transId` STRING>,STRING].
> {code}
> The reason is that {{FileSystemTableFactory#createTableSource}} should use the
> schema excluding computed columns, not the original catalog table schema.
> [1]: 
> https://github.com/apache/flink/blob/master/flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/filesystem/FileSystemTableFactory.java#L78



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #12957: [FLINK-18500][table] Make the legacy planner exception more clear whe…

2020-07-22 Thread GitBox


flinkbot edited a comment on pull request #12957:
URL: https://github.com/apache/flink/pull/12957#issuecomment-662341125


   
   ## CI report:
   
   * 027c4cb95690e75cb9a79e08bddcc67ee6d3a9c0 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4721)
 
   * 491363e1ed8d85b680cff74a8412fe4e878342b8 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4746)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] JingsongLi merged pull request #12954: [FLINK-18665][filesystem] Filesystem connector should use TableSchema exclude computed columns.

2020-07-22 Thread GitBox


JingsongLi merged pull request #12954:
URL: https://github.com/apache/flink/pull/12954


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12958: [FLINK-18625] [ Runtime / Coordination] Maintain redundant taskmanagers to speed up failover

2020-07-22 Thread GitBox


flinkbot edited a comment on pull request #12958:
URL: https://github.com/apache/flink/pull/12958#issuecomment-662378507


   
   ## CI report:
   
   * 65182ec167f1443aeb1b1f19e0c6c9bce665083c Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4724)
 
   * ca4d714f273259e0c1eb42238d455d331ad93996 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4747)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] JingsongLi commented on pull request #12953: [FLINK-18665][filesystem] Filesystem connector should use TableSchema exclude computed columns.

2020-07-22 Thread GitBox


JingsongLi commented on pull request #12953:
URL: https://github.com/apache/flink/pull/12953#issuecomment-662787006


   Thanks @leonardBang for contribution, merged.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12923: [FLINK-18621][sql-client] Simplify the methods of Executor interface in sql client

2020-07-22 Thread GitBox


flinkbot edited a comment on pull request #12923:
URL: https://github.com/apache/flink/pull/12923#issuecomment-660060951


   
   ## CI report:
   
   * 63cb77304f929aecf4ec45f3be7baac23bebbccd Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4643)
 
   * 81bfbff96af91c7543ab8222cfe6aadb551d3d15 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4745)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] KurtYoung commented on pull request #12919: [FLINK-16048][avro] Support read/write confluent schema registry avro…

2020-07-22 Thread GitBox


KurtYoung commented on pull request #12919:
URL: https://github.com/apache/flink/pull/12919#issuecomment-662787040


   Would `flink-avro-confluent` be a better module name than 
`flink-avro-confluent-registry`? IMO `registry` has nothing to do with the 
format itself.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] JingsongLi merged pull request #12953: [FLINK-18665][filesystem] Filesystem connector should use TableSchema exclude computed columns.

2020-07-22 Thread GitBox


JingsongLi merged pull request #12953:
URL: https://github.com/apache/flink/pull/12953


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12958: [FLINK-18625] [ Runtime / Coordination] Maintain redundant taskmanagers to speed up failover

2020-07-22 Thread GitBox


flinkbot edited a comment on pull request #12958:
URL: https://github.com/apache/flink/pull/12958#issuecomment-662378507


   
   ## CI report:
   
   * 65182ec167f1443aeb1b1f19e0c6c9bce665083c Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4724)
 
   * ca4d714f273259e0c1eb42238d455d331ad93996 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12923: [FLINK-18621][sql-client] Simplify the methods of Executor interface in sql client

2020-07-22 Thread GitBox


flinkbot edited a comment on pull request #12923:
URL: https://github.com/apache/flink/pull/12923#issuecomment-660060951


   
   ## CI report:
   
   * 63cb77304f929aecf4ec45f3be7baac23bebbccd Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4643)
 
   * 81bfbff96af91c7543ab8222cfe6aadb551d3d15 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12957: [FLINK-18500][table] Make the legacy planner exception more clear whe…

2020-07-22 Thread GitBox


flinkbot edited a comment on pull request #12957:
URL: https://github.com/apache/flink/pull/12957#issuecomment-662341125


   
   ## CI report:
   
   * 027c4cb95690e75cb9a79e08bddcc67ee6d3a9c0 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4721)
 
   * 491363e1ed8d85b680cff74a8412fe4e878342b8 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] godfreyhe commented on pull request #12923: [FLINK-18621][sql-client] Simplify the methods of Executor interface in sql client

2020-07-22 Thread GitBox


godfreyhe commented on pull request #12923:
URL: https://github.com/apache/flink/pull/12923#issuecomment-662779443


   > Can you rebase onto master to trigger testing?
   
   sure



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-18145) Segment optimization does not work in blink ?

2020-07-22 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he closed FLINK-18145.
--
Resolution: Invalid

> Segment optimization does not work in blink ?
> -
>
> Key: FLINK-18145
> URL: https://issues.apache.org/jira/browse/FLINK-18145
> Project: Flink
>  Issue Type: Wish
>  Components: Table SQL / Planner
>Reporter: hehuiyuan
>Priority: Minor
> Attachments: image-2020-06-05-14-56-01-710.png, 
> image-2020-06-05-14-56-48-625.png, image-2020-06-05-14-57-11-287.png, 
> image-2020-06-09-14-58-44-221.png
>
>
> DAG Segment Optimization: 
>  
> !image-2020-06-05-14-56-01-710.png|width=762,height=264!
> Code:
> {code:java}
>   StreamExecutionEnvironment env = EnvUtil.getEnv();
> env.setParallelism(1);
>   env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);
>   EnvironmentSettings bsSettings = 
> EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build();
>   StreamTableEnvironment tableEnv = 
> StreamTableEnvironment.create(env,bsSettings);
>   GeneratorTableSource tableSource = new GeneratorTableSource(2, 1, 70, 0);
>   tableEnv.registerTableSource("myTble",tableSource);
>   Table mytable = tableEnv.scan("myTble");
>   mytable.printSchema();
>   tableEnv.toAppendStream(mytable,Row.class).addSink(new 
> PrintSinkFunction<>()).setParallelism(2);
>   Table tableproc = tableEnv.sqlQuery("SELECT key, count(rowtime_string) as 
> countkey,TUMBLE_START(proctime, INTERVAL '30' SECOND) as tumblestart FROM 
> myTble group by TUMBLE(proctime, INTERVAL '30' SECOND) ,key");
>   tableproc.printSchema();
>   tableEnv.registerTable("t4",tableproc);
>   Table table = tableEnv.sqlQuery("SELECT key,count(rowtime_string) as 
> countkey,TUMBLE_START(proctime,  INTERVAL '24' HOUR) as tumblestart FROM 
> myTble group by TUMBLE(proctime,  INTERVAL '24' HOUR) ,key");
>   table.printSchema();
>   tableEnv.registerTable("t3",table);
>   String[] fields = new String[]{"key","countkey","tumblestart"};
>  TypeInformation[] fieldsType = new TypeInformation[3];
> fieldsType[0] = Types.INT;
> fieldsType[1] = Types.LONG;
>   fieldsType[2] = Types.SQL_TIMESTAMP;
>   PrintTableUpsertSink printTableSink = new 
> PrintTableUpsertSink(fields,fieldsType,true);
> tableEnv.registerTableSink("inserttable",printTableSink);
> tableEnv.sqlUpdate("insert into inserttable  select key,countkey,tumblestart 
> from t3");
>   String[] fieldsproc = new String[]{"key","countkey","tumblestart"};
>   TypeInformation[] fieldsTypeproc = new TypeInformation[3];
>   fieldsTypeproc[0] = Types.INT;
>   fieldsTypeproc[1] = Types.LONG;
>   fieldsTypeproc[2] = Types.SQL_TIMESTAMP;
>   PrintTableUpsertSink printTableSinkproc = new 
> PrintTableUpsertSink(fieldsproc,fieldsTypeproc,true);
>   tableEnv.registerTableSink("inserttableproc",printTableSinkproc);
>   tableEnv.sqlUpdate("insert into inserttableproc  select 
> key,countkey,tumblestart from t4");
> {code}
> I have a custom table source, then:
>     (1) transform the datastream using the `toAppendStream` method, then sink
>     (2) use a tumble window, then sink
>     (3) use another tumble window, then sink
> but the segment optimization didn't work.
>  
> !image-2020-06-05-14-57-11-287.png|width=546,height=388!  
>  
> *The source is executed by 3 threads and generates duplicate data 3 times*
>  
> !image-2020-06-05-14-56-48-625.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18145) Segment optimization does not work in blink ?

2020-07-22 Thread godfrey he (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163158#comment-17163158
 ] 

godfrey he commented on FLINK-18145:


I will close this issue

> Segment optimization does not work in blink ?
> -
>
> Key: FLINK-18145
> URL: https://issues.apache.org/jira/browse/FLINK-18145
> Project: Flink
>  Issue Type: Wish
>  Components: Table SQL / Planner
>Reporter: hehuiyuan
>Priority: Minor
> Attachments: image-2020-06-05-14-56-01-710.png, 
> image-2020-06-05-14-56-48-625.png, image-2020-06-05-14-57-11-287.png, 
> image-2020-06-09-14-58-44-221.png
>
>
> DAG Segment Optimization: 
>  
> !image-2020-06-05-14-56-01-710.png|width=762,height=264!
> Code:
> {code:java}
>   StreamExecutionEnvironment env = EnvUtil.getEnv();
> env.setParallelism(1);
>   env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);
>   EnvironmentSettings bsSettings = 
> EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build();
>   StreamTableEnvironment tableEnv = 
> StreamTableEnvironment.create(env,bsSettings);
>   GeneratorTableSource tableSource = new GeneratorTableSource(2, 1, 70, 0);
>   tableEnv.registerTableSource("myTble",tableSource);
>   Table mytable = tableEnv.scan("myTble");
>   mytable.printSchema();
>   tableEnv.toAppendStream(mytable,Row.class).addSink(new 
> PrintSinkFunction<>()).setParallelism(2);
>   Table tableproc = tableEnv.sqlQuery("SELECT key, count(rowtime_string) as 
> countkey,TUMBLE_START(proctime, INTERVAL '30' SECOND) as tumblestart FROM 
> myTble group by TUMBLE(proctime, INTERVAL '30' SECOND) ,key");
>   tableproc.printSchema();
>   tableEnv.registerTable("t4",tableproc);
>   Table table = tableEnv.sqlQuery("SELECT key,count(rowtime_string) as 
> countkey,TUMBLE_START(proctime,  INTERVAL '24' HOUR) as tumblestart FROM 
> myTble group by TUMBLE(proctime,  INTERVAL '24' HOUR) ,key");
>   table.printSchema();
>   tableEnv.registerTable("t3",table);
>   String[] fields = new String[]{"key","countkey","tumblestart"};
>  TypeInformation[] fieldsType = new TypeInformation[3];
> fieldsType[0] = Types.INT;
> fieldsType[1] = Types.LONG;
>   fieldsType[2] = Types.SQL_TIMESTAMP;
>   PrintTableUpsertSink printTableSink = new 
> PrintTableUpsertSink(fields,fieldsType,true);
> tableEnv.registerTableSink("inserttable",printTableSink);
> tableEnv.sqlUpdate("insert into inserttable  select key,countkey,tumblestart 
> from t3");
>   String[] fieldsproc = new String[]{"key","countkey","tumblestart"};
>   TypeInformation[] fieldsTypeproc = new TypeInformation[3];
>   fieldsTypeproc[0] = Types.INT;
>   fieldsTypeproc[1] = Types.LONG;
>   fieldsTypeproc[2] = Types.SQL_TIMESTAMP;
>   PrintTableUpsertSink printTableSinkproc = new 
> PrintTableUpsertSink(fieldsproc,fieldsTypeproc,true);
>   tableEnv.registerTableSink("inserttableproc",printTableSinkproc);
>   tableEnv.sqlUpdate("insert into inserttableproc  select 
> key,countkey,tumblestart from t4");
> {code}
> I have a custom table source, then:
>     (1) transform the datastream using the `toAppendStream` method, then sink
>     (2) use a tumble window, then sink
>     (3) use another tumble window, then sink
> but the segment optimization didn't work.
>  
> !image-2020-06-05-14-57-11-287.png|width=546,height=388!  
>  
> *The source is executed by 3 threads and generates duplicate data 3 times*
>  
> !image-2020-06-05-14-56-48-625.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi commented on pull request #12923: [FLINK-18621][sql-client] Simplify the methods of Executor interface in sql client

2020-07-22 Thread GitBox


JingsongLi commented on pull request #12923:
URL: https://github.com/apache/flink/pull/12923#issuecomment-662778272


   Can you rebase onto master to trigger testing?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] austince commented on pull request #12729: [FLINK-10195][connectors/rabbitmq] Allow setting QoS

2020-07-22 Thread GitBox


austince commented on pull request #12729:
URL: https://github.com/apache/flink/pull/12729#issuecomment-662736061


   @dawidwys @aljoscha - hey guys, are you able to take a look at this when you 
have a chance? Thank you!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12963: [FLINK-18421][checkpointing] Fix logging of RejectedExecutionException logging

2020-07-22 Thread GitBox


flinkbot edited a comment on pull request #12963:
URL: https://github.com/apache/flink/pull/12963#issuecomment-662568757


   
   ## CI report:
   
   * eb8ceafd55cb781d6532752357263430fefaa6fe Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4742)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18646) Managed memory released check can block RPC thread

2020-07-22 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163083#comment-17163083
 ] 

Till Rohrmann commented on FLINK-18646:
---

[~TsReaper] do you know how the high number of GC calls happens? Looking at the 
code, we must have increased the {{sleeps}} counter to 9; then, after calling 
{{System.gc()}} once, {{JavaGcCleanerWrapper.tryRunPendingCleaners()}} must 
return {{true}} for several calls, which keeps the counter at 9 and calls 
{{System.gc()}} again for every pending cleaner. This does not look like 
intended behavior to me.

[~azagrebin] how long do we have to wait until we can expect that all segments 
have been detected by GC so that we can run the respective cleaners? What do 
you mean with keeping some buffer to compensate for GC? What do you mean with 
hitting the global limit?
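The suspected loop behavior can be sketched as a small model (class, constant, and method names are hypothetical; the real logic lives in {{UnsafeMemoryBudget}}): once the back-off counter reaches its threshold, every further iteration in which pending cleaners ran triggers another forced GC.

```java
// Simplified model of the described wait loop, with assumed names/threshold.
class GcLoopModel {
    static final int MAX_SLEEPS = 9; // assumed threshold from the discussion

    /**
     * Counts how many System.gc() calls the loop would issue, given the
     * per-iteration results of tryRunPendingCleaners().
     */
    static int countGcCalls(boolean[] cleanersRanPerIteration) {
        int sleeps = 0;
        int gcCalls = 0;
        for (boolean cleanersRan : cleanersRanPerIteration) {
            if (sleeps < MAX_SLEEPS) {
                sleeps++;      // back off without forcing GC yet
            } else if (cleanersRan) {
                gcCalls++;     // counter stuck at MAX_SLEEPS: force GC again
            } else {
                break;         // nothing pending anymore; stop waiting
            }
        }
        return gcCalls;
    }
}
```

Under this model, a long run of iterations where cleaners keep turning up pending work yields one forced GC per iteration, matching the "high number of GC calls" observation.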

> Managed memory released check can block RPC thread
> --
>
> Key: FLINK-18646
> URL: https://issues.apache.org/jira/browse/FLINK-18646
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.11.0
>Reporter: Andrey Zagrebin
>Assignee: Andrey Zagrebin
>Priority: Critical
> Fix For: 1.11.2
>
> Attachments: log1.png, log2.png
>
>
> UnsafeMemoryBudget#verifyEmpty, called on slot freeing, needs time to wait on 
> GC of all allocated/released managed memory. If there are a lot of segments 
> to GC then it can take time to finish the check. If slot freeing happens in 
> RPC thread, the GC waiting can block it and TM risks to miss its heartbeat.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18451) Flink HA on yarn may appear TaskManager double running when HA is restored

2020-07-22 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163079#comment-17163079
 ] 

Till Rohrmann commented on FLINK-18451:
---

I have created FLINK-18677 to track the problem with the 
{{ZooKeeperLeaderRetrievalService}}.

> Flink HA on yarn may appear TaskManager double running when HA is restored
> --
>
> Key: FLINK-18451
> URL: https://issues.apache.org/jira/browse/FLINK-18451
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.9.0
>Reporter: ming li
>Priority: Major
>  Labels: high-availability
>
> We found that when NodeManager is lost, the new JobManager will be restored 
> by Yarn's ResourceManager, and the Leader node will be registered on 
> Zookeeper. The original TaskManager will find the new JobManager through 
> Zookeeper and close the old JobManager connection. At this time, all tasks of 
> the TaskManager will fail. The new JobManager will directly perform job 
> recovery and recover from the latest checkpoint.
> However, during the recovery process, when a TaskManager is abnormally 
> connected to Zookeeper, it is not registered with the new JobManager in time. 
> Before the following timeout:
> 1. Connect with Zookeeper
> 2. Heartbeat with JobManager/ResourceManager
> Task will continue to run (assuming that Task can run independently in 
> TaskManager). Assuming that HA recovers fast enough, some Task double runs 
> will occur at this time.
> Do we need to make a persistent record of the cluster resources we allocated 
> during the runtime, and use it to judge all Task stops when HA is restored?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-18451) Flink HA on yarn may appear TaskManager double running when HA is restored

2020-07-22 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163075#comment-17163075
 ] 

Till Rohrmann edited comment on FLINK-18451 at 7/22/20, 9:18 PM:
-

I think one cause of the described problem is the abnormal ZooKeeper 
connection. We don't invalidate the current leader in 
{{ZooKeeperLeaderRetrievalService}} if the ZooKeeper connection becomes 
suspended. This could explain why the {{TaskManager}} does not stop the running 
tasks. This is definitely something we should investigate and then also fix if 
it is indeed a problem.

Concerning the data consumption I am not entirely sure whether I fully 
understand the problematic scenario. Are you concerned about multiple Flink 
tasks reading from an upstream Flink task or about multiple sources reading 
from an external system? The former scenario should not be possible because 
after a restart, the producers will be deployed with a different 
{{ExecutionAttemptID}} which will prevent old tasks from reading their data. 
For the latter case, the external system Flink reads from needs to be 
replayable anyway to support at-least-once or higher processing guarantees. 
Hence, I believe that this should not be a problem.


was (Author: till.rohrmann):
I think one problem of the described problem is the abnormal ZooKeeper 
connection. We don't invalidate the current leader in 
{{ZooKeeperLeaderRetrievalService}} if the ZooKeeper connection becomes 
suspended. This could explain why the {{TaskManager}} does not stop the running 
tasks. This is definitely something we should investigate and then also fix if 
it is indeed a problem.

Concerning the data consumption I am not entirely sure whether I fully 
understand the problematic scenario. Are you concerned about multiple Flink 
tasks reading from an upstream Flink task or about multiple sources reading 
from an external system? The former scenario should not be possible because 
after a restart, the producers will be deployed with a different 
{{ExecutionAttemptID}} which will prevent old tasks from reading their data. 
For the latter case, the external system Flink reads from needs to be 
replayable anyway to support at-least-once or higher processing guarantees. 
Hence, I believe that this should not be a problem.

> Flink HA on yarn may appear TaskManager double running when HA is restored
> --
>
> Key: FLINK-18451
> URL: https://issues.apache.org/jira/browse/FLINK-18451
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.9.0
>Reporter: ming li
>Priority: Major
>  Labels: high-availability
>
> We found that when NodeManager is lost, the new JobManager will be restored 
> by Yarn's ResourceManager, and the Leader node will be registered on 
> Zookeeper. The original TaskManager will find the new JobManager through 
> Zookeeper and close the old JobManager connection. At this time, all tasks of 
> the TaskManager will fail. The new JobManager will directly perform job 
> recovery and recover from the latest checkpoint.
> However, during the recovery process, when a TaskManager is abnormally 
> connected to Zookeeper, it is not registered with the new JobManager in time. 
> Before the following timeout:
> 1. Connect with Zookeeper
> 2. Heartbeat with JobManager/ResourceManager
> Task will continue to run (assuming that Task can run independently in 
> TaskManager). Assuming that HA recovers fast enough, some Task double runs 
> will occur at this time.
> Do we need to make a persistent record of the cluster resources we allocated 
> during the runtime, and use it to judge all Task stops when HA is restored?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18677) ZooKeeperLeaderRetrievalService does not invalidate leader in case of SUSPENDED connection

2020-07-22 Thread Till Rohrmann (Jira)
Till Rohrmann created FLINK-18677:
-

 Summary: ZooKeeperLeaderRetrievalService does not invalidate 
leader in case of SUSPENDED connection
 Key: FLINK-18677
 URL: https://issues.apache.org/jira/browse/FLINK-18677
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.11.1, 1.10.1, 1.12.0
Reporter: Till Rohrmann
 Fix For: 1.12.0


The {{ZooKeeperLeaderRetrievalService}} does not invalidate the leader if the 
ZooKeeper connection gets SUSPENDED. This means that a {{TaskManager}} won't 
cancel its running tasks even though it might miss a leader change. I think we 
should at least make it configurable whether in such a situation the leader 
listener should be informed about the lost leadership. Otherwise, we might run 
into the situation where an old and a newly recovered instance of a {{Task}} 
can run at the same time.
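The proposed behaviour could look roughly like the sketch below. This is a hypothetical illustration, not Flink code: the class name, {{ConnState}}, and the {{revokeOnSuspended}} flag are all made up to show the suggested configurable leader revocation on a SUSPENDED connection.

```java
import java.util.Optional;
import java.util.concurrent.atomic.AtomicReference;

/** Hypothetical sketch (names are not Flink's) of the proposed behavior:
 *  on a SUSPENDED ZooKeeper connection, optionally revoke the cached leader
 *  so listeners (e.g. a TaskManager) can cancel their running tasks. */
public class LeaderRetrievalSketch {
    enum ConnState { CONNECTED, SUSPENDED, LOST }

    private final boolean revokeOnSuspended;   // the proposed config switch
    private final AtomicReference<String> leader = new AtomicReference<>();

    LeaderRetrievalSketch(boolean revokeOnSuspended) {
        this.revokeOnSuspended = revokeOnSuspended;
    }

    void onLeaderElected(String leaderAddress) {
        leader.set(leaderAddress);
    }

    void onConnectionStateChanged(ConnState state) {
        // LOST always revokes; SUSPENDED revokes only if configured --
        // the configurability suggested above.
        if (state == ConnState.LOST
                || (state == ConnState.SUSPENDED && revokeOnSuspended)) {
            leader.set(null);
        }
    }

    Optional<String> currentLeader() {
        return Optional.ofNullable(leader.get());
    }

    public static void main(String[] args) {
        LeaderRetrievalSketch retrieval = new LeaderRetrievalSketch(true);
        retrieval.onLeaderElected("akka.tcp://flink@jobmanager:6123");
        retrieval.onConnectionStateChanged(ConnState.SUSPENDED);
        // With revocation enabled, the cached leader is cleared on SUSPENDED.
        System.out.println("leader after SUSPENDED: " + retrieval.currentLeader());
    }
}
```

With {{revokeOnSuspended}} set to false the sketch reproduces the current behaviour (the stale leader survives the suspension); set to true it forces listeners to re-resolve the leader.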



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18451) Flink HA on yarn may appear TaskManager double running when HA is restored

2020-07-22 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163075#comment-17163075
 ] 

Till Rohrmann commented on FLINK-18451:
---

I think one cause of the described problem is the abnormal ZooKeeper 
connection. We don't invalidate the current leader in 
{{ZooKeeperLeaderRetrievalService}} if the ZooKeeper connection becomes 
suspended. This could explain why the {{TaskManager}} does not stop the running 
tasks. This is definitely something we should investigate and then fix if it is 
indeed a problem.

Concerning the data consumption I am not entirely sure whether I fully 
understand the problematic scenario. Are you concerned about multiple Flink 
tasks reading from an upstream Flink task or about multiple sources reading 
from an external system? The former scenario should not be possible because 
after a restart, the producers will be deployed with a different 
{{ExecutionAttemptID}} which will prevent old tasks from reading their data. 
For the latter case, the external system Flink reads from needs to be 
replayable anyway to support at-least-once or higher processing guarantees. 
Hence, I believe that this should not be a problem.

> Flink HA on yarn may appear TaskManager double running when HA is restored
> --
>
> Key: FLINK-18451
> URL: https://issues.apache.org/jira/browse/FLINK-18451
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.9.0
>Reporter: ming li
>Priority: Major
>  Labels: high-availability
>
> We found that when a NodeManager is lost, a new JobManager will be restored 
> by Yarn's ResourceManager, and the leader node will be registered in 
> Zookeeper. The original TaskManager will find the new JobManager through 
> Zookeeper and close the connection to the old JobManager. At this point, all 
> tasks of the TaskManager will fail. The new JobManager will directly perform 
> job recovery and recover from the latest checkpoint.
> However, during the recovery process, if a TaskManager's connection to 
> Zookeeper is abnormal, it may not register with the new JobManager in time. 
> Until one of the following times out:
> 1. the connection with Zookeeper
> 2. the heartbeat with the JobManager/ResourceManager
> its tasks will continue to run (assuming a Task can run independently in the 
> TaskManager). If HA recovery is fast enough, some Tasks will therefore be 
> running twice for a while.
> Should we keep a persistent record of the cluster resources allocated at 
> runtime, and use it to ensure that all such Tasks are stopped when HA is 
> restored?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17024) Make AkkaRpcService#getExecutor return a custom thread pool

2020-07-22 Thread Till Rohrmann (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163057#comment-17163057
 ] 

Till Rohrmann commented on FLINK-17024:
---

I guess the main benefit would be not having to introduce a new {{Executor}} if 
you just want to run some futures outside of the {{RpcService}} thread pool. It 
would be somewhat similar to {{ForkJoinPool#commonPool()}} which is used when 
not specifying an explicit executor with any of the async {{CompletableFuture}} 
calls. But for any heavy workloads it is usually a better idea to introduce a 
dedicated thread pool.

Since it is always a good idea not to offer ways to shoot yourself in the 
foot, it might actually be better to completely remove 
{{RpcService.getExecutor}} and {{RpcService.getScheduledExecutor}} so that 
users don't use them wrongly. Hence, +1 for Chesnay's proposal.
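The default-executor behaviour mentioned above can be seen directly with {{CompletableFuture}}. A minimal sketch (the class and method names are illustrative, not from Flink) contrasting the implicit common pool with a dedicated pool:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ExecutorChoiceDemo {
    /** Name of the thread an async stage runs on with no explicit executor:
     *  ForkJoinPool.commonPool() (or a per-task thread when the common pool's
     *  parallelism is below 2). */
    static String defaultExecutorThread() {
        return CompletableFuture
                .supplyAsync(() -> Thread.currentThread().getName())
                .join();
    }

    /** Name of the thread when an explicit executor is supplied: heavy or
     *  blocking work stays off the shared common pool. */
    static String dedicatedExecutorThread(ExecutorService pool) {
        return CompletableFuture
                .supplyAsync(() -> Thread.currentThread().getName(), pool)
                .join();
    }

    public static void main(String[] args) {
        System.out.println("default:   " + defaultExecutorThread());
        ExecutorService ioPool = Executors.newFixedThreadPool(4);
        System.out.println("dedicated: " + dedicatedExecutorThread(ioPool));
        ioPool.shutdown();
    }
}
```

This mirrors the trade-off described above: the no-argument overloads are convenient for light work, while a dedicated pool isolates heavy workloads from everything else sharing the common pool.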

> Make AkkaRpcService#getExecutor return a custom thread pool
> ---
>
> Key: FLINK-17024
> URL: https://issues.apache.org/jira/browse/FLINK-17024
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Reporter: Yang Wang
>Priority: Major
>
> Follow the discussion in this PR[1].
> Currently, {{AkkaRpcService#getExecutor}} returns Akka's underlying 
> dispatcher thread pool. It should not be used directly, since doing so risks 
> affecting the main thread of the rpc endpoint. We could return a custom 
> thread pool instead. Then it is safe to use for some I/O operations in the 
> {{RpcEndpoint}}, for example, {{stopPod}} in {{KubernetesResourceManager}}. 
> An elastic thread pool (from 4 up to 16 threads) is enough.
>  
> [1]. [https://github.com/apache/flink/pull/11427#discussion_r402738678]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #12963: [FLINK-18421][checkpointing] Fix logging of RejectedExecutionException logging

2020-07-22 Thread GitBox


flinkbot edited a comment on pull request #12963:
URL: https://github.com/apache/flink/pull/12963#issuecomment-662568757


   
   ## CI report:
   
   * 26cbc622c2ed413d4e14bbed17e39fb9dd449a80 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4741)
 
   * eb8ceafd55cb781d6532752357263430fefaa6fe Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4742)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] dannycranmer commented on a change in pull request #12944: [FLINK-18513][Kinesis] Add AWS SDK v2.x dependency and KinesisProxyV2

2020-07-22 Thread GitBox


dannycranmer commented on a change in pull request #12944:
URL: https://github.com/apache/flink/pull/12944#discussion_r459016637



##
File path: flink-connectors/flink-connector-kinesis/pom.xml
##
@@ -204,6 +224,12 @@ under the License.
 <include>com.amazonaws:*</include>
 <include>com.google.protobuf:*</include>
 <include>org.apache.httpcomponents:*</include>
+<include>software.amazon.awssdk:*</include>
+<include>software.amazon.eventstream:*</include>
+<include>software.amazon.ion:*</include>
+<include>org.reactivestreams:*</include>
+<include>io.netty:*</include>
+<include>com.typesafe.netty:*</include>

Review comment:
   Ok thanks I will take a look and update that





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12963: [FLINK-18421][checkpointing] Fix logging of RejectedExecutionException logging

2020-07-22 Thread GitBox


flinkbot edited a comment on pull request #12963:
URL: https://github.com/apache/flink/pull/12963#issuecomment-662568757


   
   ## CI report:
   
   * 26cbc622c2ed413d4e14bbed17e39fb9dd449a80 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4741)
 
   * eb8ceafd55cb781d6532752357263430fefaa6fe UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12729: [FLINK-10195][connectors/rabbitmq] Allow setting QoS

2020-07-22 Thread GitBox


flinkbot edited a comment on pull request #12729:
URL: https://github.com/apache/flink/pull/12729#issuecomment-647037124


   
   ## CI report:
   
   * 790ebc88d93408cba281d3836ea24e6f6bc4a866 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4739)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-18675) Checkpoint not maintaining minimum pause duration between checkpoints

2020-07-22 Thread Ravi Bhushan Ratnakar (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162932#comment-17162932
 ] 

Ravi Bhushan Ratnakar edited comment on FLINK-18675 at 7/22/20, 5:55 PM:
-

As per my understanding of the code, in the CheckpointCoordinator class, at 
line number 1512, the scheduleAtFixedRate method is being used. I think that we 
should use 
[scheduleWithFixedDelay|https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ScheduledExecutorService.html#scheduleWithFixedDelay-java.lang.Runnable-long-long-java.util.concurrent.TimeUnit-] instead.


was (Author: raviratnakar):
As per my understanding of the code, in the CheckpointCordinator class, at line 
number 1512 ,scheduleAtFixedRate method is being used. I think that we should 
use 
"[scheduleWithFixedDelay|#scheduleWithFixedDelay-java.lang.Runnable-long-long-java.util.concurrent.TimeUnit-]]{{"}}

> Checkpoint not maintaining minimum pause duration between checkpoints
> -
>
> Key: FLINK-18675
> URL: https://issues.apache.org/jira/browse/FLINK-18675
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.11.0
> Environment: !image.png!
>Reporter: Ravi Bhushan Ratnakar
>Priority: Critical
> Attachments: image.png
>
>
> I am running a streaming job with Flink 1.11.0 using kubernetes 
> infrastructure. I have configured checkpoint configuration like below
> Interval - 3 minutes
> Minimum pause between checkpoints - 3 minutes
> Checkpoint timeout - 10 minutes
> Checkpointing Mode - Exactly Once
> Number of Concurrent Checkpoint - 1
>  
> Other configs
> Time Characteristics - Processing Time
>  
> I am observing an unusual behaviour. *When a checkpoint completes 
> successfully and its end-to-end duration is almost equal to or greater than 
> the minimum pause duration, the next checkpoint gets triggered immediately 
> without maintaining the minimum pause duration*. Kindly notice this behaviour 
> from checkpoint id 194 onward in the attached screenshot.
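The suggested fix matches the documented difference between the two {{ScheduledExecutorService}} methods: {{scheduleAtFixedRate}} measures the period from start to start, so a long-running task eats into the pause, while {{scheduleWithFixedDelay}} measures the delay from task completion. A minimal, Flink-independent sketch (the class name and timings are illustrative only):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PauseDemo {
    /** Runs a 50 ms "checkpoint" with a 100 ms gap policy and returns the
     *  measured intervals between successive task starts, in milliseconds. */
    static List<Long> measure(boolean fixedDelay, int runs) throws InterruptedException {
        ScheduledExecutorService ses = Executors.newScheduledThreadPool(1);
        List<Long> gaps = new ArrayList<>();
        long[] lastStart = {-1};
        CountDownLatch done = new CountDownLatch(runs);
        Runnable task = () -> {
            long now = System.nanoTime();
            if (lastStart[0] >= 0) {
                synchronized (gaps) { gaps.add((now - lastStart[0]) / 1_000_000); }
            }
            lastStart[0] = now;
            try { Thread.sleep(50); } catch (InterruptedException ignored) { }
            done.countDown();
        };
        if (fixedDelay) {
            // Gap measured from completion: start-to-start ~ 50 + 100 ms.
            ses.scheduleWithFixedDelay(task, 0, 100, TimeUnit.MILLISECONDS);
        } else {
            // Period measured from start: start-to-start ~ 100 ms, so the
            // task's own 50 ms run time eats into the intended pause.
            ses.scheduleAtFixedRate(task, 0, 100, TimeUnit.MILLISECONDS);
        }
        done.await();
        ses.shutdownNow();
        synchronized (gaps) { return new ArrayList<>(gaps); }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("fixed rate gaps (ms):  " + measure(false, 4));
        System.out.println("fixed delay gaps (ms): " + measure(true, 4));
    }
}
```

With a task duration close to the period, the fixed-rate variant leaves almost no pause between runs, which is the symptom reported from checkpoint 194 onward.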



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-18675) Checkpoint not maintaining minimum pause duration between checkpoints

2020-07-22 Thread Ravi Bhushan Ratnakar (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162932#comment-17162932
 ] 

Ravi Bhushan Ratnakar edited comment on FLINK-18675 at 7/22/20, 5:55 PM:
-

As per my understanding of the code, in the CheckpointCoordinator class, at 
line number 1512, the scheduleAtFixedRate method is being used. I think that we 
should use 
[scheduleWithFixedDelay|https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ScheduledExecutorService.html#scheduleWithFixedDelay-java.lang.Runnable-long-long-java.util.concurrent.TimeUnit-] instead.


was (Author: raviratnakar):
As per my understanding of the code, in the CheckpointCordinator class, at line 
number [[#1512]|#L1512]] 
[https://github.com/apache/flink/blob/release-1.11.0/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java#L1512]
 ,scheduleAtFixedRate method is being used. I think that we should use 
"[scheduleWithFixedDelay|#scheduleWithFixedDelay-java.lang.Runnable-long-long-java.util.concurrent.TimeUnit-]]{{"}}

> Checkpoint not maintaining minimum pause duration between checkpoints
> -
>
> Key: FLINK-18675
> URL: https://issues.apache.org/jira/browse/FLINK-18675
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.11.0
> Environment: !image.png!
>Reporter: Ravi Bhushan Ratnakar
>Priority: Critical
> Attachments: image.png
>
>
> I am running a streaming job with Flink 1.11.0 using kubernetes 
> infrastructure. I have configured checkpoint configuration like below
> Interval - 3 minutes
> Minimum pause between checkpoints - 3 minutes
> Checkpoint timeout - 10 minutes
> Checkpointing Mode - Exactly Once
> Number of Concurrent Checkpoint - 1
>  
> Other configs
> Time Characteristics - Processing Time
>  
> I am observing an unusual behaviour. *When a checkpoint completes 
> successfully and its end-to-end duration is almost equal to or greater than 
> the minimum pause duration, the next checkpoint gets triggered immediately 
> without maintaining the minimum pause duration*. Kindly notice this behaviour 
> from checkpoint id 194 onward in the attached screenshot.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-18675) Checkpoint not maintaining minimum pause duration between checkpoints

2020-07-22 Thread Ravi Bhushan Ratnakar (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162932#comment-17162932
 ] 

Ravi Bhushan Ratnakar edited comment on FLINK-18675 at 7/22/20, 5:54 PM:
-

As per my understanding of the code, in the CheckpointCoordinator class, at 
line number 
[1512|https://github.com/apache/flink/blob/release-1.11.0/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java#L1512], 
the scheduleAtFixedRate method is being used. I think that we should use 
[scheduleWithFixedDelay|https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ScheduledExecutorService.html#scheduleWithFixedDelay-java.lang.Runnable-long-long-java.util.concurrent.TimeUnit-] instead.


was (Author: raviratnakar):
As per my understanding of the code, in the CheckpointCordinator class, at line 
number 
[1512|[https://github.com/apache/flink/blob/release-1.11.0/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java#L1512]]
 ,scheduleAtFixedRate method is being used. I think that we should use 
"[scheduleWithFixedDelay|[https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ScheduledExecutorService.html#scheduleWithFixedDelay-java.lang.Runnable-long-long-java.util.concurrent.TimeUnit-]]{{"}}

> Checkpoint not maintaining minimum pause duration between checkpoints
> -
>
> Key: FLINK-18675
> URL: https://issues.apache.org/jira/browse/FLINK-18675
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.11.0
> Environment: !image.png!
>Reporter: Ravi Bhushan Ratnakar
>Priority: Critical
> Attachments: image.png
>
>
> I am running a streaming job with Flink 1.11.0 using kubernetes 
> infrastructure. I have configured checkpoint configuration like below
> Interval - 3 minutes
> Minimum pause between checkpoints - 3 minutes
> Checkpoint timeout - 10 minutes
> Checkpointing Mode - Exactly Once
> Number of Concurrent Checkpoint - 1
>  
> Other configs
> Time Characteristics - Processing Time
>  
> I am observing an unusual behaviour. *When a checkpoint completes 
> successfully and its end-to-end duration is almost equal to or greater than 
> the minimum pause duration, the next checkpoint gets triggered immediately 
> without maintaining the minimum pause duration*. Kindly notice this behaviour 
> from checkpoint id 194 onward in the attached screenshot.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18676) Update version of aws to support use of default constructor of "WebIdentityTokenCredentialsProvider"

2020-07-22 Thread Ravi Bhushan Ratnakar (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Bhushan Ratnakar updated FLINK-18676:
--
Description: 
*Background:*

I am using Flink 1.11.0 on the Kubernetes platform. To give the 
taskmanager/jobmanager access to AWS services, we are using "IAM Roles for 
Service Accounts". I have configured the property below in flink-conf.yaml to 
use the credential provider.

fs.s3a.aws.credentials.provider: 
com.amazonaws.auth.WebIdentityTokenCredentialsProvider

 

*Issue:*

When the taskmanager/jobmanager starts up, it complains that 
"WebIdentityTokenCredentialsProvider" doesn't have a "public constructor", and 
the container doesn't come up.

 

*Solution:*

Currently the above credentials class comes from "*flink-s3-fs-hadoop*", which 
gets its "aws-java-sdk-core" dependency from "*flink-s3-fs-base*". In 
"*flink-s3-fs-base*", the AWS version is 1.11.754. Support for the default 
constructor of "WebIdentityTokenCredentialsProvider" is available from AWS 
version 1.11.788 onward.

  was:
*Background:*

I am using Flink 1.11.0 on kubernetes platform. To give access of aws services 
to taskmanager/jobmanager, we are using "IAM Roles for Service Accounts" . I 
have configured below property in flink-conf.yaml to use credential provider.

fs.s3a.aws.credentials.provider: 
com.amazonaws.auth.WebIdentityTokenCredentialsProvider

 

*Issue:*

When taskmanager/jobmanager is starting up, during this it complains that 
"WebIdentityTokenCredentialsProvider" doesn't have "public constructor" and 
container doesn't come up.

 

*Solution:*

Currently the above credential's class is being used from 
"*flink-s3-fs-hadoop"* which gets "aws-java-sdk-core" dependency from 
"*flink-s3-fs-base*". In *"flink-s3-fs-base",*  version of aws 
[fs.s3.aws.version| 
[https://github.com/apache/flink/blob/release-1.11.0/flink-filesystems/flink-s3-fs-base/pom.xml#L36]
 ] is 1.11.754 . The support of default constructor for 
"WebIdentityTokenCredentialsProvider" is provided from aws version 1.11.788 and 
onward.


> Update version of aws to support use of default constructor of 
> "WebIdentityTokenCredentialsProvider"
> 
>
> Key: FLINK-18676
> URL: https://issues.apache.org/jira/browse/FLINK-18676
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Affects Versions: 1.11.0
>Reporter: Ravi Bhushan Ratnakar
>Priority: Minor
>
> *Background:*
> I am using Flink 1.11.0 on the Kubernetes platform. To give the 
> taskmanager/jobmanager access to AWS services, we are using "IAM Roles for 
> Service Accounts". I have configured the property below in flink-conf.yaml 
> to use the credential provider.
> fs.s3a.aws.credentials.provider: 
> com.amazonaws.auth.WebIdentityTokenCredentialsProvider
>  
> *Issue:*
> When the taskmanager/jobmanager starts up, it complains that 
> "WebIdentityTokenCredentialsProvider" doesn't have a "public constructor", 
> and the container doesn't come up.
>  
> *Solution:*
> Currently the above credentials class comes from "*flink-s3-fs-hadoop*", 
> which gets its "aws-java-sdk-core" dependency from "*flink-s3-fs-base*". In 
> "*flink-s3-fs-base*", the AWS version is 1.11.754. Support for the default 
> constructor of "WebIdentityTokenCredentialsProvider" is available from AWS 
> version 1.11.788 onward.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18676) Update version of aws to support use of default constructor of "WebIdentityTokenCredentialsProvider"

2020-07-22 Thread Ravi Bhushan Ratnakar (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Bhushan Ratnakar updated FLINK-18676:
--
Description: 
*Background:*

I am using Flink 1.11.0 on kubernetes platform. To give access of aws services 
to taskmanager/jobmanager, we are using "IAM Roles for Service Accounts" . I 
have configured below property in flink-conf.yaml to use credential provider.

fs.s3a.aws.credentials.provider: 
com.amazonaws.auth.WebIdentityTokenCredentialsProvider

 

*Issue:*

When taskmanager/jobmanager is starting up, during this it complains that 
"WebIdentityTokenCredentialsProvider" doesn't have "public constructor" and 
container doesn't come up.

 

*Solution:*

Currently the above credential's class is being used from 
"*flink-s3-fs-hadoop"* which gets "aws-java-sdk-core" dependency from 
"*flink-s3-fs-base*". In *"flink-s3-fs-base",*  version of aws 
[fs.s3.aws.version| 
[https://github.com/apache/flink/blob/release-1.11.0/flink-filesystems/flink-s3-fs-base/pom.xml#L36]
 ] is 1.11.754 . The support of default constructor for 
"WebIdentityTokenCredentialsProvider" is provided from aws version 1.11.788 and 
onward.

  was:
*Background:*

I am using Flink 1.11.0 on kubernetes platform. To give access of aws services 
to taskmanager/jobmanager, we are using "IAM Roles for Service Accounts" . I 
have configured below property in flink-conf.yaml to use credential provider.

fs.s3a.aws.credentials.provider: 
com.amazonaws.auth.WebIdentityTokenCredentialsProvider

 

*Issue:*

When taskmanager/jobmanager is starting up, during this it complains that 
"WebIdentityTokenCredentialsProvider" doesn't have "public constructor" and 
container doesn't come up.

 

*Solution:*

Currently the above credential's class is being used from 
"*flink-s3-fs-hadoop"* which gets "aws-java-sdk-core" dependency from 
"*flink-s3-fs-base*". In *"flink-s3-fs-base",*  version of aws 
[[fs.s3.aws.version||#L36] 
[https://github.com/apache/flink/blob/release-1.11.0/flink-filesystems/flink-s3-fs-base/pom.xml#L36]
 []|#L36] is 1.11.754 . The support of default constructor for 
"WebIdentityTokenCredentialsProvider" is provided from aws version 1.11.788 and 
onward.


> Update version of aws to support use of default constructor of 
> "WebIdentityTokenCredentialsProvider"
> 
>
> Key: FLINK-18676
> URL: https://issues.apache.org/jira/browse/FLINK-18676
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Affects Versions: 1.11.0
>Reporter: Ravi Bhushan Ratnakar
>Priority: Minor
>
> *Background:*
> I am using Flink 1.11.0 on kubernetes platform. To give access of aws 
> services to taskmanager/jobmanager, we are using "IAM Roles for Service 
> Accounts" . I have configured below property in flink-conf.yaml to use 
> credential provider.
> fs.s3a.aws.credentials.provider: 
> com.amazonaws.auth.WebIdentityTokenCredentialsProvider
>  
> *Issue:*
> When taskmanager/jobmanager is starting up, during this it complains that 
> "WebIdentityTokenCredentialsProvider" doesn't have "public constructor" and 
> container doesn't come up.
>  
> *Solution:*
> Currently the above credential's class is being used from 
> "*flink-s3-fs-hadoop"* which gets "aws-java-sdk-core" dependency from 
> "*flink-s3-fs-base*". In *"flink-s3-fs-base",*  version of aws 
> [fs.s3.aws.version| 
> [https://github.com/apache/flink/blob/release-1.11.0/flink-filesystems/flink-s3-fs-base/pom.xml#L36]
>  ] is 1.11.754 . The support of default constructor for 
> "WebIdentityTokenCredentialsProvider" is provided from aws version 1.11.788 
> and onward.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18676) Update version of aws to support use of default constructor of "WebIdentityTokenCredentialsProvider"

2020-07-22 Thread Ravi Bhushan Ratnakar (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Bhushan Ratnakar updated FLINK-18676:
--
Description: 
*Background:*

I am using Flink 1.11.0 on kubernetes platform. To give access of aws services 
to taskmanager/jobmanager, we are using "IAM Roles for Service Accounts" . I 
have configured below property in flink-conf.yaml to use credential provider.

fs.s3a.aws.credentials.provider: 
com.amazonaws.auth.WebIdentityTokenCredentialsProvider

 

*Issue:*

When taskmanager/jobmanager is starting up, during this it complains that 
"WebIdentityTokenCredentialsProvider" doesn't have "public constructor" and 
container doesn't come up.

 

*Solution:*

Currently the above credential's class is being used from 
"*flink-s3-fs-hadoop"* which gets "aws-java-sdk-core" dependency from 
"*flink-s3-fs-base*". In *"flink-s3-fs-base",*  version of aws 
[[fs.s3.aws.version||#L36] 
[https://github.com/apache/flink/blob/release-1.11.0/flink-filesystems/flink-s3-fs-base/pom.xml#L36]
 []|#L36] is 1.11.754 . The support of default constructor for 
"WebIdentityTokenCredentialsProvider" is provided from aws version 1.11.788 and 
onward.

  was:
*Background:*

I am using Flink 1.11.0 on kubernetes platform. To give access of aws services 
to taskmanager/jobmanager, we are using "IAM Roles for Service Accounts" . I 
have configured below property in flink-conf.yaml to use credential provider.

fs.s3a.aws.credentials.provider: 
com.amazonaws.auth.WebIdentityTokenCredentialsProvider

 

*Issue:*

When taskmanager/jobmanager is starting up, during this it complains that 
"WebIdentityTokenCredentialsProvider" doesn't have "public constructor" and 
container doesn't come up.

 

*Solution:*

Currently the above credential's class is being used from 
"*flink-s3-fs-hadoop"* which gets "aws-java-sdk-core" dependency from 
"*flink-s3-fs-base*". In *"flink-s3-fs-base",*  version of aws 
[fs.s3.aws.version|#L36]] is 1.11.754 . The support of default constructor for 
"WebIdentityTokenCredentialsProvider" is provided from aws version 1.11.788 and 
onward.


> Update version of aws to support use of default constructor of 
> "WebIdentityTokenCredentialsProvider"
> 
>
> Key: FLINK-18676
> URL: https://issues.apache.org/jira/browse/FLINK-18676
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Affects Versions: 1.11.0
>Reporter: Ravi Bhushan Ratnakar
>Priority: Minor
>
> *Background:*
> I am using Flink 1.11.0 on kubernetes platform. To give access of aws 
> services to taskmanager/jobmanager, we are using "IAM Roles for Service 
> Accounts" . I have configured below property in flink-conf.yaml to use 
> credential provider.
> fs.s3a.aws.credentials.provider: 
> com.amazonaws.auth.WebIdentityTokenCredentialsProvider
>  
> *Issue:*
> When taskmanager/jobmanager is starting up, during this it complains that 
> "WebIdentityTokenCredentialsProvider" doesn't have "public constructor" and 
> container doesn't come up.
>  
> *Solution:*
> Currently the above credential's class is being used from 
> "*flink-s3-fs-hadoop"* which gets "aws-java-sdk-core" dependency from 
> "*flink-s3-fs-base*". In *"flink-s3-fs-base",*  version of aws 
> [[fs.s3.aws.version||#L36] 
> [https://github.com/apache/flink/blob/release-1.11.0/flink-filesystems/flink-s3-fs-base/pom.xml#L36]
>  []|#L36] is 1.11.754 . The support of default constructor for 
> "WebIdentityTokenCredentialsProvider" is provided from aws version 1.11.788 
> and onward.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18676) Update version of aws to support use of default constructor of "WebIdentityTokenCredentialsProvider"

2020-07-22 Thread Ravi Bhushan Ratnakar (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Bhushan Ratnakar updated FLINK-18676:
--
Description: 
*Background:*

I am using Flink 1.11.0 on kubernetes platform. To give access of aws services 
to taskmanager/jobmanager, we are using "IAM Roles for Service Accounts" . I 
have configured below property in flink-conf.yaml to use credential provider.

fs.s3a.aws.credentials.provider: 
com.amazonaws.auth.WebIdentityTokenCredentialsProvider

 

*Issue:*

When taskmanager/jobmanager is starting up, during this it complains that 
"WebIdentityTokenCredentialsProvider" doesn't have "public constructor" and 
container doesn't come up.

 

*Solution:*

Currently the above credential's class is being used from 
"*flink-s3-fs-hadoop"* which gets "aws-java-sdk-core" dependency from 
"*flink-s3-fs-base*". In *"flink-s3-fs-base",*  version of aws 
[fs.s3.aws.version|#L36]] is 1.11.754 . The support of default constructor for 
"WebIdentityTokenCredentialsProvider" is provided from aws version 1.11.788 and 
onward.

  was:
*Background:*

I am using Flink 1.11.0 on kubernetes platform. To give access of aws services 
to taskmanager/jobmanager, we are using "IAM Roles for Service Accounts" . I 
have configured below property in flink-conf.yaml to use credential provider.

fs.s3a.aws.credentials.provider: 
com.amazonaws.auth.WebIdentityTokenCredentialsProvider

 

*Issue:*

When taskmanager/jobmanager is starting up, during this it complains that 
"WebIdentityTokenCredentialsProvider" doesn't have "public constructor" and 
container doesn't come up.

 

*Solution:*

Currently the above credential's class is being used from 
"*flink-s3-fs-hadoop"* which gets "aws-java-sdk-core" dependency from 
"*flink-s3-fs-base*". In *"flink-s3-fs-base",*  version of aws 
[fs.s3.aws.version|[https://github.com/apache/flink/blob/release-1.11.0/flink-filesystems/flink-s3-fs-base/pom.xml#L36]]
 is 1.11.754 . The support of default constructor for 
"WebIdentityTokenCredentialsProvider" is provided from aws version 1.11.788 and 
onward.


> Update version of aws to support use of default constructor of 
> "WebIdentityTokenCredentialsProvider"
> 
>
> Key: FLINK-18676
> URL: https://issues.apache.org/jira/browse/FLINK-18676
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Affects Versions: 1.11.0
>Reporter: Ravi Bhushan Ratnakar
>Priority: Minor
>
> *Background:*
> I am using Flink 1.11.0 on kubernetes platform. To give access of aws 
> services to taskmanager/jobmanager, we are using "IAM Roles for Service 
> Accounts" . I have configured below property in flink-conf.yaml to use 
> credential provider.
> fs.s3a.aws.credentials.provider: 
> com.amazonaws.auth.WebIdentityTokenCredentialsProvider
>  
> *Issue:*
> When taskmanager/jobmanager is starting up, during this it complains that 
> "WebIdentityTokenCredentialsProvider" doesn't have "public constructor" and 
> container doesn't come up.
>  
> *Solution:*
> Currently the above credential's class is being used from 
> "*flink-s3-fs-hadoop"* which gets "aws-java-sdk-core" dependency from 
> "*flink-s3-fs-base*". In *"flink-s3-fs-base",*  version of aws 
> [fs.s3.aws.version|#L36]] is 1.11.754 . The support of default constructor 
> for "WebIdentityTokenCredentialsProvider" is provided from aws version 
> 1.11.788 and onward.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18676) Update version of aws to support use of default constructor of "WebIdentityTokenCredentialsProvider"

2020-07-22 Thread Ravi Bhushan Ratnakar (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Bhushan Ratnakar updated FLINK-18676:
--
Description: 
*Background:*

I am using Flink 1.11.0 on the Kubernetes platform. To give the
taskmanager/jobmanager access to AWS services, we are using "IAM Roles for
Service Accounts". I have configured the property below in flink-conf.yaml to
use the credential provider.

fs.s3a.aws.credentials.provider: 
com.amazonaws.auth.WebIdentityTokenCredentialsProvider

 

*Issue:*

When the taskmanager/jobmanager starts up, it complains that
"WebIdentityTokenCredentialsProvider" doesn't have a public constructor, and
the container doesn't come up.

 

*Solution:*

Currently the above credentials class is used from "*flink-s3-fs-hadoop*",
which gets the "aws-java-sdk-core" dependency from "*flink-s3-fs-base*". In
*"flink-s3-fs-base"*, the AWS version
[fs.s3.aws.version|https://github.com/apache/flink/blob/release-1.11.0/flink-filesystems/flink-s3-fs-base/pom.xml#L36]
is 1.11.754. Support for the default constructor of
"WebIdentityTokenCredentialsProvider" is provided from AWS SDK version 1.11.788
onward.

  was:
*Background:*

I am using Flink 1.11.0 on the Kubernetes platform. To give the
taskmanager/jobmanager access to AWS services, we are using "IAM Roles for
Service Accounts". I have configured the property below in flink-conf.yaml to
use the credential provider.

fs.s3a.aws.credentials.provider: 
com.amazonaws.auth.WebIdentityTokenCredentialsProvider

 

*Issue:*

When the taskmanager/jobmanager starts up, it complains that
"WebIdentityTokenCredentialsProvider" doesn't have a public constructor, and
the container doesn't come up.

 

*Solution:*

Currently the above credentials class is used from "*flink-s3-fs-hadoop*",
which gets the "aws-java-sdk-core" dependency from "*flink-s3-fs-base*". In
*"flink-s3-fs-base"*, the AWS version
[fs.s3.aws.version|https://github.com/apache/flink/blob/release-1.11.0/flink-filesystems/flink-s3-fs-base/pom.xml#L36]
is 1.11.754. Support for the default constructor of
"WebIdentityTokenCredentialsProvider" is provided from AWS SDK version 1.11.788
onward.


> Update version of aws to support use of default constructor of 
> "WebIdentityTokenCredentialsProvider"
> 
>
> Key: FLINK-18676
> URL: https://issues.apache.org/jira/browse/FLINK-18676
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Affects Versions: 1.11.0
>Reporter: Ravi Bhushan Ratnakar
>Priority: Minor
>
> *Background:*
> I am using Flink 1.11.0 on the Kubernetes platform. To give the
> taskmanager/jobmanager access to AWS services, we are using "IAM Roles for
> Service Accounts". I have configured the property below in flink-conf.yaml to
> use the credential provider.
> fs.s3a.aws.credentials.provider: 
> com.amazonaws.auth.WebIdentityTokenCredentialsProvider
>  
> *Issue:*
> When the taskmanager/jobmanager starts up, it complains that
> "WebIdentityTokenCredentialsProvider" doesn't have a public constructor, and
> the container doesn't come up.
>  
> *Solution:*
> Currently the above credentials class is used from "*flink-s3-fs-hadoop*",
> which gets the "aws-java-sdk-core" dependency from "*flink-s3-fs-base*". In
> *"flink-s3-fs-base"*, the AWS version
> [fs.s3.aws.version|https://github.com/apache/flink/blob/release-1.11.0/flink-filesystems/flink-s3-fs-base/pom.xml#L36]
> is 1.11.754. Support for the default constructor of
> "WebIdentityTokenCredentialsProvider" is provided from AWS SDK version
> 1.11.788 onward.





[jira] [Updated] (FLINK-18676) Update version of aws to support use of default constructor of "WebIdentityTokenCredentialsProvider"

2020-07-22 Thread Ravi Bhushan Ratnakar (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Bhushan Ratnakar updated FLINK-18676:
--
Description: 
*Background:*

I am using Flink 1.11.0 on the Kubernetes platform. To give the
taskmanager/jobmanager access to AWS services, we are using "IAM Roles for
Service Accounts". I have configured the property below in flink-conf.yaml to
use the credential provider.

fs.s3a.aws.credentials.provider: 
com.amazonaws.auth.WebIdentityTokenCredentialsProvider

 

*Issue:*

When the taskmanager/jobmanager starts up, it complains that
"WebIdentityTokenCredentialsProvider" doesn't have a public constructor, and
the container doesn't come up.

 

*Solution:*

Currently the above credentials class is used from "*flink-s3-fs-hadoop*",
which gets the "aws-java-sdk-core" dependency from "*flink-s3-fs-base*". In
*"flink-s3-fs-base"*, the AWS version
[fs.s3.aws.version|https://github.com/apache/flink/blob/release-1.11.0/flink-filesystems/flink-s3-fs-base/pom.xml#L36]
is 1.11.754. Support for the default constructor of
"WebIdentityTokenCredentialsProvider" is provided from AWS SDK version 1.11.788
onward.

  was:
*Background:*

I am using Flink 1.11.0 on the Kubernetes platform. To give the
taskmanager/jobmanager access to AWS services, we are using "IAM Roles for
Service Accounts". I have configured the property below in flink-conf.yaml to
use the credential provider.

fs.s3a.aws.credentials.provider: 
com.amazonaws.auth.WebIdentityTokenCredentialsProvider

 

*Issue:*

When the taskmanager/jobmanager starts up, it complains that
"WebIdentityTokenCredentialsProvider" doesn't have a public constructor, and
the container doesn't come up.

 

*Solution:*

Currently the above credentials class is used from "*flink-s3-fs-hadoop*",
which gets the "aws-java-sdk-core" dependency from "*flink-s3-fs-base*". In
*"flink-s3-fs-base"*, the AWS version
[fs.s3.aws.version|https://github.com/apache/flink/blob/release-1.11.0/flink-filesystems/flink-s3-fs-base/pom.xml#L36]
is 1.11.754. Support for the default constructor of
"WebIdentityTokenCredentialsProvider" is provided from AWS SDK version 1.11.788
onward.


> Update version of aws to support use of default constructor of 
> "WebIdentityTokenCredentialsProvider"
> 
>
> Key: FLINK-18676
> URL: https://issues.apache.org/jira/browse/FLINK-18676
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Affects Versions: 1.11.0
>Reporter: Ravi Bhushan Ratnakar
>Priority: Minor
>
> *Background:*
> I am using Flink 1.11.0 on the Kubernetes platform. To give the
> taskmanager/jobmanager access to AWS services, we are using "IAM Roles for
> Service Accounts". I have configured the property below in flink-conf.yaml to
> use the credential provider.
> fs.s3a.aws.credentials.provider: 
> com.amazonaws.auth.WebIdentityTokenCredentialsProvider
>  
> *Issue:*
> When the taskmanager/jobmanager starts up, it complains that
> "WebIdentityTokenCredentialsProvider" doesn't have a public constructor, and
> the container doesn't come up.
>  
> *Solution:*
> Currently the above credentials class is used from "*flink-s3-fs-hadoop*",
> which gets the "aws-java-sdk-core" dependency from "*flink-s3-fs-base*". In
> *"flink-s3-fs-base"*, the AWS version
> [fs.s3.aws.version|https://github.com/apache/flink/blob/release-1.11.0/flink-filesystems/flink-s3-fs-base/pom.xml#L36]
> is 1.11.754. Support for the default constructor of
> "WebIdentityTokenCredentialsProvider" is provided from AWS SDK version
> 1.11.788 onward.





[jira] [Created] (FLINK-18676) Update version of aws to support use of default constructor of "WebIdentityTokenCredentialsProvider"

2020-07-22 Thread Ravi Bhushan Ratnakar (Jira)
Ravi Bhushan Ratnakar created FLINK-18676:
-

 Summary: Update version of aws to support use of default 
constructor of "WebIdentityTokenCredentialsProvider"
 Key: FLINK-18676
 URL: https://issues.apache.org/jira/browse/FLINK-18676
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / FileSystem
Affects Versions: 1.11.0
Reporter: Ravi Bhushan Ratnakar


*Background:*

I am using Flink 1.11.0 on the Kubernetes platform. To give the
taskmanager/jobmanager access to AWS services, we are using "IAM Roles for
Service Accounts". I have configured the property below in flink-conf.yaml to
use the credential provider.

fs.s3a.aws.credentials.provider: 
com.amazonaws.auth.WebIdentityTokenCredentialsProvider

 

*Issue:*

When the taskmanager/jobmanager starts up, it complains that
"WebIdentityTokenCredentialsProvider" doesn't have a public constructor, and
the container doesn't come up.

 

*Solution:*

Currently the above credentials class is used from "*flink-s3-fs-hadoop*",
which gets the "aws-java-sdk-core" dependency from "*flink-s3-fs-base*". In
*"flink-s3-fs-base"*, the AWS version
[fs.s3.aws.version|https://github.com/apache/flink/blob/release-1.11.0/flink-filesystems/flink-s3-fs-base/pom.xml#L36]
is 1.11.754. Support for the default constructor of
"WebIdentityTokenCredentialsProvider" is provided from AWS SDK version 1.11.788
onward.
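The proposed fix amounts to bumping the SDK version property in the flink-s3-fs-base pom. A sketch of the change, assuming the property name from the linked pom and the minimum version cited in this report (1.11.788; any later SDK release should also work):

```xml
<!-- flink-filesystems/flink-s3-fs-base/pom.xml (sketch, not the actual diff) -->
<properties>
    <!-- bump from 1.11.754 so that WebIdentityTokenCredentialsProvider
         gains a public no-arg constructor (added in 1.11.788) -->
    <fs.s3.aws.version>1.11.788</fs.s3.aws.version>
</properties>
```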





[GitHub] [flink] sjwiesman commented on a change in pull request #12957: [FLINK-18500][table] Make the legacy planner exception more clear whe…

2020-07-22 Thread GitBox


sjwiesman commented on a change in pull request #12957:
URL: https://github.com/apache/flink/pull/12957#discussion_r458966544



##
File path: 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/ParserImpl.java
##
@@ -83,6 +83,6 @@ public UnresolvedIdentifier parseIdentifier(String 
identifier) {
 
@Override
public ResolvedExpression parseSqlExpression(String sqlExpression, 
TableSchema inputSchema) {
-   throw new UnsupportedOperationException();
+   throw new UnsupportedOperationException("Computed column is 
only supported by Blink planner.");

Review comment:
   ```suggestion
throw new UnsupportedOperationException("Computed columns are 
only supported by the Blink planner.");
   ```





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-11143) AskTimeoutException is thrown during job submission and completion

2020-07-22 Thread Steven Zhen Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-11143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162965#comment-17162965
 ] 

Steven Zhen Wu commented on FLINK-11143:


[~trohrmann] thanks a lot for looking into it. Increasing `client.timeout` 
helped. Yeah, it is a little confusing. At the least, it would help to point 
out the `web.timeout` vs `client.timeout` nuance in the release notes.
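For anyone hitting the same confusion: both keys go in flink-conf.yaml. A sketch with illustrative values (not recommendations):

```yaml
# flink-conf.yaml -- illustrative values only
# web.timeout governs asynchronous operations triggered through the REST
# endpoint, in milliseconds; client.timeout (Flink 1.11+) governs
# client-side operations such as job submission from the CLI.
web.timeout: 600000
client.timeout: 120 s
```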

> AskTimeoutException is thrown during job submission and completion
> --
>
> Key: FLINK-11143
> URL: https://issues.apache.org/jira/browse/FLINK-11143
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.6.2, 1.10.0
>Reporter: Alex Vinnik
>Priority: Critical
> Attachments: flink-job-timeline.PNG
>
>
> For more details please see the thread
> [http://mail-archives.apache.org/mod_mbox/flink-user/201812.mbox/%3cc2fb26f9-1410-4333-80f4-34807481b...@gmail.com%3E]
> On submission 
> 2018-12-12 02:28:31 ERROR JobsOverviewHandler:92 - Implementation error: 
> Unhandled exception.
>  akka.pattern.AskTimeoutException: Ask timed out on 
> [Actor[akka://flink/user/dispatcher#225683351|#225683351]] after [1 ms]. 
> Sender[null] sent message of type 
> "org.apache.flink.runtime.rpc.messages.LocalFencedMessage".
>  at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:604)
>  at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:126)
>  at 
> scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
>  at 
> scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)
>  at 
> scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
>  at 
> akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:329)
>  at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:280)
>  at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:284)
>  at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:236)
>  at java.lang.Thread.run(Thread.java:748)
>  
> On completion
>  
> {"errors":["Internal server error."," side:\njava.util.concurrent.CompletionException: 
> akka.pattern.AskTimeoutException: Ask timed out on 
> [Actor[akka://flink/user/dispatcher#105638574]] after [1 ms]. 
> Sender[null] sent message of type 
> \"org.apache.flink.runtime.rpc.messages.LocalFencedMessage\".
> at 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
> at 
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
> at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593)
> at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
> at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
> at 
> java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
> at 
> org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:772)
> at akka.dispatch.OnComplete.internal(Future.scala:258)
> at akka.dispatch.OnComplete.internal(Future.scala:256)
> at akka.dispatch.japi$CallbackBridge.apply(Future.scala:186)
> at akka.dispatch.japi$CallbackBridge.apply(Future.scala:183)
> at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
> at 
> org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:83)
> at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
> at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
> at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:603)
> at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:126)
> at 
> scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
> at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)
> at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
> at 
> akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:329)
> at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:280)
> at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:284)
> at 
> akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:236)
> at java.lang.Thread.run(Thread.java:748)\nCaused by: 
> akka.pattern.AskTimeoutException: Ask timed out on 
> [Actor[akka://flink/user/dispatcher#105638574]] after [1 ms]. 
> Sender[null] sent message of type 
> \"org.apache.flink.runtime.rpc.message

[GitHub] [flink] flinkbot edited a comment on pull request #12963: [FLINK-18421][checkpointing] Fix logging of RejectedExecutionException logging

2020-07-22 Thread GitBox


flinkbot edited a comment on pull request #12963:
URL: https://github.com/apache/flink/pull/12963#issuecomment-662568757


   
   ## CI report:
   
   * 26cbc622c2ed413d4e14bbed17e39fb9dd449a80 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4741)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-18158) Add a utility to create a DDL statement from avro schema

2020-07-22 Thread Chen Qin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162956#comment-17162956
 ] 

Chen Qin commented on FLINK-18158:
--

[~twalthr] what if the user has a nested struct definition in a protobuf/thrift schema?

struct {

   map>> property.

}

> Add a utility to create a DDL statement from avro schema
> 
>
> Key: FLINK-18158
> URL: https://issues.apache.org/jira/browse/FLINK-18158
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Dawid Wysakowicz
>Priority: Major
>
> User asked if there is a way to create a TableSchema/Table originating from 
> avro schema. 
> https://lists.apache.org/thread.html/r9bd43449314230fad0b627a170db05284c9727371092fc275fc05b74%40%3Cuser.flink.apache.org%3E





[GitHub] [flink] flinkbot commented on pull request #12963: [FLINK-18421][checkpointing] Fix logging of RejectedExecutionException logging

2020-07-22 Thread GitBox


flinkbot commented on pull request #12963:
URL: https://github.com/apache/flink/pull/12963#issuecomment-662568757


   
   ## CI report:
   
   * 26cbc622c2ed413d4e14bbed17e39fb9dd449a80 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] gaoyunhaii commented on pull request #8357: [FLINK-12175] Change filling of typeHierarchy in analyzePojo, for cor…

2020-07-22 Thread GitBox


gaoyunhaii commented on pull request #8357:
URL: https://github.com/apache/flink/pull/8357#issuecomment-662562203


   Many thanks @andbul and @dawidwys for the PR! I also agree that we 
could remove the condition and always compute the type hierarchy. There are 
cases where the hierarchy gets repeated; for example, if we have 
   ```
   class MyTuple extends Tuple2<String, String> {
       public String anotherField;
   }
   ```
   We will first try to analyze it as a Tuple and compute the hierarchy from 
MyTuple -> Tuple2; however, that fails because of the additional field, so we 
then turn to `analyzePojo`, and if we compute the hierarchy from MyTuple -> 
Object again, the hierarchy will be repeated.
   
   However, I don't think the repetition will cause errors. Besides, it is hard 
to detect repetition from the size of the hierarchy alone, since MyTuple -> 
Object may contain an arbitrary number of types.
   
   I also left some comments on the tests.
   






