[jira] [Created] (FLINK-22624) Default resource allocation strategy will allocate more pending task managers than demand

2021-05-10 Thread Yangze Guo (Jira)
Yangze Guo created FLINK-22624:
--

 Summary: Default resource allocation strategy will allocate more 
pending task managers than demand
 Key: FLINK-22624
 URL: https://issues.apache.org/jira/browse/FLINK-22624
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.13.0
Reporter: Yangze Guo
 Fix For: 1.14.0, 1.13.1


When the {{DefaultResourceAllocationStrategy}} tries to fulfill a requirement 
by allocating new pending task managers, the remaining resources of those task 
managers will never be used to fulfill other requirements, which hurts resource 
utilization.
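To illustrate the intended behavior, here is a hypothetical sketch (the class and method names are illustrative, not Flink's actual {{DefaultResourceAllocationStrategy}} API): a requirement should first be matched against the remaining capacity of already-pending task managers, and only on a miss should a new one be allocated.

```java
import java.util.ArrayList;
import java.util.List;

public class PendingPoolSketch {
    static class PendingTaskManager {
        int freeSlots;
        PendingTaskManager(int slots) { this.freeSlots = slots; }
    }

    /** Fulfill each requirement from existing pending TMs first; allocate a new one only on a miss. */
    static int fulfill(int[] slotRequirements, int slotsPerTaskManager) {
        List<PendingTaskManager> pending = new ArrayList<>();
        for (int required : slotRequirements) {
            boolean matched = false;
            for (PendingTaskManager tm : pending) {
                if (tm.freeSlots >= required) {
                    tm.freeSlots -= required;
                    matched = true;
                    break;
                }
            }
            if (!matched) {
                PendingTaskManager tm = new PendingTaskManager(slotsPerTaskManager);
                tm.freeSlots -= required;
                pending.add(tm);
            }
        }
        return pending.size(); // number of task managers actually needed
    }

    public static void main(String[] args) {
        // Three requirements of 1 slot each fit into a single 4-slot pending TM.
        System.out.println(fulfill(new int[] {1, 1, 1}, 4)); // prints 1
    }
}
```

The reported bug corresponds to the inner matching loop being absent: each requirement would then allocate a fresh pending task manager, leaving the remaining slots unused.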



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[VOTE] Release 1.12.4, release candidate #1

2021-05-10 Thread Arvid Heise
Hi everyone,

Please review and vote on the release candidate #1 for the version 1.12.4,
as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)

The complete staging area is available for your review, which includes:
* JIRA release notes [1],
* the official Apache source release and binary convenience releases to be
deployed to dist.apache.org [2], which are signed with the key with
fingerprint 476DAA5D1FF08189 [3],
* all artifacts to be deployed to the Maven Central Repository [4],
* source code tag "release-1.12.4-rc1" [5],
* website pull request listing the new release and adding announcement blog
post [6].

The vote will be open for at least 72 hours. It is adopted by majority
approval, with at least 3 PMC affirmative votes.

Thanks,
Your friendly release manager Arvid

[1]
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12350110
[2] https://dist.apache.org/repos/dist/dev/flink/flink-1.12.4-rc1/
[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4] https://repository.apache.org/content/repositories/orgapacheflink-1421
[5] https://github.com/apache/flink/releases/tag/release-1.12.4-rc1
[6] https://github.com/apache/flink-web/pull/446


[jira] [Created] (FLINK-22623) Drop BatchTableSource HBaseTableSource

2021-05-10 Thread Timo Walther (Jira)
Timo Walther created FLINK-22623:


 Summary: Drop BatchTableSource HBaseTableSource
 Key: FLINK-22623
 URL: https://issues.apache.org/jira/browse/FLINK-22623
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Ecosystem, Table SQL / Legacy Planner
Reporter: Timo Walther
Assignee: Timo Walther


The BatchTableSource interface is not supported by the Blink planner. 
Therefore, we drop this class to unblock the removal of the legacy planner. An 
alternative HBase connector using the new interfaces is available.





[jira] [Created] (FLINK-22622) Drop BatchTableSource ParquetTableSource

2021-05-10 Thread Timo Walther (Jira)
Timo Walther created FLINK-22622:


 Summary: Drop BatchTableSource ParquetTableSource
 Key: FLINK-22622
 URL: https://issues.apache.org/jira/browse/FLINK-22622
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Ecosystem, Table SQL / Legacy Planner
Reporter: Timo Walther
Assignee: Timo Walther


The BatchTableSource interface is not supported by the Blink planner. 
Therefore, we drop this class to unblock the removal of the legacy planner. An 
alternative Parquet source and sink using the new interfaces are available.





[jira] [Created] (FLINK-22621) HBaseConnectorITCase.testTableSourceSinkWithDDL unstable on azure

2021-05-10 Thread Roman Khachatryan (Jira)
Roman Khachatryan created FLINK-22621:
-

 Summary: HBaseConnectorITCase.testTableSourceSinkWithDDL unstable 
on azure
 Key: FLINK-22621
 URL: https://issues.apache.org/jira/browse/FLINK-22621
 Project: Flink
  Issue Type: Bug
  Components: Connectors / HBase
Affects Versions: 1.13.0
Reporter: Roman Khachatryan


[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=17763&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=03dca39c-73e8-5aaf-601d-328ae5c35f20&l=12317]

 
{code:java}
2021-05-10T00:19:41.1703846Z May 10 00:19:41 testTableSourceSinkWithDDL[planner 
= BLINK_PLANNER, legacy = 
false](org.apache.flink.connector.hbase2.HBaseConnectorITCase)  Time elapsed: 
2.907 sec  <<< FAILURE!
2021-05-10T00:19:41.1711710Z May 10 00:19:41 java.lang.AssertionError: 
expected:<[+I[1, 10, Hello-1, 100, 1.01, false, Welt-1, 2019-08-18T19:00, 
2019-08-18, 19:00, 12345678.0001], +I[2, 20, Hello-2, 200, 2.02, true, Welt-2, 
2019-08-18T19:01, 2019-08-18, 19:01, 12345678.0002], +I[3, 30, Hello-3, 300, 
3.03, false, Welt-3, 2019-08-18T19:02, 2019-08-18, 19:02, 12345678.0003], +I[4, 
40, null, 400, 4.04, true, Welt-4, 2019-08-18T19:03, 2019-08-18, 19:03, 
12345678.0004], +I[5, 50, Hello-5, 500, 5.05, false, Welt-5, 2019-08-19T19:10, 
2019-08-19, 19:10, 12345678.0005], +I[6, 60, Hello-6, 600, 6.06, true, Welt-6, 
2019-08-19T19:20, 2019-08-19, 19:20, 12345678.0006], +I[7, 70, Hello-7, 700, 
7.07, false, Welt-7, 2019-08-19T19:30, 2019-08-19, 19:30, 12345678.0007], 
+I[8, 80, null, 800, 8.08, true, Welt-8, 2019-08-19T19:40, 2019-08-19, 19:40, 
12345678.0008]]> but was:<[+I[1, 10, Hello-1, 100, 1.01, false, Welt-1, 
2019-08-18T19:00, 2019-08-18, 19:00, 12345678.0001], +I[2, 20, Hello-2, 200, 
2.02, true, Welt-2, 2019-08-18T19:01, 2019-08-18, 19:01, 12345678.0002], +I[3, 
30, Hello-3, 300, 3.03, false, Welt-3, 2019-08-18T19:02, 2019-08-18, 19:02, 
12345678.0003]]>
2021-05-10T00:19:41.1716769Z May 10 00:19:41at 
org.junit.Assert.fail(Assert.java:88)
2021-05-10T00:19:41.1717997Z May 10 00:19:41at 
org.junit.Assert.failNotEquals(Assert.java:834)
2021-05-10T00:19:41.1718744Z May 10 00:19:41at 
org.junit.Assert.assertEquals(Assert.java:118)
2021-05-10T00:19:41.1719472Z May 10 00:19:41at 
org.junit.Assert.assertEquals(Assert.java:144)
2021-05-10T00:19:41.1720270Z May 10 00:19:41at 
org.apache.flink.connector.hbase2.HBaseConnectorITCase.testTableSourceSinkWithDDL(HBaseConnectorITCase.java:506)
 {code}
Probably the same or similar to FLINK-19615





Re: About the windowOperator and Watermark

2021-05-10 Thread Dawid Wysakowicz
Hi,

When a Watermark arrives the window operator will emit all windows that
are considered finished at the time of the Watermark. In your example
(assuming both windows are finished) they will both be emitted.

Best,

Dawid

On 08/05/2021 08:03, 曲洋 wrote:
> Hi Experts,
>
> Given that a window in the stream is configured with a short window size like
> timeWindow(3s), and I am using event time and a periodic watermark.
> The stream input is [watermark(7) | 6, 5, 3, 4, 1, 2],
> and two windows, (3, 1, 2) and (6, 5, 4), are created before watermark(7) arrives.
> In this situation, when the current watermark is received,
> which windows, and how many, will be triggered to fire and emit?
> My question is: what will the windowOperator do when two parallel
> windows both have end timestamps smaller than the current watermark?
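The rule above can be sketched without any Flink dependencies (a simplified model, not Flink's actual WindowOperator; note that with Flink's aligned tumbling windows the input groups as [0,3)={1,2}, [3,6)={3,4,5}, [6,9)={6}, slightly different from the grouping in the question): every window whose end timestamp is at or below the arriving watermark fires.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;

public class WindowFiringSketch {
    /**
     * Assign each event-time element to its tumbling window start, then fire
     * all windows whose end <= watermark. Returns the fired windows in order.
     */
    static List<List<Long>> onWatermark(long[] eventTimes, long windowSize, long watermark) {
        SortedMap<Long, List<Long>> windows = new TreeMap<>();
        for (long t : eventTimes) {
            long start = t - (t % windowSize); // aligned tumbling window assignment
            windows.computeIfAbsent(start, k -> new ArrayList<>()).add(t);
        }
        List<List<Long>> fired = new ArrayList<>();
        for (Map.Entry<Long, List<Long>> e : windows.entrySet()) {
            if (e.getKey() + windowSize <= watermark) {
                fired.add(e.getValue());
            }
        }
        return fired;
    }

    public static void main(String[] args) {
        // The question's input: elements 6, 5, 3, 4, 1, 2 followed by watermark(7).
        // The two windows ending at 3 and 6 both fire; the window containing 6 waits.
        System.out.println(onWatermark(new long[] {6, 5, 3, 4, 1, 2}, 3, 7));
    }
}
```

Here watermark(7) fires both earlier windows at once, which matches the answer above: all finished windows are emitted when the watermark arrives.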





[jira] [Created] (FLINK-22620) Uncouple OrcTableSource from legacy planner

2021-05-10 Thread Timo Walther (Jira)
Timo Walther created FLINK-22620:


 Summary: Uncouple OrcTableSource from legacy planner
 Key: FLINK-22620
 URL: https://issues.apache.org/jira/browse/FLINK-22620
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Ecosystem, Table SQL / Legacy Planner
Reporter: Timo Walther
Assignee: Timo Walther


Not sure how we resolve this, but currently the OrcTableSource is strongly 
coupled with the legacy planner.





[jira] [Created] (FLINK-22619) Drop usages of BatchTableEnvironment

2021-05-10 Thread Timo Walther (Jira)
Timo Walther created FLINK-22619:


 Summary: Drop usages of BatchTableEnvironment
 Key: FLINK-22619
 URL: https://issues.apache.org/jira/browse/FLINK-22619
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Legacy Planner
Reporter: Timo Walther
Assignee: Timo Walther


Drop usages of BatchTableEnvironment in the project.





Re: [DISCUSS] Releasing Flink 1.12.4

2021-05-10 Thread Till Rohrmann
+1, thanks Arvid.

Cheers,
Till

On Mon, May 10, 2021 at 5:08 AM Thomas Weise  wrote:

> +1
>
> On Fri, May 7, 2021 at 7:00 AM Konstantin Knauf  wrote:
>
> > +1. Thanks Arvid.
> >
> > On Fri, May 7, 2021 at 3:56 PM Chesnay Schepler 
> > wrote:
> >
> > > +1 for releasing 1.12.4 ASAP.
> > >
> > > On 5/7/2021 3:46 PM, Arvid Heise wrote:
> > > > Dear devs,
> > > >
> > > > To address the licencing issue of 1.12.3 [1] and the misbundled scala
> > > 2.12,
> > > > I'm proposing to start releasing 1.12.4 on next Monday.
> > > >
> > > > I'd volunteer as a release manager again.
> > > >
> > > > Best,
> > > >
> > > > Arvid
> > > >
> > > > [1] https://issues.apache.org/jira/browse/FLINK-22555
> > > >
> > >
> > >
> >
> > --
> >
> > Konstantin Knauf
> >
> > https://twitter.com/snntrable
> >
> > https://github.com/knaufk
> >
>


[jira] [Created] (FLINK-22618) Fix incorrect free resource metrics of task managers

2021-05-10 Thread Yangze Guo (Jira)
Yangze Guo created FLINK-22618:
--

 Summary: Fix incorrect free resource metrics of task managers
 Key: FLINK-22618
 URL: https://issues.apache.org/jira/browse/FLINK-22618
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.13.0
Reporter: Yangze Guo
 Fix For: 1.14.0, 1.13.1


In FLINK-21177, {{SlotManager#getFreeResourceOf}} wrongly returns the total 
resource.





Re: Task Local Recovery with mountable disks in the cloud

2021-05-10 Thread Till Rohrmann
Hi Sonam,

I think it would be great to create a FLIP for this feature. FLIPs don't
have to be super large and in this case, I could see it work to express the
general idea to make local recovery work across TaskManager failures and
then outline the different ideas we had so far. If we then decide to go
with the persisting of cache information (the AllocationIDs), then this
could be a good outcome. If we decide to go with the more complex solution
of telling the ResourceManager and JobMaster about the ranges of cached
state data, then this is also ok.

Cheers,
Till

On Fri, May 7, 2021 at 6:30 PM Sonam Mandal  wrote:

> Hi Till,
>
> Thanks for getting back to me. Apologies for my delayed response.
>
> Thanks for confirming that the slot ID (Allocation ID) is indeed necessary
> today for task local recovery to kick in, and thanks for your insights on
> how to make this work.
>
> We are interested in exploring this disaggregation between local state
> storage and slots to allow potential reuse of local state even when TMs go
> down.
>
> I'm planning to spend some time exploring the Flink code around local
> recovery and state persistence. I'm still new to Flink, so any guidance
> will be helpful. I think both of your ideas on how to make this happen
> are interesting and worth exploring. What's the procedure to collaborate or
> get guidance on this feature? Will a FLIP be required, or will opening a
> ticket do?
>
> Thanks,
> Sonam
> --
> *From:* Till Rohrmann 
> *Sent:* Monday, April 26, 2021 10:24 AM
> *To:* dev 
> *Cc:* u...@flink.apache.org ; Sonam Mandal <
> soman...@linkedin.com>
> *Subject:* Re: Task Local Recovery with mountable disks in the cloud
>
> Hi Sonam,
>
> sorry for the late reply. We were a bit caught in the midst of the feature
> freeze for the next major Flink release.
>
> In general, I think it is a very good idea to disaggregate the local state
> storage to make it reusable across TaskManager failures. However, it is
> also not trivial to do.
>
> Maybe let me first describe how the current task local recovery works and
> then see how we could improve it:
>
> Flink creates for every slot allocation an AllocationID. The AllocationID
> associates a slot on a TaskExecutor with a job and is also used for scoping
> the lifetime of a slot wrt a job (theoretically, one and the same slot
> could be used to fulfill multiple slot requests of the same job if the slot
> allocation is freed in between). Note that the AllocationID is a random ID
> and, thus, changes whenever the ResourceManager allocates a new slot on a
> TaskExecutor for a job.
>
> Task local recovery is effectively a state cache which is associated with
> an AllocationID. So for every checkpoint and every task, a TaskExecutor
> copies the state data and stores them in the task local recovery cache. The
> cache is maintained as long as the slot allocation is valid (e.g. the slot
> has not been freed by the JobMaster and the slot has not timed out). This
> makes the lifecycle management of the state data quite easy and makes sure
> that a process does not clutter local disks. On the JobMaster side, Flink
> remembers for every Execution, where it is deployed (it remembers the
> AllocationID). If a failover happens, then Flink tries to re-deploy the
> Executions into the slots they were running in before by matching the
> AllocationIDs.
>
> The reason why we scoped the state cache to an AllocationID was for
> simplicity and because we couldn't guarantee that a failed TaskExecutor X
> will be restarted on the same machine again and thereby having access to
> the same local disk as before. That's also why Flink deletes the cache
> directory when a slot is freed or when the TaskExecutor is shut down
> gracefully.
>
> With persistent volumes this changes and we can make the TaskExecutors
> "stateful" in the sense that we can reuse an already occupied cache. One
> rather simple idea could be to also persist the slot allocations of a
> TaskExecutor (which slot is allocated and what is its assigned
> AllocationID). This information could be used to re-initialize the
> TaskExecutor upon restart. That way, it does not have to register at the
> ResourceManager and wait for new slot allocations but could directly start
> offering its slots to the jobs it remembered. If the TaskExecutor cannot
> find the JobMasters for the respective jobs, it would then free the slots
> and clear the cache accordingly.
>
> This could work as long as the ResourceManager does not start new
> TaskExecutors whose slots could be used to recover the job. If this is a
> problem, then one needs to answer the question how long to wait for the old
> TaskExecutors to come back and reusing their local state vs. starting
> quickly a fresh instance but having to restore state remotely.
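The mechanism described above can be sketched as follows (plain Java with illustrative names; a simplified model of the idea, not Flink's actual SlotManager or JobMaster code): local state is cached per AllocationID, and on failover local state is reused only if the Execution is redeployed into its previous allocation.

```java
import java.util.HashMap;
import java.util.Map;

public class LocalRecoverySketch {
    /** Per-slot cache of state handles, scoped by AllocationID. */
    static final Map<String, String> localStateCache = new HashMap<>();

    /** The JobMaster's memory of where each Execution was last deployed. */
    static final Map<String, String> lastAllocationOf = new HashMap<>();

    /** On every checkpoint, the TaskExecutor copies state into the local cache. */
    static void onCheckpoint(String allocationId, String stateHandle) {
        localStateCache.put(allocationId, stateHandle);
    }

    /** On failover: reuse local state iff the Execution lands on its old allocation. */
    static String recover(String executionId, String newAllocationId) {
        String previous = lastAllocationOf.get(executionId);
        if (newAllocationId.equals(previous) && localStateCache.containsKey(newAllocationId)) {
            return "local:" + localStateCache.get(newAllocationId);
        }
        return "remote"; // cache miss: restore from the remote checkpoint store
    }

    public static void main(String[] args) {
        lastAllocationOf.put("exec-1", "alloc-A");
        onCheckpoint("alloc-A", "chk-42");
        System.out.println(recover("exec-1", "alloc-A")); // local recovery
        System.out.println(recover("exec-1", "alloc-B")); // remote restore
    }
}
```

Persisting the slot-allocation map (as suggested above) would let a restarted TaskExecutor repopulate its AllocationIDs and keep serving the cache across process restarts.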
>
> An alternative solution proposal which is probably more powerful albeit
> also more complex would be to make the cache information explicit when
> registering the 

[jira] [Created] (FLINK-22617) Add log when create bulk format

2021-05-10 Thread hehuiyuan (Jira)
hehuiyuan created FLINK-22617:
-

 Summary: Add log when create bulk format 
 Key: FLINK-22617
 URL: https://issues.apache.org/jira/browse/FLINK-22617
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Hive
Reporter: hehuiyuan


 The Hive table sink has logs that tell us whether the native writer or the MapReduce writer is used.

 

 
{code:java}
LOG.info("Hive streaming sink: Use MapReduce RecordWriter writer.");
LOG.info("Hive streaming sink: Use native parquet writer.");
LOG.info(
 "Hive streaming sink: Use MapReduce RecordWriter writer because BulkWriter 
Factory not available.");
{code}
 

I think we can add a similar log to make it more obvious which path is taken 
when reading Hive in `createBulkFormatForSplit`.

 

!image-2021-05-10-17-04-15-571.png|width=490,height=198!  
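A minimal sketch of the kind of log line being proposed (the method name `createBulkFormatForSplit` comes from this ticket; everything else here is illustrative and not the actual Hive connector code):

```java
import java.util.logging.Logger;

public class BulkFormatLogSketch {
    private static final Logger LOG = Logger.getLogger(BulkFormatLogSketch.class.getName());

    /** Pick the reader path for a split and log the choice, mirroring the sink-side logs. */
    static String createBulkFormatForSplit(boolean nativeReaderAvailable) {
        String choice = nativeReaderAvailable ? "native vectorized reader" : "MapReduce RecordReader";
        LOG.info("Hive source: use " + choice + " for this split.");
        return choice;
    }

    public static void main(String[] args) {
        System.out.println(createBulkFormatForSplit(true));
    }
}
```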





[jira] [Created] (FLINK-22616) After submitting a job to a Flink 1.13 cluster, the following problem occurs

2021-05-10 Thread Leo (Jira)
Leo created FLINK-22616:
---

 Summary: After submitting a job to a Flink 1.13 cluster, the following problem occurs
 Key: FLINK-22616
 URL: https://issues.apache.org/jira/browse/FLINK-22616
 Project: Flink
  Issue Type: Bug
 Environment: CloseableIterator<Row> closeableIterator = 
env.addSource(new MyRichSource("table1"), types).executeAndCollect();

closeableIterator.forEachRemaining(new Consumer<Row>() {
    @Override
    public void accept(Row row) {
        System.out.println(row);
    }
});
After submitting the job to the Flink cluster, the following occurs:

2021-05-10 16:25:46,757 WARN  
org.apache.flink.client.deployment.application.DetachedApplicationRunner [] - 
Could not execute application: 
org.apache.flink.client.program.ProgramInvocationException: The main method 
caused an error: Job client must be a CoordinationRequestGateway. This is a bug.
at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:372)
 ~[flink-dist_2.11-1.13.0.jar:1.13.0]
at 
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222)
 ~[flink-dist_2.11-1.13.0.jar:1.13.0]
at 
org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:114) 
~[flink-dist_2.11-1.13.0.jar:1.13.0]
at 
org.apache.flink.client.deployment.application.DetachedApplicationRunner.tryExecuteJobs(DetachedApplicationRunner.java:84)
 ~[flink-dist_2.11-1.13.0.jar:1.13.0]
at 
org.apache.flink.client.deployment.application.DetachedApplicationRunner.run(DetachedApplicationRunner.java:70)
 ~[flink-dist_2.11-1.13.0.jar:1.13.0]
at 
org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$0(JarRunHandler.java:102)
 ~[flink-dist_2.11-1.13.0.jar:1.13.0]
at 
java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
 [?:1.8.0_261]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[?:1.8.0_261]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
[?:1.8.0_261]
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
 [?:1.8.0_261]
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
 [?:1.8.0_261]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[?:1.8.0_261]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[?:1.8.0_261]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_261]
Caused by: java.lang.IllegalArgumentException: Job client must be a 
CoordinationRequestGateway. This is a bug.
at 
org.apache.flink.util.Preconditions.checkArgument(Preconditions.java:138) 
~[flink-dist_2.11-1.13.0.jar:1.13.0]
at 
org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.setJobClient(CollectResultFetcher.java:89)
 ~[flink-dist_2.11-1.13.0.jar:1.13.0]
at 
org.apache.flink.streaming.api.operators.collect.CollectResultIterator.setJobClient(CollectResultIterator.java:101)
 ~[flink-dist_2.11-1.13.0.jar:1.13.0]
at 
org.apache.flink.streaming.api.datastream.DataStream.executeAndCollectWithClient(DataStream.java:1349)
 ~[flink-dist_2.11-1.13.0.jar:1.13.0]
at 
org.apache.flink.streaming.api.datastream.DataStream.executeAndCollect(DataStream.java:1291)
 ~[flink-dist_2.11-1.13.0.jar:1.13.0]
at com.love.leo.engine.EngineDemo1.main(EngineDemo1.java:46) ~[?:?]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
~[?:1.8.0_261]
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
~[?:1.8.0_261]
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[?:1.8.0_261]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_261]
at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:355)
 ~[flink-dist_2.11-1.13.0.jar:1.13.0]
... 13 more
2021-05-10 16:25:46,776 ERROR 
org.apache.flink.runtime.webmonitor.handlers.JarRunHandler   [] - Exception 
occurred in REST handler: Could not execute application.
2021-05-10 16:25:48,238 INFO  
org.apache.flink.runtime.executiongraph.ExecutionGraph   [] - Source: 
Custom Source -> Sink: Data stream collect sink (1/1) 
(730df20731ef5520a40a017032f74c46) switched from DEPLOYING to INITIALIZING.
Reporter: Leo


CloseableIterator<Row> closeableIterator = env.addSource(new 
MyRichSource("table1"), types).executeAndCollect();

closeableIterator.forEachRemaining(new Consumer<Row>() {
    @Override
    public void accept(Row row) {
        System.out.println(row);
    }
});
After submitting the job to the Flink cluster, the following occurs:

2021-05-10 16:25:46,757 WARN  

[jira] [Created] (FLINK-22615) can't use test harness to test Scala UDF because of two versions of ProcessWindowFunction

2021-05-10 Thread Zhenhao Li (Jira)
Zhenhao Li created FLINK-22615:
--

 Summary: can't use test harness to test Scala UDF because of two 
versions of ProcessWindowFunction
 Key: FLINK-22615
 URL: https://issues.apache.org/jira/browse/FLINK-22615
 Project: Flink
  Issue Type: Improvement
Reporter: Zhenhao Li


I'm trying to use `InternalSingleValueProcessWindowFunction` to test a UDF of 
class
```
org.apache.flink.streaming.api.scala.function.ProcessWindowFunction
```

But the constructor of `InternalSingleValueProcessWindowFunction` expects
```
org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction
```

I'm wondering if it is possible to use a common abstraction for Java and Scala 
in those test utilities. 






[jira] [Created] (FLINK-22614) Provide build artifacts for YARN e2e tests

2021-05-10 Thread Matthias (Jira)
Matthias created FLINK-22614:


 Summary: Provide build artifacts for YARN e2e tests
 Key: FLINK-22614
 URL: https://issues.apache.org/jira/browse/FLINK-22614
 Project: Flink
  Issue Type: Improvement
Reporter: Matthias


The YARN e2e tests don't publish their artifacts if the build failed. Instead, we 
have to rely on the printout of the watchdog process, which prints all the 
artifacts' content to stdout. This is tedious when analyzing specific artifacts, 
as we have to jump to specific lines in a single log file instead of selecting 
the relevant artifact from the build artifact archive.





[jira] [Created] (FLINK-22613) FlinkKinesisITCase.testStopWithSavepoint fails

2021-05-10 Thread Guowei Ma (Jira)
Guowei Ma created FLINK-22613:
-

 Summary: FlinkKinesisITCase.testStopWithSavepoint fails
 Key: FLINK-22613
 URL: https://issues.apache.org/jira/browse/FLINK-22613
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kinesis
Affects Versions: 1.13.0
Reporter: Guowei Ma



{code:java}
2021-05-10T03:09:18.4601182Z May 10 03:09:18 [ERROR] 
testStopWithSavepoint(org.apache.flink.streaming.connectors.kinesis.FlinkKinesisITCase)
  Time elapsed: 3.526 s  <<< FAILURE!
2021-05-10T03:09:18.4601884Z May 10 03:09:18 java.lang.AssertionError: 
2021-05-10T03:09:18.4605902Z May 10 03:09:18 
2021-05-10T03:09:18.4616154Z May 10 03:09:18 Expected: a collection with size a 
value less than <10>
2021-05-10T03:09:18.4616818Z May 10 03:09:18  but: collection size <10> was 
equal to <10>
2021-05-10T03:09:18.4618087Z May 10 03:09:18at 
org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
2021-05-10T03:09:18.4618702Z May 10 03:09:18at 
org.junit.Assert.assertThat(Assert.java:956)
2021-05-10T03:09:18.4619467Z May 10 03:09:18at 
org.junit.Assert.assertThat(Assert.java:923)
2021-05-10T03:09:18.4620391Z May 10 03:09:18at 
org.apache.flink.streaming.connectors.kinesis.FlinkKinesisITCase.testStopWithSavepoint(FlinkKinesisITCase.java:126)
2021-05-10T03:09:18.4621115Z May 10 03:09:18at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2021-05-10T03:09:18.4621751Z May 10 03:09:18at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2021-05-10T03:09:18.4622475Z May 10 03:09:18at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2021-05-10T03:09:18.4623142Z May 10 03:09:18at 
java.lang.reflect.Method.invoke(Method.java:498)
2021-05-10T03:09:18.4623783Z May 10 03:09:18at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
2021-05-10T03:09:18.4624514Z May 10 03:09:18at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
2021-05-10T03:09:18.4625246Z May 10 03:09:18at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
2021-05-10T03:09:18.4625967Z May 10 03:09:18at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
2021-05-10T03:09:18.4626671Z May 10 03:09:18at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
2021-05-10T03:09:18.4627349Z May 10 03:09:18at 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
2021-05-10T03:09:18.4627979Z May 10 03:09:18at 
org.junit.rules.RunRules.evaluate(RunRules.java:20)
2021-05-10T03:09:18.4628582Z May 10 03:09:18at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
2021-05-10T03:09:18.4629251Z May 10 03:09:18at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
2021-05-10T03:09:18.4629950Z May 10 03:09:18at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
2021-05-10T03:09:18.4630616Z May 10 03:09:18at 
org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
2021-05-10T03:09:18.4631339Z May 10 03:09:18at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
2021-05-10T03:09:18.4631986Z May 10 03:09:18at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
2021-05-10T03:09:18.4632630Z May 10 03:09:18at 
org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
2021-05-10T03:09:18.4633269Z May 10 03:09:18at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
2021-05-10T03:09:18.4634016Z May 10 03:09:18at 
org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:30)
2021-05-10T03:09:18.4634786Z May 10 03:09:18at 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
2021-05-10T03:09:18.4635412Z May 10 03:09:18at 
org.junit.rules.RunRules.evaluate(RunRules.java:20)
2021-05-10T03:09:18.4635995Z May 10 03:09:18at 
org.junit.runners.ParentRunner.run(ParentRunner.java:363)
2021-05-10T03:09:18.4636656Z May 10 03:09:18at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
2021-05-10T03:09:18.4637398Z May 10 03:09:18at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
2021-05-10T03:09:18.4638141Z May 10 03:09:18at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
2021-05-10T03:09:18.4638869Z May 10 03:09:18at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
2021-05-10T03:09:18.4639619Z May 10 03:09:18at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
2021-05-10T03:09:18.4640392Z May 10 03:09:18at