Assuming that not many tests deadlock, I think it should be fine to simply
let the build process deadlock. Even if multiple tests fail consistently,
one would see them one after another. That way we wouldn't have to
build extra tooling. Moreover, the behaviour would be consistent on
the
David Anderson created FLINK-22489:
Summary: subtask backpressure indicator shows value for entire job
Key: FLINK-22489
URL: https://issues.apache.org/jira/browse/FLINK-22489
Project: Flink
ZRCoder created FLINK-22490:
Summary: YarnApplicationClusterEntryPoint does not pass
configuration parameters
Key: FLINK-22490
URL: https://issues.apache.org/jira/browse/FLINK-22490
Project: Flink
Leon Hao created FLINK-22491:
Summary: JdbcBatchingOutputFormat checks size of buffer in
TableBufferReducedStatementExecutor
Key: FLINK-22491
URL: https://issues.apache.org/jira/browse/FLINK-22491
Dawid Wysakowicz created FLINK-22492:
Summary: KinesisTableApiITCase with wrong results
Key: FLINK-22492
URL: https://issues.apache.org/jira/browse/FLINK-22492
Project: Flink
Issue Type:
+1 (binding)
I'm not aware of any release blockers. My colleagues and I have checked
this release quite extensively for the correctness of the unaligned
checkpoints, finally nailing down the two remaining known bugs.
I have manually checked the WebUI with its new back-pressure
monitoring tool. One
+1 (binding)
- Verified checksums and signatures
- Reviewed the website PR
- Built from sources
- Verified dependency version upgrades and updates in the NOTICE files
compared to 1.12.2
- Started a cluster and ran the WordCount example in BATCH mode; everything
looked good
On 23/04/2021 23:52, Arvid
Till Rohrmann created FLINK-22495:
Summary: Document how to use the reactive mode on K8s
Key: FLINK-22495
URL: https://issues.apache.org/jira/browse/FLINK-22495
Project: Flink
Issue Type:
Just to add to Dong Lin's list of cons of allowing timeouts:
- Any timeout value that you manually set is arbitrary. If it's set too
low, you get test instabilities. What too low means depends on numerous
factors, such as hardware and current utilization (especially I/O). If you
run in VMs and the
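The point about arbitrary timeout values can be illustrated with a minimal, self-contained Java sketch (a hypothetical example, not Flink code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class ArbitraryTimeoutExample {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> result = pool.submit(() -> {
            Thread.sleep(100); // stands in for work whose duration varies with load
            return "done";
        });
        try {
            // The 5-second bound is arbitrary: generous on an idle laptop,
            // potentially too tight on an overloaded CI agent doing heavy I/O.
            System.out.println(result.get(5, TimeUnit.SECONDS));
        } catch (TimeoutException e) {
            // On a slow machine the same healthy code path lands here instead.
            System.out.println("timed out");
        } finally {
            pool.shutdown();
        }
    }
}
```

Whatever value replaces the `5`, there is some hardware/utilization combination that makes it wrong in one direction or the other.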
Matthias created FLINK-22494:
Summary: Avoid discarding checkpoints in case of failure
Key: FLINK-22494
URL: https://issues.apache.org/jira/browse/FLINK-22494
Project: Flink
Issue Type:
flink lib jar:
flink taskmanager log:
hive orc table create ddl:
create table xxx ... stored as orc
flink java class, joining the Hive ORC table with the Kafka stream data:
bsTableEnv.executeSql("my sql is join");
The flink pom.xml is in the attachment.
When I convert the table to the textfile format, it runs. But
+1 (binding)
- Started a cluster, ran an example job on macOS
- Sources look fine
- Eyeballed the diff:
https://github.com/apache/flink/compare/release-1.12.2...release-1.12.3-rc1.
According to "git diff release-1.12.2...release-1.12.3-rc1 '*.xml'", there
was only one external dependency change
ranqiqiang created FLINK-22499:
Summary: JDBC sink table-api support "sink.parallelism" ?
Key: FLINK-22499
URL: https://issues.apache.org/jira/browse/FLINK-22499
Project: Flink
Issue Type:
Carl created FLINK-22498:
Summary: cast the primary key for source table that has a decimal
primary key as string, and then insert into a kudu table that has a string
primary key throw the exception : UpsertStreamTableSink requires that Table has
a
Guowei Ma created FLINK-22496:
Summary:
ClusterEntrypointTest.testCloseAsyncShouldBeExecutedInShutdownHook failed
Key: FLINK-22496
URL: https://issues.apache.org/jira/browse/FLINK-22496
Project: Flink
There is one more point that may be useful to consider here.
In order to debug a deadlock that is not easily reproducible, it is likely
not sufficient to see only the thread dump to figure out the root cause. We
likely need to enable INFO-level logging. Since AZP does not provide
INFO level
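For reference, raising the log level in a Flink distribution is a matter of adjusting the root logger in conf/log4j.properties (a sketch against the Log4j 2 properties format used by recent Flink versions; the appender name may differ per setup):

```properties
# conf/log4j.properties -- keep INFO messages so deadlock investigations
# have more context than a bare thread dump
rootLogger.level = INFO
rootLogger.appenderRef.file.ref = MainAppender
```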
Dawid Wysakowicz created FLINK-22493:
Summary: AdaptiveSchedulerITCase found unexpected files
Key: FLINK-22493
URL: https://issues.apache.org/jira/browse/FLINK-22493
Project: Flink
Issue