Re: [Proposal] Beam MultimapState API

2022-10-31 Thread Luke Cwik via dev
Thanks, I took a look and left some comments.

On Mon, Oct 31, 2022 at 12:47 PM Ahmet Altay  wrote:

> Thank you for the message, Buqian. Adding @Reuven Lax @Lukasz Cwik
> explicitly (who are mentioned in the doc).
>
> On Mon, Oct 31, 2022 at 12:17 PM 郑卜千  wrote:
>
>> Gentle ping. Thanks!
>>
>> On Thu, Oct 27, 2022 at 2:55 PM 郑卜千  wrote:
>>
>>> Hi all,
>>>
>>> I've been working on adding MultimapState support to the Dataflow
>>> Runner, but this state interface is currently missing from the Beam State
>>> API.
>>>
>>> I have a one-pager proposing its API in
>>> https://docs.google.com/document/d/1zm16QCxWEWNy4qW1KKoA3DmQBOLrTSPimXiqQkTxokc/edit#.
>>> Please share suggestions/comments!
>>>
>>> Thanks!
>>> Buqian Zheng
>>>
>>>
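
For context, a minimal sketch of what user code against such a state could look
like, assuming the proposed interface roughly mirrors the existing
MapState/BagState pattern; StateSpecs.multimap(...) and the put/get/read calls
below are illustrative guesses, not the API defined in the doc:

    import org.apache.beam.sdk.coders.StringUtf8Coder;
    import org.apache.beam.sdk.coders.VarIntCoder;
    import org.apache.beam.sdk.state.MultimapState;
    import org.apache.beam.sdk.state.StateSpec;
    import org.apache.beam.sdk.state.StateSpecs;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.values.KV;

    // Collects every value seen per key; a multimap keeps all of them, unlike MapState.
    class CollectPerKeyFn extends DoFn<KV<String, Integer>, Integer> {
      @StateId("values")
      private final StateSpec<MultimapState<String, Integer>> valuesSpec =
          StateSpecs.multimap(StringUtf8Coder.of(), VarIntCoder.of());

      @ProcessElement
      public void process(
          @Element KV<String, Integer> element,
          @StateId("values") MultimapState<String, Integer> values,
          OutputReceiver<Integer> out) {
        values.put(element.getKey(), element.getValue());
        // Re-emit everything stored so far for this key.
        for (Integer v : values.get(element.getKey()).read()) {
          out.output(v);
        }
      }
    }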


Re: [ACTION REQUESTED] Triage 3-10 issues (get the labels right, basically)

2022-10-31 Thread Kenneth Knowles
Thanks for the help! I should have linked to it to tempt everyone to click:
https://github.com/apache/beam/issues?q=is%3Aissue+is%3Aopen+label%3A%22awaiting+triage%22

Kenn

On Mon, Oct 31, 2022 at 1:30 PM Yi Hu via dev  wrote:

> up to 120, my bad
>
> On Mon, Oct 31, 2022 at 4:29 PM Yi Hu  wrote:
>
>> Thanks for raising this. Also closed a couple of issues. Now the number
>> is up to 12.
>>
>> On Mon, Oct 31, 2022 at 9:24 AM Manu Zhang 
>> wrote:
>>
>>> I closed several issues that have already been fixed. It looks like we didn't
>>> reference the issue numbers in the commit messages, which would have auto-closed them.
>>>
 Kenneth Knowles wrote on Monday, October 31, 2022 at 20:01:
>>>
 Hi all

 We have built up to 138 GitHub Issues with the "awaiting triage" label.
 If you can, take just a couple minutes today to grab a couple and triage
 them:

- get the right priority label (
https://beam.apache.org/contribute/issue-priorities/)
- get the right component label (runner, IO, SDK, etc)
- ping someone who may be interested
- remember to remove the "awaiting triage" label

 It really is pretty quick!

 Kenn

>>>


Re: SSL issue: Kafka Avro write with Schema Registry (GCP)

2022-10-31 Thread Ahmet Altay via dev
(moving this to the user list, dev list to bcc.)

Adding relevant people: @John Casey .

(Keshav, for Dataflow issues you could also reach out to Dataflow support:
https://cloud.google.com/dataflow/docs/support)

On Mon, Oct 31, 2022 at 1:23 PM Chennakeshavlu Maddela <
chennakeshavlu.madd...@davita.com> wrote:

> Hi Team,
>
>
>
> We are setting up an Avro write to a Kafka topic with Confluent Schema
> Registry over SSL, and it's throwing the error below.
>
>
>
> We are using SASL_SSL with a PEM certificate for connecting to the Kafka
> broker, which works fine with non-Avro Kafka topics. Can you please help us
> with configuring SSL for the schema registry? (We are using the Dataflow runner.)
>
>
>
> Please let me know if you need more details.
>
>
>
> Thank you,
>
> Keshav
>
>
>
> *Exception:*
>
> Failed to send HTTP request to endpoint:
> https://confluent-schemaregistry-.com/subjects/topic-value?deleted=false
>
>
>
> javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
>     at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:131)
>     at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:350)
>     at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:293)
>     at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:288)
>     at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.checkServerCerts(CertificateMessage.java:1356)
>     at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.onConsumeCertificate(CertificateMessage.java:1231)
>     at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.consume(CertificateMessage.java:1174)
>     at java.base/sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:392)
>     at java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:444)
>     at java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:422)
>     at java.base/sun.security.ssl.TransportContext.dispatch(TransportContext.java:183)
>     at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:171)
>     at java.base/sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1408)
>     at java.base/sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1314)
>     at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:440)
>     at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:411)
>     at java.base/sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:567)
>     at java.base/sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
>     at java.base/sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1367)
>     at java.base/sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1342)
>     at java.base/sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:246)
>     at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:199)
>     at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:256)
>     at io.confluent.kafka.schemaregistry.client.rest.RestService.lookUpSubjectVersion(RestService.java:323)
>     at io.confluent.kafka.schemaregistry.client.rest.RestService.lookUpSubjectVersion(RestService.java:311)
>     at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getIdFromRegistry(CachedSchemaRegistryClient.java:191)
>     at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getId(CachedSchemaRegistryClient.java:323)
>     at io.confluent.kafka.serializers.AbstractKafkaAvroSerializer.serializeImpl(AbstractKafkaAvroSerializer.java:73)
>     at io.confluent.kafka.serializers.KafkaAvroSerializer.serialize(KafkaAvroSerializer.java:53)
>     at org.apache.kafka.common.serialization.Serializer.serialize(Serializer.java:62)
>     at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:952)
>     at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:912)
>     at com.davita.cwow.pmt.transformations.PmtAvroKafkaWriter$KafkaWriteEvaluationFn.processElement(PmtAvroKafkaWriter.java:86)
>     at
> 

Re: [PROPOSAL] Preparing for Apache Beam 2.43.0 Release

2022-10-31 Thread Chamikara Jayalath via dev
Update:

Hi All,

I've been validating the release branch by running all Jenkins test suites
on it (as required by the release guide). This revealed two new
potential issues. I added these to the release milestone [1]. Please
comment on these issues if you are familiar with the errors (for example,
if they are known issues from a previous release). We can continue the
release once these are resolved or moved out of the 2.43.0 release
milestone.

Thanks,
Cham

[1] https://github.com/apache/beam/milestone/5

On Wed, Oct 26, 2022 at 12:42 PM Chamikara Jayalath 
wrote:

> Update:
>
> All blocking issues have either been addressed or pushed to the next
> release. I'll go ahead and create the first RC.
>
> Thanks,
> Cham
>
> On Thu, Oct 20, 2022 at 9:41 AM Chamikara Jayalath 
> wrote:
>
>> Hi All,
>>
>> The release branch was cut:
>> https://github.com/apache/beam/tree/release-2.43.0
>>
>> We currently have three open blockers in the release milestone:
>> https://github.com/apache/beam/milestone/5
>>
>> I'll look into cherry-picking fixes for these and hopefully creating a RC
>> early next week.
>>
>> Thanks,
>> Cham
>>
>> On Wed, Oct 5, 2022 at 3:25 PM Ahmet Altay  wrote:
>>
>>> +1 - Thank you Cham!
>>>
>>> On Wed, Oct 5, 2022 at 1:38 PM Chamikara Jayalath via dev <
>>> dev@beam.apache.org> wrote:
>>>
 Hi all,

 The next (2.43.0) release branch cut is scheduled for October 19th,
 according to the release calendar [1].

 I would like to volunteer myself to do this release. My plan is to cut
 the branch on that date, and cherrypick release-blocking fixes afterwards,
 if any.

 Please help me make sure the release goes smoothly by:
 - Making sure that any unresolved release-blocking issues for 2.43.0 have
 their "Milestone" marked as "2.43.0 Release" [2] as soon as possible.
 - Reviewing the current release blockers [2] and removing the Milestone
 if they don't meet the criteria at [3].

 Let me know if you have any comments/objections/questions.

 Thanks,
 Cham

 [1]
 https://calendar.google.com/calendar/u/0/embed?src=0p73sl034k80oob7seouani...@group.calendar.google.com
 [2] https://github.com/apache/beam/milestone/5
 [3] https://beam.apache.org/contribute/release-blocking/

>>>


Re: [ACTION REQUESTED] Triage 3-10 issues (get the labels right, basically)

2022-10-31 Thread Yi Hu via dev
up to 120, my bad

On Mon, Oct 31, 2022 at 4:29 PM Yi Hu  wrote:

> Thanks for raising this. Also closed a couple of issues. Now the number is
> up to 12.
>
> On Mon, Oct 31, 2022 at 9:24 AM Manu Zhang 
> wrote:
>
>> I closed several issues that have already been fixed. It looks like we didn't
>> reference the issue numbers in the commit messages, which would have auto-closed them.
>>
>> Kenneth Knowles wrote on Monday, October 31, 2022 at 20:01:
>>
>>> Hi all
>>>
>>> We have built up to 138 GitHub Issues with the "awaiting triage" label.
>>> If you can, take just a couple minutes today to grab a couple and triage
>>> them:
>>>
>>>- get the right priority label (
>>>https://beam.apache.org/contribute/issue-priorities/)
>>>- get the right component label (runner, IO, SDK, etc)
>>>- ping someone who may be interested
>>>- remember to remove the "awaiting triage" label
>>>
>>> It really is pretty quick!
>>>
>>> Kenn
>>>
>>


Re: [ACTION REQUESTED] Triage 3-10 issues (get the labels right, basically)

2022-10-31 Thread Yi Hu via dev
Thanks for raising this. Also closed a couple of issues. Now the number is
up to 12.

On Mon, Oct 31, 2022 at 9:24 AM Manu Zhang  wrote:

> I closed several issues that have already been fixed. It looks like we didn't
> reference the issue numbers in the commit messages, which would have auto-closed them.
>
> Kenneth Knowles wrote on Monday, October 31, 2022 at 20:01:
>
>> Hi all
>>
>> We have built up to 138 GitHub Issues with the "awaiting triage" label.
>> If you can, take just a couple minutes today to grab a couple and triage
>> them:
>>
>>- get the right priority label (
>>https://beam.apache.org/contribute/issue-priorities/)
>>- get the right component label (runner, IO, SDK, etc)
>>- ping someone who may be interested
>>- remember to remove the "awaiting triage" label
>>
>> It really is pretty quick!
>>
>> Kenn
>>
>


SSL issue: Kafka Avro write with Schema Registry (GCP)

2022-10-31 Thread Chennakeshavlu Maddela
Hi Team,

We are setting up an Avro write to a Kafka topic with Confluent Schema Registry
over SSL, and it's throwing the error below.

We are using SASL_SSL with a PEM certificate for connecting to the Kafka broker,
which works fine with non-Avro Kafka topics. Can you please help us with
configuring SSL for the schema registry? (We are using the Dataflow runner.)

Please let me know if you need more details.

Thank you,
Keshav

Exception:
Failed to send HTTP request to endpoint: 
https://confluent-schemaregistry-.com/subjects/topic-value?deleted=false

javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
    at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:131)
    at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:350)
    at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:293)
    at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:288)
    at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.checkServerCerts(CertificateMessage.java:1356)
    at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.onConsumeCertificate(CertificateMessage.java:1231)
    at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.consume(CertificateMessage.java:1174)
    at java.base/sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:392)
    at java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:444)
    at java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:422)
    at java.base/sun.security.ssl.TransportContext.dispatch(TransportContext.java:183)
    at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:171)
    at java.base/sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1408)
    at java.base/sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1314)
    at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:440)
    at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:411)
    at java.base/sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:567)
    at java.base/sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
    at java.base/sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1367)
    at java.base/sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1342)
    at java.base/sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:246)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:199)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:256)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.lookUpSubjectVersion(RestService.java:323)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.lookUpSubjectVersion(RestService.java:311)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getIdFromRegistry(CachedSchemaRegistryClient.java:191)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getId(CachedSchemaRegistryClient.java:323)
    at io.confluent.kafka.serializers.AbstractKafkaAvroSerializer.serializeImpl(AbstractKafkaAvroSerializer.java:73)
    at io.confluent.kafka.serializers.KafkaAvroSerializer.serialize(KafkaAvroSerializer.java:53)
    at org.apache.kafka.common.serialization.Serializer.serialize(Serializer.java:62)
    at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:952)
    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:912)
    at com.davita.cwow.pmt.transformations.PmtAvroKafkaWriter$KafkaWriteEvaluationFn.processElement(PmtAvroKafkaWriter.java:86)
    at com.davita.cwow.pmt.transformations.PmtAvroKafkaWriter$KafkaWriteEvaluationFn$DoFnInvoker.invokeProcessElement(Unknown Source)
    at org.apache.beam.runners.dataflow.worker.repackaged.org.apache.beam.runners.core.SimpleDoFnRunner.invokeProcessElement(SimpleDoFnRunner.java:211)
    at org.apache.beam.runners.dataflow.worker.repackaged.org.apache.beam.runners.core.SimpleDoFnRunner.processElement(SimpleDoFnRunner.java:188)
    at
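
One avenue worth checking (a sketch under assumptions, not a confirmed fix): in
recent Confluent client versions the serializer reads the schema registry's own
TLS settings from configs prefixed with "schema.registry.", separately from the
broker's ssl./sasl. settings, so the registry's CA generally has to be supplied
in a truststore referenced by those keys. The paths, password, and helper class
below are placeholders:

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical helper: extra producer/serializer configs pointing the schema
    // registry client at a truststore that contains the registry's CA certificate.
    class RegistrySslConfigs {
      static Map<String, Object> forAvroProducer() {
        Map<String, Object> config = new HashMap<>();
        config.put("security.protocol", "SASL_SSL"); // broker side, already working
        config.put("schema.registry.url", "https://confluent-schemaregistry-.com");
        // TLS settings for the registry client itself; the broker truststore is not reused.
        config.put("schema.registry.ssl.truststore.location", "/tmp/registry-truststore.jks");
        config.put("schema.registry.ssl.truststore.password", "changeit");
        return config;
      }
    }

On Dataflow, the truststore file would also need to be present on each worker
(for example, copied from GCS in the DoFn's @Setup) before the producer is created.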

Re: [Proposal] Beam MultimapState API

2022-10-31 Thread Ahmet Altay via dev
Thank you for the message, Buqian. Adding @Reuven Lax @Lukasz Cwik
explicitly (who are mentioned in the doc).

On Mon, Oct 31, 2022 at 12:17 PM 郑卜千  wrote:

> Gentle ping. Thanks!
>
> On Thu, Oct 27, 2022 at 2:55 PM 郑卜千  wrote:
>
>> Hi all,
>>
>> I've been working on adding MultimapState support to the Dataflow Runner,
>> but this state interface is currently missing from the Beam State API.
>>
>> I have a one-pager proposing its API in
>> https://docs.google.com/document/d/1zm16QCxWEWNy4qW1KKoA3DmQBOLrTSPimXiqQkTxokc/edit#.
>> Please share suggestions/comments!
>>
>> Thanks!
>> Buqian Zheng
>>
>>


Re: [Proposal] Beam MultimapState API

2022-10-31 Thread 郑卜千
Gentle ping. Thanks!

On Thu, Oct 27, 2022 at 2:55 PM 郑卜千  wrote:

> Hi all,
>
> I've been working on adding MultimapState support to the Dataflow Runner,
> but this state interface is currently missing from the Beam State API.
>
> I have a one-pager proposing its API in
> https://docs.google.com/document/d/1zm16QCxWEWNy4qW1KKoA3DmQBOLrTSPimXiqQkTxokc/edit#.
> Please share suggestions/comments!
>
> Thanks!
> Buqian Zheng
>
>


Issues with passing complex JVM opts into process_variables in PortablePipelineOptions

2022-10-31 Thread Bharath Kumara Subramanian
Hi,

We are leveraging Beam portability in Java to achieve isolation between
user code and framework (runner) code.

Here is how we spin up the user Java code as a separate process:

    portableOptions.setJobServerConfig(
        PortableUtils.getServerDriverArgs().toArray(new String[0]));
    portableOptions.setRunner(TestPortableRunner.class);
    portableOptions.setDefaultEnvironmentType(Environments.ENVIRONMENT_PROCESS);
    portableOptions.setEnvironmentOptions(
        ImmutableList.of(
            "process_command=" + getWorkerProcessCommand(),
            "process_variables=LOG4J2_FILE_NAME=log4j2.xml,JAVA_OPTS="));

JAVA_OPTS is currently empty, and we run into issues when we populate it
with remote debugging flags such as -agentlib:jdwp.
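
For illustration, the kind of value that triggers the problem (the jdwp options
shown here are typical debug flags, not taken from our actual setup):

    String environmentOption =
        "process_variables=LOG4J2_FILE_NAME=log4j2.xml,"
            + "JAVA_OPTS=-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005";
    // The parsing logic shown below splits on every ',' and '=', so the JAVA_OPTS value
    // is torn into separate "server=y", "suspend=n", ... assignments instead of staying intact.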

I investigated why it doesn't work, and it seems the parsing logic for the
process environment doesn't handle process variable values that contain ','.
Here is the snippet of the parsing logic:

    private static Map<String, String> getProcessVariables(PortablePipelineOptions options) {
      ImmutableMap.Builder<String, String> variables = ImmutableMap.builder();
      String assignments =
          PortablePipelineOptions.getEnvironmentOption(options, processVariablesOption);
      for (String assignment : assignments.split(",", -1)) {
        String[] tokens = assignment.split("=", -1);
        if (tokens.length == 1) {
          throw new IllegalArgumentException(
              String.format(
                  "Process environment variable '%s' is not assigned a value.", tokens[0]));
        }
        variables.put(tokens[0], tokens[1]);
      }
      return variables.build();
    }

Questions for the community:

   1. Is there any other way to pass configurations to the process being
   started? I tried DefaultEnvironmentConfiguration, although it can't be used
   in conjunction with EnvironmentOptions, and I ran into different issues
   around the external worker URL with DefaultEnvironmentConfiguration.
   2. I noticed similar issues in Google Container Tools (
   https://github.com/GoogleContainerTools/jib/pull/3522/files), where it was
   fixed by enhancing the parsing logic. Is the path forward to fix the above
   parsing logic in OSS? (A sketch of one possible approach is shown below.)
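
A minimal sketch of one possible, more tolerant parser, assuming the convention
that a new assignment starts only at a comma followed by an UPPER_CASE variable
name and '=' (an illustration of the kind of fix only, not an agreed design;
lower-case variable names would still need escaping or a different delimiter):

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.regex.Pattern;

    class ProcessVariablesParser {
      // Split only on commas that start a new NAME= assignment, so a value such as
      // JAVA_OPTS=-agentlib:jdwp=transport=dt_socket,server=y,address=5005 stays intact.
      private static final Pattern NEXT_ASSIGNMENT =
          Pattern.compile(",(?=[A-Z_][A-Z0-9_]*=)");

      static Map<String, String> parse(String assignments) {
        Map<String, String> variables = new LinkedHashMap<>();
        for (String assignment : NEXT_ASSIGNMENT.split(assignments)) {
          String[] tokens = assignment.split("=", 2); // keep '=' inside the value
          if (tokens.length == 1) {
            throw new IllegalArgumentException(
                String.format(
                    "Process environment variable '%s' is not assigned a value.", tokens[0]));
          }
          variables.put(tokens[0], tokens[1]);
        }
        return variables;
      }
    }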


Thanks,
Bharath


Re: [ACTION REQUESTED] Triage 3-10 issues (get the labels right, basically)

2022-10-31 Thread Manu Zhang
I closed several issues that have already been fixed. It looks like we didn't
reference the issue numbers in the commit messages, which would have auto-closed them.
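
For reference, GitHub closes a referenced issue automatically when the merged
commit message or PR description contains a closing keyword plus the issue
number, e.g. a line such as (the number below is just a placeholder):

    Fixes #12345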

Kenneth Knowles wrote on Monday, October 31, 2022 at 20:01:

> Hi all
>
> We have built up to 138 GitHub Issues with the "awaiting triage" label. If
> you can, take just a couple minutes today to grab a couple and triage them:
>
>- get the right priority label (
>https://beam.apache.org/contribute/issue-priorities/)
>- get the right component label (runner, IO, SDK, etc)
>- ping someone who may be interested
>- remember to remove the "awaiting triage" label
>
> It really is pretty quick!
>
> Kenn
>


[ACTION REQUESTED] Triage 3-10 issues (get the labels right, basically)

2022-10-31 Thread Kenneth Knowles
Hi all

We have built up to 138 GitHub Issues with the "awaiting triage" label. If
you can, take just a couple minutes today to grab a couple and triage them:

   - get the right priority label (
   https://beam.apache.org/contribute/issue-priorities/)
   - get the right component label (runner, IO, SDK, etc)
   - ping someone who may be interested
   - remember to remove the "awaiting triage" label

It really is pretty quick!

Kenn


Beam High Priority Issue Report (51)

2022-10-31 Thread beamactions
This is your daily summary of Beam's current high priority issues that may need 
attention.

See https://beam.apache.org/contribute/issue-priorities for the meaning and 
expectations around issue priorities.

Unassigned P1 Issues:

https://github.com/apache/beam/issues/23815 [Bug]: Neo4j tests failing
https://github.com/apache/beam/issues/23745 [Bug]: Samza AsyncDoFnRunnerTest.testSimplePipeline is flaky
https://github.com/apache/beam/issues/23709 [Flake]: Spark batch flakes in ParDoLifecycleTest.testTeardownCalledAfterExceptionInProcessElement and ParDoLifecycleTest.testTeardownCalledAfterExceptionInStartBundle
https://github.com/apache/beam/issues/22969 Discrepancy in behavior of `DoFn.process()` when `yield` is combined with `return` statement, or vice versa
https://github.com/apache/beam/issues/22321 PortableRunnerTestWithExternalEnv.test_pardo_large_input is regularly failing on jenkins
https://github.com/apache/beam/issues/21713 404s in BigQueryIO don't get output to Failed Inserts PCollection
https://github.com/apache/beam/issues/21561 ExternalPythonTransformTest.trivialPythonTransform flaky
https://github.com/apache/beam/issues/21469 beam_PostCommit_XVR_Flink flaky: Connection refused
https://github.com/apache/beam/issues/21462 Flake in org.apache.beam.sdk.io.mqtt.MqttIOTest.testReadObject: Address already in use
https://github.com/apache/beam/issues/21261 org.apache.beam.runners.dataflow.worker.fn.logging.BeamFnLoggingServiceTest.testMultipleClientsFailingIsHandledGracefullyByServer is flaky
https://github.com/apache/beam/issues/21260 Python DirectRunner does not emit data at GC time
https://github.com/apache/beam/issues/21123 Multiple jobs running on Flink session cluster reuse the persistent Python environment.
https://github.com/apache/beam/issues/21113 testTwoTimersSettingEachOtherWithCreateAsInputBounded flaky
https://github.com/apache/beam/issues/20976 apache_beam.runners.portability.flink_runner_test.FlinkRunnerTestOptimized.test_flink_metrics is flaky
https://github.com/apache/beam/issues/20975 org.apache.beam.runners.flink.ReadSourcePortableTest.testExecution[streaming: false] is flaky
https://github.com/apache/beam/issues/20974 Python GHA PreCommits flake with grpc.FutureTimeoutError on SDK harness startup
https://github.com/apache/beam/issues/20689 Kafka commitOffsetsInFinalize OOM on Flink
https://github.com/apache/beam/issues/20108 Python direct runner doesn't emit empty pane when it should
https://github.com/apache/beam/issues/19814 Flink streaming flakes in ParDoLifecycleTest.testTeardownCalledAfterExceptionInStartBundleStateful and ParDoLifecycleTest.testTeardownCalledAfterExceptionInProcessElementStateful
https://github.com/apache/beam/issues/19734 WatchTest.testMultiplePollsWithManyResults flake: Outputs must be in timestamp order (sickbayed)
https://github.com/apache/beam/issues/19241 Python Dataflow integration tests should export the pipeline Job ID and console output to Jenkins Test Result section


P1 Issues with no update in the last week:

https://github.com/apache/beam/issues/23627 [Bug]: Website precommit flaky
https://github.com/apache/beam/issues/23525 [Bug]: Default PubsubMessage coder will drop message id and orderingKey
https://github.com/apache/beam/issues/23489 [Bug]: add DebeziumIO to the connectors page
https://github.com/apache/beam/issues/23306 [Bug]: BigQueryBatchFileLoads in python loses data when using WRITE_TRUNCATE
https://github.com/apache/beam/issues/23286 [Bug]: beam_PerformanceTests_InfluxDbIO_IT Flaky > 50 % Fail
https://github.com/apache/beam/issues/22913 [Bug]: beam_PostCommit_Java_ValidatesRunner_Flink is flakes in org.apache.beam.sdk.transforms.GroupByKeyTest$BasicTests.testAfterProcessingTimeContinuationTriggerUsingState
https://github.com/apache/beam/issues/22891 [Bug]: beam_PostCommit_XVR_PythonUsingJavaDataflow is flaky
https://github.com/apache/beam/issues/22605 [Bug]: Beam Python failure for dataflow_exercise_metrics_pipeline_test.ExerciseMetricsPipelineTest.test_metrics_it
https://github.com/apache/beam/issues/22299 [Bug]: JDBCIO Write freeze at getConnection() in WriteFn
https://github.com/apache/beam/issues/22115 [Bug]: apache_beam.runners.portability.portable_runner_test.PortableRunnerTestWithSubprocesses is flaky
https://github.com/apache/beam/issues/22011 [Bug]: org.apache.beam.sdk.io.aws2.kinesis.KinesisIOWriteTest.testWriteFailure flaky
https://github.com/apache/beam/issues/21893 [Bug]: BigQuery Storage Write API implementation does not support table partitioning
https://github.com/apache/beam/issues/21714 PulsarIOTest.testReadFromSimpleTopic is very flaky
https://github.com/apache/beam/issues/21711 Python Streaming job failing to drain with BigQueryIO write errors
https://github.com/apache/beam/issues/21709 beam_PostCommit_Java_ValidatesRunner_Samza Failing
https://github.com/apache/beam/issues/21708 beam_PostCommit_Java_DataflowV2, testBigQueryStorageWrite30MProto failing