mxm commented on a change in pull request #12499:
URL: https://github.com/apache/beam/pull/12499#discussion_r467943622
##########
File path: .test-infra/jenkins/job_LoadTests_ParDo_Flink_Python.groovy
##########
@@ -161,12 +164,13 @@ def streamingScenarios = { datasetName ->
test : 'apache_beam.testing.load_tests.pardo_test',
runner : CommonTestProperties.Runner.PORTABLE,
pipelineOptions: [
- job_name : 'load-tests-python-flink-streaming-pardo-5-' + now,
+ job_name : 'load-tests-python-flink-streaming-pardo-1-' + now,
Review comment:
Note, this is just the job name. More important is the table we are
writing to further down. Unfortunately, the Grafana setup forces me to make
that change. I would rather not change it, but the Grafana setup is very
inflexible and, in this regard, a regression from the old framework we used:
https://apache-beam-testing.appspot.com/explore?dashboard=5751884853805056
> Your streaming tests are a bit problematic, because they are not being run
> on Dataflow and batch.
To be honest, I don't fully understand your point. For the dropdown menus to
work properly, i.e. to choose the `SDK` and the `mode` (batch/streaming),
this change is required because the table name is composed of `$sdk_$mode_`.
The test parameters looked identical to me for Dataflow and Flink. If the
iterations don't match, we can adjust that. The input is already the same.
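To illustrate the constraint above, here is a minimal sketch of the
`$sdk_$mode_` naming convention the Grafana dropdowns key on. The helper name,
parameters, and everything after the prefix are assumptions for illustration,
not taken from the Beam codebase:

```python
def metrics_table_name(sdk: str, mode: str, test: str, suffix: int) -> str:
    """Compose a metrics table name starting with the `$sdk_$mode_` prefix.

    The SDK/mode dropdowns in the dashboard can only filter tables that
    share this prefix; the `test`/`suffix` tail here is hypothetical.
    """
    return f"{sdk}_{mode}_{test}_{suffix}"

# Every table must follow the same prefix convention for the
# dropdowns to work, e.g.:
print(metrics_table_name("python", "streaming", "pardo", 1))
# -> python_streaming_pardo_1
```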
Adding more charts would be another option: we would have to remove the
streaming dropdown and just add one chart per streaming and batch run. I think
that is the best option; it gives us a bit more flexibility.
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]