[
https://issues.apache.org/jira/browse/BEAM-7872?focusedWorklogId=288862&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-288862
]
ASF GitHub Bot logged work on BEAM-7872:
----------------------------------------
Author: ASF GitHub Bot
Created on: 05/Aug/19 11:10
Start Date: 05/Aug/19 11:10
Worklog Time Spent: 10m
Work Description: lgajowy commented on pull request #9213: [BEAM-7872]
Simpler Flink cluster set up in load tests
URL: https://github.com/apache/beam/pull/9213#discussion_r310546705
##########
File path: .test-infra/jenkins/Flink.groovy
##########
 @@ -19,10 +19,74 @@
 import CommonJobProperties as common
 import CommonTestProperties.SDK
-class Infrastructure {
+class Flink {
+  private static final String repositoryRoot = 'gcr.io/apache-beam-testing/beam_portability'
+  private static final String dockerTag = 'latest'
+  private static final String jobServerImageTag = "${repositoryRoot}/flink-job-server:${dockerTag}"
+  private static final String flinkVersion = '1.7'
+  private static final String flinkDownloadUrl = 'https://archive.apache.org/dist/flink/flink-1.7.0/flink-1.7.0-bin-hadoop28-scala_2.11.tgz'
+
+  private static def job
+  private static String jobName
+
+  /**
+   * Returns the SDK Harness image tag to be used as an environment_config in the job definition.
+   *
+   * @param sdk - SDK
+   */
+  static String getSDKHarnessImageTag(SDK sdk) {
+    switch (sdk) {
+      case CommonTestProperties.SDK.PYTHON:
+        return "${repositoryRoot}/python:${dockerTag}"
+      case CommonTestProperties.SDK.JAVA:
+        return "${repositoryRoot}/java:${dockerTag}"
+      default:
+        String sdkName = sdk.name().toLowerCase()
+        throw new IllegalArgumentException("${sdkName} SDK is not supported")
+    }
+  }
+
+  /**
+   * Creates a Flink cluster and specifies cleanup steps.
+   *
+   * @param job - Jenkins job
+   * @param jobName - string to be used as a base for the cluster name
+   * @param sdk - SDK
+   * @param workerCount - the initial number of worker nodes, excluding one extra node for Flink's Job Manager
+   * @param slotsPerTaskmanager - the number of slots per Flink task manager
+   */
+  static Flink setUp(job, String jobName, SDK sdk, Integer workerCount, Integer slotsPerTaskmanager = 1) {
Review comment:
There is one important thing to reconsider: sometimes users of this
interface may need a cluster without the SDK harness and the job server. For
that reason, I don't think we can do all of those things (prepare harness,
prepare job server, set up, tear down) in a single setUp method.
This is something to rethink carefully, but off the top of my head the
interface could have the following methods:
- create(job, jobName): this would simply initialize the object. If the
method contains no logic beyond simple field initialization, it could just
as well be a public constructor, imo,
- prepareSDKHarness, prepareJobServer: these should be separate public
methods (?). Note that a single Flink cluster may require multiple types of
SDK harnesses in the future (Python/Java/Go at the same time),
- setupFlinkCluster: specifies the start and teardown steps (teardown stays
private).
Alternatively, we could have a builder class (?) that would allow
configuring whether the cluster should run with or without the harness and
job server.
Let me know what you think and let's agree on something, wdyt?
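To make the suggestion above concrete, here is a minimal Groovy sketch of what the split interface could look like. This is purely illustrative: all method bodies, the fluent return style, and the varargs parameter are assumptions, not the actual implementation in .test-infra/jenkins/Flink.groovy.

```groovy
import CommonTestProperties.SDK

// Hypothetical sketch of the proposed split interface; every detail here
// is an assumption based on the review comment, not real Beam code.
class Flink {
  private def job
  private String jobName

  // Simple initialization only; as noted above, this could equally well
  // be a public constructor.
  static Flink create(job, String jobName) {
    Flink flink = new Flink()
    flink.job = job
    flink.jobName = jobName
    return flink
  }

  // Separate public steps, so callers can opt out of the harness and the
  // job server. Varargs because one cluster may eventually need several
  // harness types (Python/Java/Go) at the same time.
  Flink prepareSDKHarness(SDK... sdks) {
    // build and push one harness image per requested SDK
    return this
  }

  Flink prepareJobServer() {
    // build and push the Flink job server image
    return this
  }

  // Registers the cluster start steps and the teardown steps;
  // teardown itself stays private.
  Flink setupFlinkCluster(Integer workerCount, Integer slotsPerTaskManager = 1) {
    // start the cluster, then register cleanup
    registerTeardown()
    return this
  }

  private void registerTeardown() {
    // register cleanup steps on the Jenkins job
  }
}
```

With this shape, a test that needs a bare cluster calls only create(...).setupFlinkCluster(...), while portability load tests chain in prepareSDKHarness and prepareJobServer; the builder alternative mentioned above would express the same choice through configuration flags instead of method chaining.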
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 288862)
Time Spent: 2h 40m (was: 2.5h)
> Simpler Flink cluster set up in load tests
> ------------------------------------------
>
> Key: BEAM-7872
> URL: https://issues.apache.org/jira/browse/BEAM-7872
> Project: Beam
> Issue Type: Sub-task
> Components: testing
> Reporter: Kamil Wasilewski
> Assignee: Kamil Wasilewski
> Priority: Major
> Time Spent: 2h 40m
> Remaining Estimate: 0h
>
> Creating a new load test running on the Flink runner could be made easier by
> providing a single `setUp` function that encapsulates the process of creating
> a Flink cluster and registering teardown steps.
--
This message was sent by Atlassian JIRA
(v7.6.14#76016)