[ https://issues.apache.org/jira/browse/BEAM-6908?focusedWorklogId=240445&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-240445 ]

ASF GitHub Bot logged work on BEAM-6908:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 10/May/19 22:02
            Start Date: 10/May/19 22:02
    Worklog Time Spent: 10m 
      Work Description: tvalentyn commented on pull request #8518: [BEAM-6908] Refactor Python performance test groovy file for easy configuration
URL: https://github.com/apache/beam/pull/8518#discussion_r283054650
 
 

 ##########
 File path: .test-infra/jenkins/job_PerformanceTests_Python.groovy
 ##########
 @@ -18,46 +18,107 @@
 
 import CommonJobProperties as commonJobProperties
 
-// This job runs the Beam Python performance tests on PerfKit Benchmarker.
-job('beam_PerformanceTests_Python'){
-  // Set default Beam job properties.
-  commonJobProperties.setTopLevelMainJobProperties(delegate)
-
-  // Run job in postcommit every 6 hours, don't trigger every push.
-  commonJobProperties.setAutoJob(
-      delegate,
-      'H */6 * * *')
-
-  // Allows triggering this build against pull requests.
-  commonJobProperties.enablePhraseTriggeringFromPullRequest(
-      delegate,
-      'Python SDK Performance Test',
-      'Run Python Performance Test')
-
-  def pipelineArgs = [
-      project: 'apache-beam-testing',
-      staging_location: 'gs://temp-storage-for-end-to-end-tests/staging-it',
-      temp_location: 'gs://temp-storage-for-end-to-end-tests/temp-it',
-      output: 'gs://temp-storage-for-end-to-end-tests/py-it-cloud/output'
-  ]
-  def pipelineArgList = []
-  pipelineArgs.each({
-    key, value -> pipelineArgList.add("--$key=$value")
-  })
-  def pipelineArgsJoined = pipelineArgList.join(',')
-
-  def argMap = [
-      beam_sdk                 : 'python',
-      benchmarks               : 'beam_integration_benchmark',
-      bigquery_table           : 'beam_performance.wordcount_py_pkb_results',
-      beam_it_class            : 'apache_beam.examples.wordcount_it_test:WordCountIT.test_wordcount_it',
-      beam_it_module           : 'sdks/python',
-      beam_prebuilt            : 'true',  // skip beam prebuild
-      beam_python_sdk_location : 'build/apache-beam.tar.gz',
-      beam_runner              : 'TestDataflowRunner',
-      beam_it_timeout          : '1200',
-      beam_it_args             : pipelineArgsJoined,
-  ]
-
-  commonJobProperties.buildPerformanceTest(delegate, argMap)
+
+class PerformanceTestConfigurations {
+  String jobName
+  String jobDescription
+  String jobTriggerPhrase
+  String buildSchedule = 'H */6 * * *'  // every 6 hours
+  String benchmarkName = 'beam_integration_benchmark'
+  String sdk = 'python'
+  String bigqueryTable
+  String itClass
+  String itModule
 
 Review comment:
   Ok, but how do we know which Gradle module to select? I see that you used a different value for the Py2 and Py3 benchmarks; how did you pick those specific ones? How does a person writing a new benchmark decide how to fill this value?
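
For context, a minimal hypothetical sketch (not part of this PR) of how a benchmark author might fill the new configuration class, reusing values from the deleted Py2 job above. The itModule value here is assumed to be the Gradle project path of the SDK under test; having that selection rule written down is what this comment asks for.

    class PerformanceTestConfigurations {
      String jobName
      String jobDescription
      String jobTriggerPhrase
      String buildSchedule = 'H */6 * * *'   // default: every 6 hours
      String benchmarkName = 'beam_integration_benchmark'
      String sdk = 'python'
      String bigqueryTable
      String itClass
      String itModule
    }

    // Hypothetical Py2 instance; the values come from the old job deleted in
    // this diff. 'sdks/python' is assumed to be the Gradle project directory
    // of the SDK under test, which is the choice being questioned here.
    def py2 = new PerformanceTestConfigurations(
        jobName: 'beam_PerformanceTests_Python',
        jobTriggerPhrase: 'Run Python Performance Test',
        bigqueryTable: 'beam_performance.wordcount_py_pkb_results',
        itClass: 'apache_beam.examples.wordcount_it_test:WordCountIT.test_wordcount_it',
        itModule: 'sdks/python')

    assert py2.buildSchedule == 'H */6 * * *'   // unset fields keep defaults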
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 240445)
    Time Spent: 12h 20m  (was: 12h 10m)

> Add Python3 performance benchmarks
> ----------------------------------
>
>                 Key: BEAM-6908
>                 URL: https://issues.apache.org/jira/browse/BEAM-6908
>             Project: Beam
>          Issue Type: Sub-task
>          Components: testing
>            Reporter: Mark Liu
>            Assignee: Mark Liu
>            Priority: Major
>          Time Spent: 12h 20m
>  Remaining Estimate: 0h
>
> Similar to
> [beam_PerformanceTests_Python|https://builds.apache.org/view/A-D/view/Beam/view/PerformanceTests/job/beam_PerformanceTests_Python/],
> we want to have a Python3 benchmark running on Jenkins to detect performance
> regressions during code adoption.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
