kamilwu commented on a change in pull request #12542:
URL: https://github.com/apache/beam/pull/12542#discussion_r471557407



##########
File path: .test-infra/jenkins/job_LoadTests_coGBK_Python.groovy
##########
@@ -147,25 +147,30 @@ def loadTestConfigurations = { datasetName ->
         autoscaling_algorithm: 'NONE'
       ]
     ],
-  ].each { test -> test.pipelineOptions.putAll(additionalPipelineArgs) }
+  ]
+  .each { test -> test.pipelineOptions.putAll(additionalPipelineArgs) }
+  .each { test -> (mode) != 'streaming' ?: addStreamingOptions(test) }
 }
 
-def batchLoadTestJob = { scope, triggeringContext ->
-  scope.description('Runs Python CoGBK load tests on Dataflow runner in batch mode')
-  commonJobProperties.setTopLevelMainJobProperties(scope, 'master', 240)
+def addStreamingOptions(test) {
+  // Use highmem workers to prevent out of memory issues.
+  test.pipelineOptions << [streaming: null,
+    worker_machine_type: 'n1-highmem-4'

Review comment:
      My pipelines kept crashing because of OutOfMemory exceptions, so I followed the advice given in this article: https://cloud.google.com/community/tutorials/dataflow-debug-oom-conditions
   
   The other solution suggested by the article was to use fewer threads per worker, but I couldn't find a pipeline option responsible for that in the Python SDK (it exists in the Java SDK, though).
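   To make the effect of the new chain concrete, here is a minimal, self-contained Groovy sketch of how `addStreamingOptions` merges the extra options into a test's `pipelineOptions`. The test map, mode and option values below are made up for illustration; the real configurations are built in job_LoadTests_coGBK_Python.groovy by the job DSL.

```groovy
// Simplified, hypothetical illustration; not the actual Jenkins job DSL code.
def addStreamingOptions(test) {
  // Use highmem workers to prevent out-of-memory issues.
  // 'streaming: null' mirrors the diff above; turning a null value into a
  // bare CLI flag is left to the job's option-rendering helper.
  test.pipelineOptions << [streaming: null,
    worker_machine_type: 'n1-highmem-4'
  ]
}

def mode = 'streaming'  // hypothetical; the job decides between batch and streaming
def tests = [
  [title: 'CoGBK Python load test',
   pipelineOptions: [project: 'some-project']]  // hypothetical base options
]

// Same Elvis-operator trick as in the diff: only streaming runs get the
// extra options merged in.
tests.each { test -> mode != 'streaming' ?: addStreamingOptions(test) }

// Prints: [project:some-project, streaming:null, worker_machine_type:n1-highmem-4]
println tests[0].pipelineOptions
```

   As for the thread-count alternative: if I remember correctly, the Java SDK exposes it as the `--numberOfWorkerHarnessThreads` Dataflow option, and I couldn't find a corresponding worker option in the Python SDK, which is why I went with highmem machines instead.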



