[ https://issues.apache.org/jira/browse/BEAM-7214?focusedWorklogId=238353&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-238353 ]

ASF GitHub Bot logged work on BEAM-7214:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 07/May/19 10:08
            Start Date: 07/May/19 10:08
    Worklog Time Spent: 10m 
      Work Description: mxm commented on pull request #8511: [BEAM-7214] Run 
Python validates runner tests on Spark
URL: https://github.com/apache/beam/pull/8511#discussion_r281561719
 
 

 ##########
 File path: sdks/python/apache_beam/runners/portability/spark_runner_test.py
 ##########
 @@ -0,0 +1,148 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+from __future__ import absolute_import
+from __future__ import print_function
+
+import argparse
+import logging
+import sys
+import unittest
+from shutil import rmtree
+from tempfile import mkdtemp
+
+from apache_beam.options.pipeline_options import DebugOptions
+from apache_beam.options.pipeline_options import PortableOptions
+from apache_beam.runners.portability import portable_runner
+from apache_beam.runners.portability import portable_runner_test
+
+if __name__ == '__main__':
+  # Run as
+  #
+  # python -m apache_beam.runners.portability.spark_runner_test \
+  #     --spark_job_server_jar=/path/to/job_server.jar \
+  #     [SparkRunnerTest.test_method, ...]
+
+  parser = argparse.ArgumentParser(add_help=True)
+  parser.add_argument('--spark_job_server_jar',
+                      help='Path to the job server jar used to submit jobs.')
+  parser.add_argument('--environment_type', default='docker',
+                      help='Environment type: docker or process.')
+  parser.add_argument('--environment_config', help='Environment config.')
+  parser.add_argument('--extra_experiments', default=[], action='append',
+                      help='Additional Beam experiments to enable.')
+  known_args, args = parser.parse_known_args(sys.argv)
+  sys.argv = args
+
+  spark_job_server_jar = known_args.spark_job_server_jar
+  environment_type = known_args.environment_type.lower()
+  environment_config = (
+      known_args.environment_config if known_args.environment_config else None)
+  extra_experiments = known_args.extra_experiments
+
+  # This is defined here to only be run when we invoke this file explicitly.
+  class SparkRunnerTest(portable_runner_test.PortableRunnerTest):
+    _use_grpc = True
+    _use_subprocesses = True
+
+    @classmethod
+    def _subprocess_command(cls, job_port, expansion_port):
+      # mkdtemp only reserves a unique path: the directory is removed again
+      # at the end of this method so the job server can recreate it (via
+      # --artifacts-dir) and populate it itself.
+      tmp_dir = mkdtemp(prefix='sparktest')
+
+      try:
+        return [
+            'java',
+            '-Dbeam.spark.test.reuseSparkContext=true',
+            '-jar', spark_job_server_jar,
+            '--spark-master-url', 'local',
+            '--artifacts-dir', tmp_dir,
+            '--job-port', str(job_port),
+            '--artifact-port', '0',
+            '--expansion-port', str(expansion_port),
+        ]
+      finally:
+        rmtree(tmp_dir)
+
+    @classmethod
+    def get_runner(cls):
+      return portable_runner.PortableRunner()
+
+    def create_options(self):
+      options = super(SparkRunnerTest, self).create_options()
+      options.view_as(DebugOptions).experiments = [
+          'beam_fn_api'] + extra_experiments
+      options._all_options['parallelism'] = 1
+      options._all_options['shutdown_sources_on_final_watermark'] = True
 
 Review comment:
   You probably don't need those for Spark.
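  If those two `_all_options` entries are indeed carryovers from the Flink
  runner test ('parallelism' and 'shutdown_sources_on_final_watermark' are
  Flink runner options, which the reviewer suggests aren't needed for Spark),
  the override could shrink to just the experiments. A minimal sketch of that
  suggestion, not necessarily what was committed:

    def create_options(self):
      options = super(SparkRunnerTest, self).create_options()
      # Keep only the portability experiments; drop the Flink-specific
      # 'parallelism' and 'shutdown_sources_on_final_watermark' settings.
      options.view_as(DebugOptions).experiments = (
          ['beam_fn_api'] + extra_experiments)
      return options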
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 238353)
    Time Spent: 20m  (was: 10m)

> Run Python validates runner tests on Spark
> ------------------------------------------
>
>                 Key: BEAM-7214
>                 URL: https://issues.apache.org/jira/browse/BEAM-7214
>             Project: Beam
>          Issue Type: New Feature
>          Components: runner-spark
>            Reporter: Kyle Weaver
>            Assignee: Kyle Weaver
>            Priority: Major
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> We will need something like FlinkRunnerTest [1] to verify that the Spark 
> runner can run Python pipelines correctly.
> [1] https://github.com/apache/beam/blob/master/sdks/python/apache_beam/runners/portability/flink_runner_test.py
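
For context, the shape shared by flink_runner_test.py [1] and the
spark_runner_test.py added in the PR above, distilled to its hooks (a sketch;
the names are taken from the diff and may differ from what was finally
merged):

    from apache_beam.runners.portability import portable_runner
    from apache_beam.runners.portability import portable_runner_test

    class SparkRunnerTest(portable_runner_test.PortableRunnerTest):
      # The inherited test methods run real pipelines over gRPC against a
      # job server launched as a local subprocess.
      _use_grpc = True
      _use_subprocesses = True

      @classmethod
      def _subprocess_command(cls, job_port, expansion_port):
        # Runner-specific: return the command line that starts the job
        # server, e.g. ['java', '-jar', <job server jar>, '--job-port', ...].
        raise NotImplementedError

      @classmethod
      def get_runner(cls):
        return portable_runner.PortableRunner()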



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
