[
https://issues.apache.org/jira/browse/BEAM-9085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17027900#comment-17027900
]
Valentyn Tymofieiev commented on BEAM-9085:
-------------------------------------------
It appears that the increase in time is likely caused by either generating the
synthetic input or reading it back.
The problem can be reproduced on the Direct runner as well; I don't think it is
Dataflow-specific.
The following command takes 14 seconds on Python 2 versus 40 seconds on Python 3
on my machine.
{noformat}
python setup.py nosetests \
--test-pipeline-options="
--iterations=10
--number_of_counters=1
--number_of_counter_operations=1
--project=big-query-project
--publish_to_big_query=false
--metrics_dataset=python_load_tests
--metrics_table=pardo
--input_options='{
\"num_records\": 200000,
\"key_size\": 10,
\"value_size\":90,
\"bundle_size_distribution_type\": \"const\",
\"bundle_size_distribution_param\": 1,
\"force_initial_num_bundles\": 0
}'" \
--tests apache_beam.testing.load_tests.pardo_test
{noformat}
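To help isolate the first hypothesis (synthetic input generation), a quick
micro-benchmark can be run under both interpreters outside of Beam entirely.
This is a minimal sketch, not the SDK's SyntheticSource implementation:
make_record is a hypothetical stand-in, and only the record shape mirrors the
--input_options above (200000 records, 10-byte keys, 90-byte values).
{code:python}
# Minimal sketch (NOT the Beam load-test harness): times raw generation of
# synthetic records on the current interpreter. Run it with both python2
# and python3 to see whether generation alone accounts for the gap.
import os
import time

NUM_RECORDS = 200000  # num_records from --input_options
KEY_SIZE = 10         # key_size
VALUE_SIZE = 90       # value_size


def make_record(_):
    # Hypothetical stand-in for the SDK's synthetic record generation:
    # one (key, value) pair of random bytes.
    return os.urandom(KEY_SIZE), os.urandom(VALUE_SIZE)


start = time.time()
records = [make_record(i) for i in range(NUM_RECORDS)]
print('Generated %d records in %.2f seconds'
      % (len(records), time.time() - start))
{code}
If this micro-benchmark shows a comparable Py2/Py3 gap, the regression is in
input generation; if not, reading the input back (coders, iteration) is the
more likely suspect.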
> Investigate performance difference between Python 2/3 on Dataflow
> -----------------------------------------------------------------
>
> Key: BEAM-9085
> URL: https://issues.apache.org/jira/browse/BEAM-9085
> Project: Beam
> Issue Type: Bug
> Components: sdk-py-core
> Reporter: Kamil Wasilewski
> Assignee: Valentyn Tymofieiev
> Priority: Major
>
> Tests show that the performance of core Beam operations in Python 3.x on
> Dataflow can be a few times slower than in Python 2.7. We should investigate
> the cause of the problem.
> Currently, we have one ParDo test that runs in both Py2 and Py3 [1]. A
> dashboard with runtime results can be found here [2].
> [1] sdks/python/apache_beam/testing/load_tests/pardo_test.py
> [2] https://apache-beam-testing.appspot.com/explore?dashboard=5678187241537536