[ 
https://issues.apache.org/jira/browse/BEAM-9085?focusedWorklogId=419351&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-419351
 ]

ASF GitHub Bot logged work on BEAM-9085:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 09/Apr/20 11:36
            Start Date: 09/Apr/20 11:36
    Worklog Time Spent: 10m 
      Work Description: kamilwu commented on pull request #11092: [BEAM-9085] Fix performance regression in SyntheticSource
URL: https://github.com/apache/beam/pull/11092#discussion_r406142245
 
 

 ##########
 File path: sdks/python/apache_beam/testing/synthetic_pipeline.py
 ##########
 @@ -61,6 +65,35 @@
   np = None
 
 
+class _Random(Random):
+  """A subclass of `random.Random` from the Python Standard Library that
+  provides a method returning random bytes of arbitrary length.
+  """
+
+  # `numpy.random.RandomState` does not provide a `random()` method, so
+  # callers use `random_sample()`; we keep this alias for compatibility.
+  random_sample = Random.random
+
+  def bytes(self, length):
+    """Returns random bytes.
+
+    Args:
+      length (int): Number of random bytes.
+    """
+    n = length // 8 + 1
+    # pylint: disable=map-builtin-not-iterating
+    return struct.pack(
+        '{}Q'.format(n),
+        *map(self.getrandbits, itertools.repeat(64, n)))[:length]
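 
 A minimal usage sketch of the class above (illustrative only; the seed and
 lengths are arbitrary, not taken from the patch):
 
   rng = _Random(42)             # seeded like random.Random
   data = rng.bytes(10)          # n = 10 // 8 + 1 == 2 chunks packed, then sliced to 10
   assert len(data) == 10
   sample = rng.random_sample()  # alias for Random.random(), in [0.0, 1.0)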
 
 Review comment:
   How about not using chunks at all? I did some tests again and, surprisingly,
   it looks like we don't need them.
   Here's a test with a set of different chunk sizes:
   `for CHUNK_SIZE in {4,8,32,64}; do python -m timeit -s "import random; import sys; chunk_size=$CHUNK_SIZE; len=10; num_chunks=len//chunk_size+1" 'b"".join([random.getrandbits(chunk_size * 8).to_bytes(chunk_size, sys.byteorder) for _ in range(num_chunks)])[:len]'; done`
   
   Results (chunk sizes 4, 8, 32, 64, in order):
   
   for len==10:
   CHUNK_SIZE=4:  200000 loops, best of 5: 1.62 usec per loop
   CHUNK_SIZE=8:  200000 loops, best of 5: 1.34 usec per loop
   CHUNK_SIZE=32: 200000 loops, best of 5: 1.02 usec per loop
   CHUNK_SIZE=64: 200000 loops, best of 5: 1.19 usec per loop
   
   for len==1000:
   CHUNK_SIZE=4:  5000 loops, best of 5: 87.7 usec per loop
   CHUNK_SIZE=8:  5000 loops, best of 5: 50.7 usec per loop
   CHUNK_SIZE=32: 20000 loops, best of 5: 16.7 usec per loop
   CHUNK_SIZE=64: 20000 loops, best of 5: 11 usec per loop
   
   And without chunks:
   `python -m timeit -s "import random; import sys; len=10" 'random.getrandbits(len * 8).to_bytes(len, sys.byteorder)'`
   
   for len==10:
   1000000 loops, best of 5: 358 nsec per loop
   
   for len==1000:
   50000 loops, best of 5: 4.5 usec per loop
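   
   For comparison, a chunk-free `bytes()` method would reduce to a single call.
   A minimal sketch (the class name `_RandomNoChunks` is hypothetical, and
   `length >= 1` is assumed, since `getrandbits(0)` raises ValueError before
   Python 3.9):
   
   `import sys
   from random import Random
   
   class _RandomNoChunks(Random):
     def bytes(self, length):
       # Draw all bits in one getrandbits() call and serialize them; the
       # byte order is irrelevant for random payloads.
       return self.getrandbits(length * 8).to_bytes(length, sys.byteorder)`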
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 419351)
    Time Spent: 8h 20m  (was: 8h 10m)

> Performance regression in np.random.RandomState() skews performance test 
> results across Python 2/3 on Dataflow
> --------------------------------------------------------------------------------------------------------------
>
>                 Key: BEAM-9085
>                 URL: https://issues.apache.org/jira/browse/BEAM-9085
>             Project: Beam
>          Issue Type: Bug
>          Components: testing
>            Reporter: Kamil Wasilewski
>            Assignee: Kamil Wasilewski
>            Priority: Major
>          Time Spent: 8h 20m
>  Remaining Estimate: 0h
>
> Tests show that the performance of core Beam operations in Python 3.x on 
> Dataflow can be a few times slower than in Python 2.7. We should investigate 
> the cause of the problem.
> Currently, we have one ParDo test that is run both in Py3 and Py2 [1]. A 
> dashboard with runtime results can be found here [2].
> [1] sdks/python/apache_beam/testing/load_tests/pardo_test.py
> [2] https://apache-beam-testing.appspot.com/explore?dashboard=5678187241537536



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
