[ 
https://issues.apache.org/jira/browse/BEAM-9547?focusedWorklogId=610301&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-610301
 ]

ASF GitHub Bot logged work on BEAM-9547:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 14/Jun/21 07:37
            Start Date: 14/Jun/21 07:37
    Worklog Time Spent: 10m 
      Work Description: TheNeuralBit commented on a change in pull request 
#15002:
URL: https://github.com/apache/beam/pull/15002#discussion_r650241149



##########
File path: sdks/python/apache_beam/dataframe/frames_test.py
##########
@@ -1607,6 +1607,30 @@ def test_sample_with_missing_weights(self):
     self.assertEqual(series_result.name, "GDP")
     self.assertEqual(set(series_result.index), set(["Nauru", "Iceland"]))
 
+  def test_sample_with_weights_distribution(self):
+    num_other_elements = 100
+    num_runs = 20
+
+    def sample_many_times(s, weights):
+      all = None
+      for _ in range(num_runs):
+        sampled = s.sample(weights=weights)
+        if all is None:
+          all = sampled
+        else:
+          all = all.append(sampled)
+      return all.sum()
+
+    result = self._run_test(
+        sample_many_times,
+        # The first element is 1, the rest are all 0.  This means that when
+        # we sum all the sampled elements (above), the result should be the
+        # number of times the first element was sampled.
+        pd.Series([1] + [0] * num_other_elements),
+        # Pick the first element about 20% of the time.
+        pd.Series([0.2] + [.8 / num_other_elements] * num_other_elements))
+
+    self.assertTrue(0 < result < num_runs / 2, result)

Review comment:
       Discussed this some offline; I think the probability of a flake here is 
too high. `result` should follow a binomial distribution with n=20, p=0.2.
   
   Then P(result=0) alone is 0.8**20 ~= 1e-2. I don't think this approach can 
totally eliminate the possibility of a flake, but we should probably get it at 
least below 1e-3. I'm not sure we can tweak the constants enough to make that 
work while still getting a signal from this test, given that we also don't want 
to make `n` too large, since that would slow the test down.
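
A quick sanity check of these numbers (a minimal sketch, assuming scipy is 
available and that `result` is approximately Binomial(n=20, p=0.2); not part of 
the PR itself):

```python
# Rough estimate of the flake probability of the test as written above.
from scipy.stats import binom

num_runs, p = 20, 0.2
# The test asserts 0 < result < num_runs / 2, so it flakes when
# result == 0 or result >= 10.
p_flake = binom.pmf(0, num_runs, p) + binom.sf(9, num_runs, p)
print(f"P(result == 0) ~= {binom.pmf(0, num_runs, p):.1e}")  # ~1.2e-02
print(f"total flake probability ~= {p_flake:.1e}")
```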

##########
File path: sdks/python/apache_beam/dataframe/frames_test.py
##########
@@ -1607,6 +1608,29 @@ def test_sample_with_missing_weights(self):
     self.assertEqual(series_result.name, "GDP")
     self.assertEqual(set(series_result.index), set(["Nauru", "Iceland"]))
 
+  def test_sample_with_weights_distribution(self):
+    target_prob = 0.25
+    num_samples = 100
+    num_targets = 200
+    num_other_elements = 10000
+
+    target_weight = target_prob / num_targets
+    other_weight = (1 - target_prob) / num_other_elements
+    self.assertTrue(target_weight > other_weight * 10, "weights too close")
+
+    result = self._run_test(
+        lambda s,
+        weights: s.sample(n=num_samples, weights=weights).sum(),
+        # The first elements are 1, the rest are all 0.  This means that when
+        # we sum all the sampled elements (above), the result should be the
+        # number of times the first elements (aka targets) were sampled.
+        pd.Series([1] * num_targets + [0] * num_other_elements),
+        pd.Series([target_weight] * num_targets +
+                  [other_weight] * num_other_elements))
+
+    expected = num_samples * target_prob
+    self.assertTrue(expected / 3 < result < expected * 2, (expected, result))

Review comment:
       ```suggestion
       # Note: the probabilistic nature of this test means it will flake
       # roughly once in every 100,000 runs.
       expected = num_samples * target_prob
       self.assertTrue(expected / 3 < result < expected * 2, (expected, result))
   ```
   
   Let's add a note about the (low) flake probability here
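
For reference, the ~1 in 100,000 figure can be sanity-checked with a binomial 
approximation (a minimal sketch assuming scipy; the actual draw is a weighted 
sample without replacement, so this is only approximate):

```python
# Rough flake estimate for the revised test, treating result as
# Binomial(n=100, p=0.25).
from scipy.stats import binom

num_samples, target_prob = 100, 0.25
expected = num_samples * target_prob  # 25
# The test asserts expected / 3 < result < expected * 2, so it flakes
# when result <= 8 or result >= 50.
p_flake = (binom.cdf(8, num_samples, target_prob) +
           binom.sf(49, num_samples, target_prob))
print(f"approximate flake probability ~= {p_flake:.1e}")  # on the order of 1e-5
```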




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 610301)
    Time Spent: 119h  (was: 118h 50m)

> Implement all pandas operations (or raise WontImplementError)
> -------------------------------------------------------------
>
>                 Key: BEAM-9547
>                 URL: https://issues.apache.org/jira/browse/BEAM-9547
>             Project: Beam
>          Issue Type: Improvement
>          Components: sdk-py-core
>            Reporter: Brian Hulette
>            Assignee: Robert Bradshaw
>            Priority: P2
>              Labels: dataframe-api
>          Time Spent: 119h
>  Remaining Estimate: 0h
>
> We should have an implementation for every DataFrame, Series, and GroupBy 
> method. Everything that's not possible to implement should get a default 
> implementation that raises WontImplementError.
> See https://github.com/apache/beam/pull/10757#discussion_r389132292
> Progress at the individual operation level is tracked in a 
> [spreadsheet|https://docs.google.com/spreadsheets/d/1hHAaJ0n0k2tw465ORs5tfdy4Lg0DnGWIQ53cLjAhel0/edit];
> consider requesting edit access if you'd like to help out.
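
As a rough illustration of the pattern the issue describes (a minimal, 
self-contained sketch with placeholder names, not Beam's actual frame_base 
code):

```python
class WontImplementError(NotImplementedError):
  """Placeholder for the error raised for operations Beam will not implement."""


def _wont_implement(op_name, reason):
  """Build a default method that rejects an unsupported pandas operation."""
  def _method(self, *args, **kwargs):
    raise WontImplementError(f"'{op_name}' is not supported because {reason}")
  return _method


class DeferredSeriesSketch:
  # Hypothetical example: an order-sensitive operation gets a raising default.
  to_records = _wont_implement("to_records", "it is sensitive to element order")
```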



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
