Github user sethah commented on a diff in the pull request:

    https://github.com/apache/spark/pull/12660#discussion_r61001243
  
    --- Diff: python/pyspark/ml/tests.py ---
    @@ -929,6 +932,50 @@ def test_apply_binary_term_freqs(self):
                                        ": expected " + str(expected[i]) + ", got " + str(features[i]))
     
     
    +class ALSTest(PySparkTestCase):
    +
    +    def test_storage_levels(self):
    --- End diff ---
    
    I'm less familiar with the RDD storage API, but this also seems like it
might be redundant. We already know that setting the param takes effect on
the Scala side. We don't need to check that Python successfully passes the
param to the JVM, since that is tested elsewhere, and I assume the storage
level transfer back to Python is also tested elsewhere. Thoughts?


