Github user Stibbons commented on the issue:

    https://github.com/apache/spark/pull/14863
  
    I agree. I would prefer if the Spark examples also promoted good Python 
practice, i.e., replacing `map` and `filter` with list comprehensions 
(`reduce` has no comprehension equivalent). Even though the `map`/`filter` 
syntax might look closer to its RDD equivalents, they are not the same. 
I am not sure there is a consensus on this point in the "data science" 
community, but most Pythonistas now happily promote comprehensions over 
`map`/`filter`. Most of the time a comprehension is faster, especially when 
the result of `map` is converted to a list afterwards.
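
    A minimal sketch of the rewrite being proposed (a made-up example, not taken from the Spark codebase):

```python
# Hypothetical example: the same transformation written in both styles.
values = [1, 2, 3, 4, 5]

# map/filter style: lambdas plus an explicit list() call on Python 3
squares_of_evens = list(map(lambda x: x * x,
                            filter(lambda x: x % 2 == 0, values)))

# comprehension style: one expression, no lambdas, no list() call
squares_of_evens_2 = [x * x for x in values if x % 2 == 0]

print(squares_of_evens)    # [4, 16]
print(squares_of_evens_2)  # [4, 16]
```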
    `map` may be faster than a comprehension when no lambda is involved, and 
it is lazy on Python 3 (one can use a [generator 
expression](http://stackoverflow.com/questions/364802/generator-comprehension#answer-364818)
 on Python 2 or 3 to get the same laziness), so one should be aware of when 
to use each.
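
    To illustrate the laziness point (again a made-up snippet, not from the examples):

```python
# On Python 3, map() is lazy; a generator expression gives the same
# on-demand evaluation and also works on Python 2.
values = range(5)

lazy_map = map(lambda x: x * 10, values)   # lazy on Python 3
lazy_genexp = (x * 10 for x in values)     # lazy on Python 2 and 3

# Items are produced only when consumed:
print(next(lazy_genexp))  # 0
print(list(lazy_map))     # [0, 10, 20, 30, 40]
```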
    
    Long story short: if the Spark community agrees, I can look for these 
`map`/`filter` calls in the examples and replace them with comprehensions.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
