Github user justinuang commented on the pull request:
https://github.com/apache/spark/pull/8662#issuecomment-140223207
Looks like your intuition was right. The second run is slightly faster,
so I ran the loop twice and took the second run's numbers.
Here are the updated numbers:
With fix
Number of udfs: 0 - 0.0953350067139
Number of udfs: 1 - 1.73201990128
Number of udfs: 2 - 3.41883206367
Number of udfs: 3 - 5.24572992325
Number of udfs: 4 - 6.83000802994
Number of udfs: 5 - 8.59465384483
Without fix
Number of udfs: 0 - 0.0891687870026
Number of udfs: 1 - 1.53674888611
Number of udfs: 2 - 4.44895505905
Number of udfs: 3 - 10.0561971664
Number of udfs: 4 - 21.5314221382
Number of udfs: 5 - 43.887141943
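The scaling difference is easier to see as run-to-run ratios. A quick sanity check, using rounded copies of the numbers above (the variable names and script are mine, not from the PR):

```python
# Benchmark timings quoted above, rounded (seconds per run).
with_fix = [0.0953, 1.732, 3.419, 5.246, 6.830, 8.595]
without_fix = [0.0892, 1.537, 4.449, 10.056, 21.531, 43.887]

def ratios(ts):
    """Ratio of each timing to the previous one, skipping the 0-udf baseline."""
    return [b / a for a, b in zip(ts[1:], ts[2:])]

# With the fix, ratios shrink toward 1: cost grows roughly linearly per udf.
print(ratios(with_fix))
# Without the fix, each extra udf roughly doubles the runtime.
print(ratios(without_fix))
```

So without the fix the cost of adding one more udf compounds multiplicatively, while with the fix each udf adds a roughly constant increment.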
It does look like there's a tiny performance drop for 1 udf. My guess is
that it's slightly slower because the initial approach was slightly cheating
on CPU time: it had 3 threads that could do computation at once. However,
that breaks the RDD abstraction that each partition should get only one
thread for CPU work.