GitHub user davies opened a pull request:
https://github.com/apache/spark/pull/2351
[SPARK-3478] [PySpark] Profile the Python tasks
This patch adds profiling support for PySpark. It shows the profiling results
before the driver exits; here is one example (a minimal driver sketch that
could produce such a report appears after the output):
```
============================================================
Profile of RDD<id=3>
============================================================
         5146507 function calls (5146487 primitive calls) in 71.094 seconds

   Ordered by: internal time, cumulative time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
  5144576   68.331    0.000   68.331    0.000 statcounter.py:44(merge)
       20    2.735    0.137   71.071    3.554 statcounter.py:33(__init__)
       20    0.017    0.001    0.017    0.001 {cPickle.dumps}
     1024    0.003    0.000    0.003    0.000 t.py:16(<lambda>)
       20    0.001    0.000    0.001    0.000 {reduce}
       21    0.001    0.000    0.001    0.000 {cPickle.loads}
       20    0.001    0.000    0.001    0.000 copy_reg.py:95(_slotnames)
       41    0.001    0.000    0.001    0.000 serializers.py:461(read_int)
       40    0.001    0.000    0.002    0.000 serializers.py:179(_batched)
       62    0.000    0.000    0.000    0.000 {method 'read' of 'file' objects}
       20    0.000    0.000   71.072    3.554 rdd.py:863(<lambda>)
       20    0.000    0.000    0.001    0.000 serializers.py:198(load_stream)
    40/20    0.000    0.000   71.072    3.554 rdd.py:2093(pipeline_func)
       41    0.000    0.000    0.002    0.000 serializers.py:130(load_stream)
       40    0.000    0.000   71.072    1.777 rdd.py:304(func)
       20    0.000    0.000   71.094    3.555 worker.py:82(process)
       40    0.000    0.000    0.001    0.000 rdd.py:741(func)
       20    0.000    0.000    0.018    0.001 serializers.py:137(_write_with_length)
       20    0.000    0.000    0.020    0.001 serializers.py:195(dump_stream)
       20    0.000    0.000    0.000    0.000 serializers.py:201(_load_stream_without_unbatching)
       20    0.000    0.000    0.000    0.000 {hasattr}
       41    0.000    0.000    0.002    0.000 serializers.py:145(_read_with_length)
       40    0.000    0.000    0.000    0.000 {built-in method from_iterable}
       20    0.000    0.000    0.000    0.000 serializers.py:468(write_int)
       20    0.000    0.000    0.018    0.001 serializers.py:355(dumps)
       20    0.000    0.000    0.020    0.001 serializers.py:126(dump_stream)
       20    0.000    0.000    0.000    0.000 {method 'get' of 'dictproxy' objects}
       20    0.000    0.000    0.000    0.000 rdd.py:291(func)
       40    0.000    0.000    0.000    0.000 {method 'write' of 'file' objects}
       20    0.000    0.000    0.000    0.000 {_struct.pack}
       21    0.000    0.000    0.000    0.000 {_struct.unpack}
       20    0.000    0.000    0.000    0.000 {iter}
       20    0.000    0.000    0.000    0.000 {method 'append' of 'list' objects}
       20    0.000    0.000    0.000    0.000 {len}
       20    0.000    0.000    0.000    0.000 {next}
       20    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
```
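For context, here is a minimal sketch of a driver that could produce a report
like the one above, assuming the configuration key from this patch; the master
URL, app name, and the job itself are illustrative:
```python
from pyspark import SparkConf, SparkContext

# Hedged sketch: "spark.python.profile" comes from this patch; everything
# else here (master, app name, job) is an illustrative assumption.
conf = SparkConf().set("spark.python.profile", "true")
sc = SparkContext("local[4]", "profile-demo", conf=conf)

rdd = sc.parallelize(range(1024), 4)
print(rdd.map(lambda x: x * x).stats())  # any action; the profile is shown before the driver exits

sc.stop()
```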
Profiling is disabled by default; it can be enabled by setting
`spark.python.profile=true`. Users can also dump the results to disk for
later analysis by setting `spark.python.profile.dump=path_to_dump`, as
sketched below.
PS: will update the docs later.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/davies/spark profiler
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/2351.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #2351
----
commit 4b20494ce4e5e287a09fee5df5e0684711258627
Author: Davies Liu <[email protected]>
Date: 2014-09-11T00:51:28Z
add profile for python
----