Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2351#discussion_r18006502
--- Diff: docs/configuration.md ---
@@ -207,6 +207,25 @@ Apart from these, the following properties are also available, and may be useful
</td>
</tr>
<tr>
+ <td><code>spark.python.profile</code></td>
+ <td>false</td>
+ <td>
+ Enable profiling in Python worker, the profile result will show up by `sc.show_profiles()`,
+ or it will be showed up before the driver exiting. It also can be dumped into disk by
+ `sc.dump_profiles(path)`. If some of the profile results had been showed up maually,
+ they will not be showed up automatically before driver exiting.
+ </td>
+</tr>
+<tr>
+ <td><code>spark.python.profile.dump</code></td>
+ <td>(none)</td>
+ <td>
+ The directory which is used to dump the profile result before driver exiting.
+ The results will be dumped as separated file for each RDD. They can be loaded
+ by ptats.Stats(). If this is specified, the profile result will not be showed up
--- End diff ---
Instead of "showed up", how about "displayed"?