Github user vectorijk commented on a diff in the pull request:

    https://github.com/apache/spark/pull/8795#discussion_r44857214
  
    --- Diff: docs/configuration.md ---
    @@ -330,13 +330,13 @@ Apart from these, the following properties are also available, and may be useful
       <td><code>spark.python.profile</code></td>
       <td>false</td>
       <td>
    -    Enable profiling in Python worker, the profile result will show up by `sc.show_profiles()`,
    +    Enable profiling in Python worker, the profile result will show up by <code>sc.show_profiles()</code>,
         or it will be displayed before the driver exiting. It also can be dumped into disk by
    -    `sc.dump_profiles(path)`. If some of the profile results had been displayed manually,
    +    <code>sc.dump_profiles(path)</code>. If some of the profile results had been displayed manually,
         they will not be displayed automatically before driver exiting.
     
    -    By default the `pyspark.profiler.BasicProfiler` will be used, but this can be overridden by
    -    passing a profiler class in as a parameter to the `SparkContext` constructor.
    +    By default the <code>pyspark.profiler.BasicProfiler</code> will be used, but this can be overridden by
    +    passing a profiler class in as a parameter to the <code>SparkContext</code> constructor.
    --- End diff --
    
    ditto
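
    For context on the setting documented in this hunk: `spark.python.profile` drives PySpark's default `BasicProfiler`, which is built on Python's standard `cProfile` module. A minimal standalone sketch of the same idea, with no Spark dependency (the `profile_call` helper name is illustrative, not PySpark API):

    ```python
    import cProfile
    import io
    import pstats


    def profile_call(fn, *args):
        """Run fn(*args) under cProfile and return (result, stats report).

        Roughly the pattern BasicProfiler applies per task before
        sc.show_profiles() aggregates and prints the results.
        """
        profiler = cProfile.Profile()
        profiler.enable()
        result = fn(*args)
        profiler.disable()

        # Render the collected stats to a string, top entries first.
        buf = io.StringIO()
        pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
        return result, buf.getvalue()


    if __name__ == "__main__":
        result, report = profile_call(sum, range(1000))
        print(result)   # 499500
        print(report)   # top-5 entries by cumulative time
    ```

    In actual PySpark the equivalent is setting `spark.python.profile` to `true` (e.g. via `SparkConf`) and then calling `sc.show_profiles()` or `sc.dump_profiles(path)`, exactly as the documentation text in the diff describes.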


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]

Reply via email to