Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/21370#discussion_r192446542
--- Diff: docs/configuration.md ---
@@ -456,6 +456,33 @@ Apart from these, the following properties are also available, and may be useful
from JVM to Python worker for every task.
</td>
</tr>
+<tr>
+ <td><code>spark.sql.repl.eagerEval.enabled</code></td>
+ <td>false</td>
+ <td>
+ Enable eager evaluation or not. If true and the REPL you are using supports eager evaluation,
+ the DataFrame will be evaluated automatically. An HTML table renders the queries the user has defined
+ when <code>_repr_html_</code> is called by notebooks like Jupyter; otherwise, for the plain Python REPL,
+ the output will be shown like <code>dataframe.show()</code>
+ (see <a href="https://issues.apache.org/jira/browse/SPARK-24215">SPARK-24215</a> for more details).
+ </td>
+</tr>
+<tr>
+ <td><code>spark.sql.repl.eagerEval.maxNumRows</code></td>
+ <td>20</td>
+ <td>
+ Default number of rows in the eager evaluation output HTML table generated by <code>_repr_html_</code>
+ or plain text; this only takes effect when <code>spark.sql.repl.eagerEval.enabled</code> set to true.
--- End diff ---
`set to` -> `is set to`
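
Not part of the patch, but for anyone trying this out, here is a minimal PySpark sketch of how these two properties might be set on a session. The app name and the example DataFrame below are just illustrative assumptions, not anything from the diff:

    from pyspark.sql import SparkSession

    # Build a session with eager evaluation enabled and the default row limit.
    spark = (
        SparkSession.builder
        .appName("eager-eval-demo")  # hypothetical app name
        .config("spark.sql.repl.eagerEval.enabled", "true")
        .config("spark.sql.repl.eagerEval.maxNumRows", "20")
        .getOrCreate()
    )

    # A small DataFrame purely for illustration.
    df = spark.range(5)

    # In a notebook that calls _repr_html_ (e.g. Jupyter), evaluating `df` on its own
    # renders an HTML table; in a plain Python REPL the output resembles df.show().
    df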
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]