Github user xuanyuanking commented on a diff in the pull request:
https://github.com/apache/spark/pull/21370#discussion_r194794700
--- Diff: docs/configuration.md ---
@@ -456,6 +456,33 @@ Apart from these, the following properties are also available, and may be useful
from JVM to Python worker for every task.
</td>
</tr>
+<tr>
+ <td><code>spark.sql.repl.eagerEval.enabled</code></td>
+ <td>false</td>
+ <td>
+ Enable eager evaluation or not. If true and the REPL you are using supports eager evaluation,
+ the Dataset will be evaluated automatically. The HTML table generated by <code>_repr_html_</code>,
+ which notebooks such as Jupyter call, shows the result of the queries the user has defined. For a plain
+ Python REPL, the output is shown as it would be by <code>dataframe.show()</code>
+ (see <a href="https://issues.apache.org/jira/browse/SPARK-24215">SPARK-24215</a> for more details).
+ </td>
+</tr>
+<tr>
+ <td><code>spark.sql.repl.eagerEval.maxNumRows</code></td>
+ <td>20</td>
+ <td>
+ The maximum number of rows shown in the eager evaluation output, whether rendered as an HTML table by
+ <code>_repr_html_</code> or as plain text. This only takes effect when
+ <code>spark.sql.repl.eagerEval.enabled</code> is set to true.
--- End diff ---
Got it, thanks.
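
As a side note, here is a minimal sketch of how these two settings could be used from PySpark. The session setup, master, and app name are just illustrative and not part of the patch; only the two `spark.sql.repl.eagerEval.*` keys come from the diff above:

```python
from pyspark.sql import SparkSession

# Minimal sketch (assumes a local PySpark installation); the config keys
# come from the documentation diff above, everything else is illustrative.
spark = (
    SparkSession.builder
    .master("local[*]")                                   # illustrative
    .appName("eager-eval-demo")                           # illustrative
    .config("spark.sql.repl.eagerEval.enabled", "true")   # enable eager evaluation
    .config("spark.sql.repl.eagerEval.maxNumRows", "20")  # rows shown per output
    .getOrCreate()
)

df = spark.range(3)

# In a notebook that supports _repr_html_ (e.g. Jupyter), evaluating `df`
# in a cell now renders an HTML table with up to maxNumRows rows; in a
# plain Python REPL the output looks like df.show().
df
```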