Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/19401#discussion_r142035876
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -237,7 +237,7 @@ class Dataset[T] private[sql](
*/
private[sql] def showString(
_numRows: Int, truncate: Int = 20, vertical: Boolean = false):
String = {
- val numRows = _numRows.max(0)
+ val numRows = _numRows.max(0).min(Int.MaxValue - 1)
--- End diff ---
Spark SQL does not work when the number of rows is close to `Int.MaxValue`:
the driver will run out of memory (OOM) before the command finishes. Thus, I
do not think we can hit this extreme case in practice.
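As a minimal sketch of why the clamp in the diff helps (assuming the caller
later evaluates `numRows + 1`, as `Dataset` does when it takes one extra row
to detect truncation; the `clamp` helper below is hypothetical, not Spark code):

```scala
object ClampExample {
  // Mirrors the clamping expression from the diff above.
  def clamp(_numRows: Int): Int = _numRows.max(0).min(Int.MaxValue - 1)

  def main(args: Array[String]): Unit = {
    // Without the clamp, Int.MaxValue + 1 silently wraps to Int.MinValue,
    // so asking for numRows + 1 rows would request a negative count.
    assert(Int.MaxValue + 1 == Int.MinValue)
    // With the clamp, numRows + 1 remains a valid positive count.
    assert(clamp(Int.MaxValue) + 1 == Int.MaxValue)
    // Negative inputs are still floored at zero, as before the change.
    assert(clamp(-5) == 0)
    println("ok")
  }
}
```

The clamp only guards the arithmetic; as noted above, the driver would OOM
long before materializing anywhere near `Int.MaxValue` rows.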
---