Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/19401#discussion_r142006251
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -238,9 +238,15 @@ class Dataset[T] private[sql](
   private[sql] def showString(
       _numRows: Int, truncate: Int = 20, vertical: Boolean = false): String = {
     val numRows = _numRows.max(0)
-    val takeResult = toDF().take(numRows + 1)
-    val hasMoreData = takeResult.length > numRows
-    val data = takeResult.take(numRows)
+
+    val (data, hasMoreData) = if (numRows < Int.MaxValue) {
+      val takeResult = toDF().take(numRows + 1)
+      (takeResult.take(numRows), takeResult.length > numRows)
+    } else {
+      val takeResult = toDF().take(numRows)
+      val numTotalRows = toDF().count()
--- End diff --
This still calls count(). I think it's just not worth it, for a purely
cosmetic difference, to print "only showing up to 2 billion entries" in the
special case that you've collected, and tried to print, 2 billion values. That
will probably just fail anyway. So just keep this simple.
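
For reference, a minimal sketch of the simpler shape being suggested here,
written as a hypothetical standalone helper (takeForShow is not Spark's API,
and the clamping trick is an assumption of this sketch, not the PR's code).
Clamping numRows below Int.MaxValue means numRows + 1 can never overflow, so
no count()-based special case is needed at all:

    import org.apache.spark.sql.{DataFrame, Row}

    // Hypothetical helper, not part of Spark: clamp numRows so that
    // numRows + 1 cannot wrap around to Int.MinValue, then fetch one
    // extra row to detect whether more data exists.
    def takeForShow(df: DataFrame, _numRows: Int): (Array[Row], Boolean) = {
      val numRows = _numRows.max(0).min(Int.MaxValue - 1)
      val takeResult = df.take(numRows + 1)
      (takeResult.take(numRows), takeResult.length > numRows)
    }
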
---