Github user LantaoJin commented on a diff in the pull request:
https://github.com/apache/spark/pull/20803#discussion_r175975380
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -166,20 +168,28 @@ private[sql] object Dataset {
class Dataset[T] private[sql](
@transient val sparkSession: SparkSession,
@DeveloperApi @InterfaceStability.Unstable @transient val
queryExecution: QueryExecution,
- encoder: Encoder[T])
+ encoder: Encoder[T],
+ val sqlText: String = "")
--- End diff --
Your speculation is almost right. First we call `val df = spark.sql(...)`, then
we pattern-match the SQL text into three types: count, limit, and other. If it
is count, we invoke `df.showString(2, 20)`; if it is limit, we just invoke
`df.limit(1).foreach`; the last type, other, does nothing.
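
A minimal sketch of that dispatch, assuming a naive textual classifier (the
`Kind` names and the count/limit detection heuristics are hypothetical, not
the PR's actual logic; `showString` is `private[sql]` in `Dataset`):

```scala
// Sketch of the three-way dispatch on the SQL text. The classification
// heuristics below are illustrative only.
object SqlTextDispatch {
  sealed trait Kind
  case object Count extends Kind
  case object Limit extends Kind
  case object Other extends Kind

  // Naive textual classification of the SQL statement (hypothetical).
  def classify(sqlText: String): Kind = {
    val t = sqlText.trim.toLowerCase
    if (t.startsWith("select count(")) Count
    else if (t.matches("(?s).*\\blimit\\s+\\d+\\s*$")) Limit
    else Other
  }

  // Dispatch as described above (requires a live SparkSession):
  //   val df = spark.sql(sqlText)
  //   classify(sqlText) match {
  //     case Count => df.showString(2, 20)         // private[sql] helper
  //     case Limit => df.limit(1).foreach(_ => ()) // force one row
  //     case Other => ()                           // do nothing
  //   }
}
```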
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]