Looking at the output of explain, I can see a CollectLimit step. Does that
work the same way as a regular .collect() (where all records are sent to
the driver)?
spark.sql("select * from db.table limit 1000000").explain(false)
== Physical Plan ==
CollectLimit 1000000
+- FileScan parquet ... 806 more fields] Batched: false, Format: Parquet,
Location: CatalogFileIndex[...], PartitionCount: 3, PartitionFilters: [],
PushedFilters: [], ReadSchema:.....
The number of partitions is 1, so that makes sense.
spark.sql("select * from db.table limit 1000000").rdd.partitions.size = 1
As a follow-up, I tried repartitioning the resulting dataframe, and while
the CollectLimit step is gone from the plan, it made no difference to the
job: I still see one big task at the end that ends up failing.
spark.sql("select * from db.table limit
1000000").repartition(1000).explain(false)
Exchange RoundRobinPartitioning(1000)
+- GlobalLimit 1000000
+- Exchange SinglePartition
+- LocalLimit 1000000 -> Is this a collect?
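One workaround sketch I have been considering, to spread the work instead of
shuffling everything into a single partition first: take rows per partition
and rebuild the dataframe from the RDD. This is approximate, since it returns
at most the requested number of rows, and fewer if some partitions are small:

val df = spark.sql("select * from db.table")
val n = 1000000
// split the limit across partitions rather than collapsing to one
val perPart = math.max(1, n / df.rdd.getNumPartitions)
val limited = spark.createDataFrame(
  df.rdd.mapPartitions(_.take(perPart)),
  df.schema
)
limited.rdd.partitions.size // same as df, no Exchange SinglePartition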