Hi,

Currently the TakeOrderedAndProject operator in Spark SQL uses RDD's takeOrdered method. When we pass a large limit to the operator, however, it returns up to partitionNum * limit records to the driver, which may cause an OOM.
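For reference, here is a minimal reproduction sketch of the scenario (the partition count, limit, and data sizes are illustrative, not from an actual workload). An ORDER BY plus LIMIT query plans as TakeOrderedAndProject, which calls RDD.takeOrdered; each partition sends its own top-`limit` rows back to the driver, so driver memory grows roughly with partitionNum * limit:

import org.apache.spark.sql.SparkSession

// Illustrative sketch only: a sort + large limit collected on the driver.
object TakeOrderedAndProjectRepro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[4]")
      .appName("takeOrderedAndProjectRepro")
      .getOrCreate()
    import spark.implicits._

    val numPartitions = 200
    val limit = 1000000  // a "large" limit in the sense described above

    val df = spark.range(0L, 100000000L, 1L, numPartitions).toDF("id")

    // Plans as TakeOrderedAndProject; during the merge step, up to
    // numPartitions * limit rows flow back to the driver.
    val top = df.orderBy($"id".desc).limit(limit).collect()
    println(s"collected ${top.length} rows on the driver")

    spark.stop()
  }
}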
Are there any plans to deal with the problem in the community?

Thanks,
Yang