For Spark SQL internal operations, we can probably just create
MapPartitionsRDD directly, skipping the closure-cleaning step (as in
https://github.com/apache/spark/commit/5287eec5a6948c0c6e0baaebf35f512324c0679a).
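
Roughly like this (a minimal sketch only, assuming Spark 1.x internals;
the object and method names are made up, and since MapPartitionsRDD is
private[spark], this only compiles inside the org.apache.spark package
tree, which Spark SQL's execution code already lives in):

package org.apache.spark.sql.execution

import scala.reflect.ClassTag

import org.apache.spark.rdd.{MapPartitionsRDD, RDD}

object DirectMapPartitions {
  // rdd.mapPartitions(f) runs sc.clean(f) first, which reflectively walks
  // the closure's object graph on the driver. For trusted internal closures
  // we can construct the MapPartitionsRDD ourselves and skip that pass.
  // (Hypothetical helper; not actual Spark API.)
  def mapPartitionsNoClean[T: ClassTag, U: ClassTag](rdd: RDD[T])(
      f: Iterator[T] => Iterator[U]): RDD[U] = {
    new MapPartitionsRDD[U, T](rdd, (context, index, iter) => f(iter))
  }
}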

On Fri, May 29, 2015 at 11:04 AM, Josh Rosen <rosenvi...@gmail.com> wrote:

> Hey, want to file a JIRA for this?  This will make it easier to track
> progress on this issue.  Definitely upload the profiler screenshots there,
> too, since that's helpful information.
>
> https://issues.apache.org/jira/browse/SPARK
>
>
>
> On Wed, May 27, 2015 at 11:12 AM, Nitin Goyal <nitin2go...@gmail.com>
> wrote:
>
>> Hi Ted,
>>
>> Thanks a lot for replying. First of all, moving to 1.4.0 RC2 is not easy
>> for us, as the migration cost is high: a lot has changed in Spark SQL
>> since 1.2.
>>
>> Regarding SPARK-7233, I had already looked at it a few hours back. It
>> solves the problem for concurrent queries, but my problem shows up even
>> with a single query. I also looked at the fix's code diff, and it isn't
>> related to this problem, which seems to be in the ClosureCleaner code.
>>
>> Thanks
>> -Nitin
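
For anyone who wants to see the driver-side overhead Nitin describes in
isolation: RDD transformations are lazy, so a loop of mapPartitions calls
never actually runs a job; the time it takes is dominated by the eager
sc.clean() / ClosureCleaner pass inside each call. A rough standalone
micro-benchmark sketch (names and iteration counts are arbitrary, assuming
a Spark 1.x build on the classpath):

import org.apache.spark.{SparkConf, SparkContext}

object CleanOverheadBench {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setMaster("local[1]").setAppName("clean-bench"))
    val rdd = sc.parallelize(1 to 1000)
    val f = (iter: Iterator[Int]) => iter.map(_ + 1)

    val t0 = System.nanoTime()
    var i = 0
    while (i < 10000) {
      // No job runs here (transformations are lazy), but sc.clean(f) is
      // invoked eagerly on every call, so this loop times ClosureCleaner.
      rdd.mapPartitions(f)
      i += 1
    }
    val elapsedMs = (System.nanoTime() - t0) / 1e6
    println(s"10000 mapPartitions calls took $elapsedMs ms on the driver")
    sc.stop()
  }
}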
