Thanks, will look into this.
Best regards,
Ravion
-- Forwarded message --
From: "Muthu Jayakumar" <bablo...@gmail.com>
Date: Jan 20, 2017 10:56 AM
Subject: Re: Dataframe caching
To: "☼ R Nair (रविशंकर नायर)" <ravishankar.n...@gmail.com>
I guess this may help in your case:
https://spark.apache.org/docs/latest/sql-programming-guide.html#global-temporary-view
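For context, a global temporary view is tied to the Spark application rather than to a single session, so other sessions in the same application can query it. A minimal sketch in Scala (the view name `records` and the JDBC connection details are illustrative, not from this thread):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("GlobalTempViewExample")
  .getOrCreate()

// Load the query result as a DataFrame (a JDBC source is shown as an example).
val df = spark.read
  .format("jdbc")
  .option("url", "jdbc:postgresql://host:5432/db")  // illustrative connection
  .option("dbtable", "(SELECT * FROM big_table) t") // illustrative query
  .load()

// Register it as a global temporary view. It lives in the reserved
// `global_temp` database and is visible across sessions of this application.
df.createGlobalTempView("records")

// Any other session in the same application can now query it:
spark.newSession().sql("SELECT COUNT(*) FROM global_temp.records").show()
```

Note that the view disappears when the Spark application terminates, so it does not persist data across applications.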
Thanks,
Muthu
On Fri, Jan 20, 2017 at 6:27 AM, ☼ R Nair (रविशंकर नायर) <
ravishankar.n...@gmail.com> wrote:
Dear all,
Here is a requirement I am thinking of implementing in Spark core. Please
let me know if this is possible, and kindly provide your thoughts.
A user executes a query to fetch 1 million records from, let's say, a
database. We let the user store this as a dataframe, partitioned across