Can you provide more details? Your use case does not sound like it needs Spark.
In any case, your version is too old: it does not make sense to start
development on 1.2.1 now, and no "project limitation" justifies that.
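For a single table processed row by row, a plain database client with a bounded fetch size can stream rows and emit XML incrementally, with no cluster involved. A minimal sketch of that pattern, using Python's built-in sqlite3 as a stand-in for a Hive JDBC/ODBC connection (the table, column names, and batch size are illustrative assumptions):

```python
import sqlite3
import xml.etree.ElementTree as ET

# sqlite3 stands in for the Hive connection here; against Hive you would
# use a JDBC/ODBC client and set an explicit fetch size the same way.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(1, "a"), (2, "b"), (3, "c")])

def rows_to_xml(cursor, batch_size=2):
    """Stream rows in bounded batches and build XML without ever
    holding the full result set in memory."""
    root = ET.Element("rows")
    cursor.execute("SELECT id, name FROM t")
    while True:
        batch = cursor.fetchmany(batch_size)  # bounded memory per fetch
        if not batch:
            break
        for row_id, name in batch:
            row_el = ET.SubElement(root, "row", id=str(row_id))
            row_el.text = name
    return ET.tostring(root, encoding="unicode")

xml = rows_to_xml(conn.cursor())
print(xml)
```

With a very large table you would also write each `<row>` element out as it is produced (e.g. with `xml.sax.saxutils.XMLGenerator`) rather than keeping the whole tree, but the batching idea is the same.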

> On 08 Feb 2016, at 06:48, Meetu Maltiar <meetu.malt...@gmail.com> wrote:
> 
> Hi,
> 
> I am working on an application that reads a single Hive table, performs some 
> manipulations on each row, and finally constructs an XML document.
> The Hive table will be a large data set, with no chance of fitting it in 
> memory. I intend to use SparkSQL 1.2.1 (due to project limitations).
> Any pointers on handling this large data set would be helpful (fetch 
> size….).
> 
> Thanks in advance.
> 
> Kind Regards,
> Meetu Maltiar
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
> 
