It might take some time to understand the ecosystem. I'm not sure what kind
of environment you have (number of cores, memory, etc.). To start with, you
can either use the JDBC connector or dump your data as CSV, then load it
into Spark and query it. You get the advantage of caching if you have
enough memory, and with enough cores 40,000 records are nothing.
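
For example, here's a rough sketch of both approaches in Scala (assuming
Spark 1.4+ with the DataFrame API; the JDBC URL, table name, credentials,
column name and CSV path are placeholders for your setup, and the CSV
variant assumes the spark-csv package is on the classpath):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object SparkFromSqlServer {
  def main(args: Array[String]): Unit = {
    // local[*] is just for trying it out on one machine
    val conf = new SparkConf().setAppName("MsSqlToSpark").setMaster("local[*]")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)

    // Option 1: read the table directly over JDBC
    // (needs the SQL Server JDBC driver jar on the classpath;
    //  URL, table and credentials below are placeholders)
    val df = sqlContext.read
      .format("jdbc")
      .option("url", "jdbc:sqlserver://localhost:1433;databaseName=mydb")
      .option("dbtable", "dbo.MyTable")
      .option("user", "sa")
      .option("password", "secret")
      .load()

    // Option 2: read a CSV dump instead (spark-csv package on Spark 1.x)
    // val df = sqlContext.read
    //   .format("com.databricks.spark.csv")
    //   .option("header", "true")
    //   .option("inferSchema", "true")
    //   .load("/path/to/dump.csv")

    // Cache if it fits in memory, then query with Spark SQL
    df.cache()
    df.registerTempTable("my_table")
    sqlContext.sql(
      "SELECT some_col, COUNT(*) AS cnt FROM my_table GROUP BY some_col ORDER BY cnt DESC"
    ).show()

    sc.stop()
  }
}

Once the DataFrame is cached, repeated filter/sort/aggregate queries run
against memory instead of hitting SQL Server every time.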

Thanks
Best Regards

On Tue, Jul 14, 2015 at 3:09 PM, vinod kumar <vinodsachin...@gmail.com>
wrote:

> Hi Akhil
>
> Is my choice to switch to Spark a good one? I don't have enough
> information regarding the limitations and working environment of Spark.
> I tried Spark SQL, but it seems to return data slower than MsSQL
> (I tested with data that has 40,000 records).
>
>
>
> On Tue, Jul 14, 2015 at 3:50 AM, Akhil Das <ak...@sigmoidanalytics.com>
> wrote:
>
>> This is where you can get started:
>> https://spark.apache.org/docs/latest/sql-programming-guide.html
>>
>> Thanks
>> Best Regards
>>
>> On Mon, Jul 13, 2015 at 3:54 PM, vinod kumar <vinodsachin...@gmail.com>
>> wrote:
>>
>>>
>>> Hi Everyone,
>>>
>>> I am developing an application that handles bulk data, on the order of
>>> millions of records (this may vary per user's requirements). As of now I
>>> am using MsSqlServer as the back-end and it works fine, but when I
>>> perform some operations on large data I get overflow exceptions. I heard
>>> that Spark is a faster computation engine than SQL (correct me if I am
>>> wrong), so I thought of switching my application to Spark. Is my
>>> decision right?
>>> My user environment is:
>>> #. Windows 8
>>> #. Data in the millions of records.
>>> #. Need to perform filtering and sorting operations with aggregations
>>> frequently (for analytics).
>>>
>>> Thanks in advance,
>>>
>>> Vinod
>>>
>>
>>
>
