On Thu, Mar 27, 2014 at 10:22 AM, andy petrella <andy.petre...@gmail.com> wrote:

> I just mean queries sent at runtime ^^, like for any RDBMS.
> In our project we have such a requirement: a layer to play with the
> data (a custom, low-level service layer of a lambda architecture), and
> something like this is interesting for that.
>
>
OK, that's what I thought! But for these runtime queries, is a macro useful
for you?
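To be concrete about what I mean: as far as I understand the blog post, an
ad-hoc query is just a string handed to the SQLContext while the job is
running, roughly like this (only a sketch against the Spark 1.0 API from the
post; the Person table, names and data are made up):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    case class Person(name: String, age: Int)

    object AdHocQuery {
      def main(args: Array[String]) {
        val sc = new SparkContext(
          new SparkConf().setAppName("adhoc").setMaster("local[*]"))
        val sqlContext = new SQLContext(sc)
        import sqlContext._ // implicit RDD -> SchemaRDD conversion + sql()

        sc.parallelize(Seq(Person("alice", 34), Person("bob", 17)))
          .registerAsTable("people") // Spark 1.0 method name

        // The query text is only known at runtime (e.g. typed by a user),
        // which is exactly the case a compile-time macro cannot check.
        val query = args.headOption.getOrElse(
          "SELECT name FROM people WHERE age >= 18")
        sql(query).collect().foreach(println)
      }
    }

If the string only arrives at runtime like that, a macro can only ever check
the queries written literally in the source, hence my question.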


>
>
> On Thu, Mar 27, 2014 at 10:15 AM, Pascal Voitot Dev <pascal.voitot....@gmail.com> wrote:
>
>>
>> On 27 Mar 2014 09:47, "andy petrella" <andy.petre...@gmail.com> wrote:
>>
>> >
>> > I'm hijacking the thread, but my 2c is that this feature is also
>> > important to enable ad-hoc queries, which are run at runtime. It doesn't
>> > make such a macro less interesting for precompiled jobs, of course, but
>> > ad-hoc querying may not be the first use case envisioned for this Spark SQL.
>> >
>>
>> I'm not sure I see what you mean by "ad-hoc queries"... Any example?
>>
>> > Again, only my 0.2c (OK, I divided by 10 after writing my thoughts ^^)
>> >
>> > Andy
>> >
>> > On Thu, Mar 27, 2014 at 9:16 AM, Pascal Voitot Dev <pascal.voitot....@gmail.com> wrote:
>> >>
>> >> Hi,
>> >> Quite interesting!
>> >>
>> >> Suggestion: why not go even fancier & parse SQL queries at
>> >> compile-time with a macro? ;)
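To make that suggestion concrete: I had something along these lines in mind
(a naive Scala 2.10 sketch; CheckedSql and the SELECT-prefix test are made
up, the prefix test standing in for a real SQL parser, and the macro has to
live in a separately compiled module from its call sites):

    import scala.language.experimental.macros
    import scala.reflect.macros.Context

    object CheckedSql {
      // A SQL literal that is validated while the *caller* is compiled.
      def sql(query: String): String = macro sqlImpl

      def sqlImpl(c: Context)(query: c.Expr[String]): c.Expr[String] = {
        import c.universe._
        query.tree match {
          case Literal(Constant(q: String)) =>
            // Stand-in validation: a real macro would run a full SQL
            // parser (e.g. Spark SQL's own) over q here.
            if (!q.trim.toUpperCase.startsWith("SELECT"))
              c.abort(c.enclosingPosition, s"not a SELECT statement: $q")
            query // looks fine: emit the original string unchanged
          case _ =>
            c.abort(c.enclosingPosition,
              "sql() needs a string literal known at compile time")
        }
      }
    }

    // CheckedSql.sql("SELECT name FROM people") // compiles
    // CheckedSql.sql("DORP TABLE people")       // fails at compile time

Of course a macro like this only buys you anything for queries that are
literals in the source, which is why I'm asking about the runtime case above.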
>> >>
>> >> Pascal
>> >>
>> >>
>> >>
>> >> On Wed, Mar 26, 2014 at 10:58 PM, Michael Armbrust <mich...@databricks.com> wrote:
>> >>>
>> >>> Hey Everyone,
>> >>>
>> >>> This already went out to the dev list, but I wanted to put a pointer
>> >>> here as well to a new feature we are pretty excited about for Spark 1.0.
>> >>>
>> >>>
>> >>> http://databricks.com/blog/2014/03/26/Spark-SQL-manipulating-structured-data-using-Spark.html
>> >>>
>> >>> Michael
>> >>
>> >>
>> >
>>
>
>
