Can you println the .queryExecution of the SchemaRDD?
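For anyone following along in the archives, a minimal sketch of what that looks like in the spark-shell (Spark 1.1-era API, assuming the same `hctx` HiveContext and `people` table used in the queries below):

```scala
// queryExecution on a SchemaRDD prints the parsed, analyzed, optimized,
// and physical plans, which usually shows where an attribute fails to
// resolve.
val result = hctx.sql(
  "SELECT p.name, p.age FROM people p " +
  "LATERAL VIEW explode(locations) l AS location")
println(result.queryExecution)
```

This needs a running Spark shell with the table registered, so it is a sketch rather than a standalone program.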

On Tue, Oct 28, 2014 at 7:43 PM, Corey Nolet <cjno...@gmail.com> wrote:

> So this appears to work just fine:
>
> hctx.sql("SELECT p.name, p.age  FROM people p LATERAL VIEW
> explode(locations) l AS location JOIN location5 lo ON l.number =
> lo.streetNumber WHERE location.number = '2300'").collect()
>
> But as soon as I try to join with another set based on a property from the
> exploded locations set, I get invalid attribute exceptions:
>
> hctx.sql("SELECT p.name, p.age, ln.locationName  FROM people as p LATERAL
> VIEW explode(locations) l AS location JOIN locationNames ln ON
> location.number = ln.streetNumber WHERE location.number = '2300'").collect()
>
>
> On Tue, Oct 28, 2014 at 10:19 PM, Michael Armbrust <mich...@databricks.com
> > wrote:
>
>>
>>
>> On Tue, Oct 28, 2014 at 6:56 PM, Corey Nolet <cjno...@gmail.com> wrote:
>>
>>> Am I able to do a join on an exploded field?
>>>
>>> Like if I have another object:
>>>
>>> { "streetNumber":"2300", "locationName":"The Big Building"} and I want
>>> to join with the previous json by the locations[].number field- is that
>>> possible?
>>>
>>
>> I'm not sure I fully understand the question, but once it's exploded it's a
>> normal tuple and you can do any operations on it.  The explode is just
>> producing a new row for each element in the array.
>>
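A plain-Scala analogy of that point (my own sketch, not the Spark API): explode behaves like flatMap over the array column, emitting one output row per array element, after which the element is an ordinary field.

```scala
// Mimic LATERAL VIEW explode(locations) with ordinary collections:
// each (person, location) pair becomes its own row.
case class Person(name: String, locations: Seq[String])

val people = Seq(Person("corey", Seq("2300", "2400")))

// One output row per array element; loc is now a normal field you can
// filter or join on, just like the exploded column in the SQL above.
val exploded = people.flatMap(p => p.locations.map(loc => (p.name, loc)))
```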
>>>> Awesome, this is what I was looking for. So it's possible to use Hive
>>>> dialect in a regular SQL context? This is what was confusing to me: the
>>>> docs kind of allude to it but don't directly point it out.
>>>>
>>>
>> No, you need a HiveContext, as we use the actual Hive parser (SQLContext
>> only exists as a separate entity so that people who don't want Hive's
>> dependencies in their app can still use a limited subset of Spark SQL).
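For reference, a sketch of that distinction (Spark 1.x API; `sc` is assumed to be an existing SparkContext, e.g. the one the spark-shell provides):

```scala
import org.apache.spark.SparkContext
import org.apache.spark.sql.hive.HiveContext

// HiveContext wraps a SparkContext and uses the real Hive parser, so
// HiveQL features such as LATERAL VIEW ... explode(...) are available.
// The plain SQLContext ships a much smaller parser and exists mainly so
// apps can avoid pulling in Hive's dependencies.
val sc: SparkContext = ???   // an existing SparkContext
val hctx = new HiveContext(sc)

hctx.sql(
  "SELECT p.name FROM people p " +
  "LATERAL VIEW explode(locations) l AS location")
```

This requires the Spark and Hive artifacts on the classpath, so it is illustrative rather than standalone.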
>>
>
>
