Hi James,
At first I thought it was the DataFrame vs. RDD implementations, but looking
closer my bet is on the way Spark connects to Phoenix. When reading via
SQLContext I pass in

"url" -> "jdbc:phoenix:zkHost1, zkHost2, zkHost3:zkPort;TenantId=123456789"
and it connects as the tenant.
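
Roughly, the read path looks like this (a sketch assuming the generic JDBC
data source; "MY_TENANT_VIEW" is just a placeholder for my tenant-specific
view):

    import org.apache.spark.sql.SQLContext

    // sc is an existing SparkContext
    val sqlContext = new SQLContext(sc)

    // "url" carries the full JDBC string, including TenantId, so the
    // connection is opened as the tenant.
    val df = sqlContext.read
      .format("jdbc")
      .option("driver", "org.apache.phoenix.jdbc.PhoenixDriver")
      .option("url", "jdbc:phoenix:zkHost1,zkHost2,zkHost3:zkPort;TenantId=123456789")
      .option("dbtable", "MY_TENANT_VIEW")
      .load()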

However, SQLContext does not have a write/save function.

When I try to save by other means, I am required to pass in a value for
"zkUrl" (not "url"). "zkUrl" cannot have the "jdbc:phoenix:" prefix
attached (it appends zkPort to the end of "jdbc:phoenix" and errors out).
As such I cannot connect as the tenant.
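
The save attempt looks roughly like this ("MY_TENANT_VIEW" again a
placeholder; phoenix-spark requires SaveMode.Overwrite for DataFrame saves):

    import org.apache.spark.sql.SaveMode

    // "zkUrl" only takes the ZooKeeper quorum (no "jdbc:phoenix:" prefix),
    // so there is nowhere to put TenantId on the write side.
    df.write
      .format("org.apache.phoenix.spark")
      .mode(SaveMode.Overwrite)
      .option("table", "MY_TENANT_VIEW")
      .option("zkUrl", "zkHost1,zkHost2,zkHost3:zkPort")
      .save()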

When connecting as the tenant via the SQuirreL client I use the same "url"
string above, and it works.
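
For comparison, a bare JDBC connection with that same string (which is
effectively what SQuirreL does) opens as the tenant without issue:

    import java.sql.DriverManager

    // Opens a tenant-specific Phoenix connection.
    val conn = DriverManager.getConnection(
      "jdbc:phoenix:zkHost1,zkHost2,zkHost3:zkPort;TenantId=123456789")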

So to me it appears to be an issue of how to connect to Phoenix as the
tenant via the Spark/phoenix-spark integration. I have not found a
clear-cut way to do so.


Thanks,
-Nico


On Fri, Oct 7, 2016 at 9:03 AM, James Taylor <[email protected]> wrote:

> Hi Nico,
> You mentioned offline that it seems to be working for data frames, but not
> RDDs. Can you elaborate on that? Have you confirmed whether the TenantId
> connection property is being propagated down to the Phoenix connection
> opened for the Spark integration?
> Thanks,
> James
>
> On Thu, Oct 6, 2016 at 8:36 PM, Nico Pappagianis <
> [email protected]> wrote:
>
> > Does the phoenix-spark integration support multi-tenancy? I'm having a
> > hard time getting it working on my tenant-specific view.
> >
> > Thanks
> >
>
