[ https://issues.apache.org/jira/browse/PHOENIX-3427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15645379#comment-15645379 ]

Josh Mahonin commented on PHOENIX-3427:
---------------------------------------

Thanks for the patch [~nico.pappagianis]!

Overall it looks good to me. One suggested improvement: also read the 'TenantId' 
option (and/or 'tid'?) from the parameters map when creating the DataFrame 
relation here:
https://github.com/apache/phoenix/blob/master/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DefaultSource.scala#L39-L48

That way the syntax could support something like:
{noformat}
df.write
  .format("org.apache.phoenix.spark")
  .mode("overwrite")
  .option("table", "TABLE1")
  .option("TenantId", "theTenant")
  .option("zkUrl", "localhost:2181")
  .save()
{noformat}
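To sketch what that lookup might look like, here is a minimal, self-contained example of reading a 'TenantId' option case-insensitively from the parameters map. The object and method names are illustrative only; the real change would live in {{DefaultSource.createRelation}}:

```scala
// Hypothetical sketch: pull an optional "TenantId" out of the DataFrame
// relation's parameters map, tolerating case variations like "TenantID"
// or "tenantid". Not the actual Phoenix API; names are illustrative.
object TenantIdOption {
  def tenantId(parameters: Map[String, String]): Option[String] =
    parameters.collectFirst {
      case (key, value) if key.equalsIgnoreCase("tenantid") => value
    }

  def main(args: Array[String]): Unit = {
    val params = Map(
      "table"    -> "TABLE1",
      "TenantID" -> "theTenant",       // user-supplied casing
      "zkUrl"    -> "localhost:2181"
    )
    // Prints the tenant if present, "none" otherwise
    println(tenantId(params).getOrElse("none"))
  }
}
```

The relation could then propagate the resolved tenant into the Hadoop configuration used by the save path, so the DataFrame syntax and the RDD conf-based path behave the same.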

> rdd.saveToPhoenix gives table undefined error when attempting to write to a 
> tenant-specific view (TenantId defined in configuration object and passed to 
> saveToPhoenix)
> -----------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: PHOENIX-3427
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-3427
>             Project: Phoenix
>          Issue Type: Bug
>            Reporter: Nico Pappagianis
>
> Although we can read from a tenant-specific view by passing TenantId in the 
> conf object when calling sc.phoenixTableAsRDD, the same does not hold for 
> rdd.saveToPhoenix. Calling saveToPhoenix with a tenant-specific view as the 
> table name gives a "table undefined" error, even when the TenantId is passed 
> in the conf object.
> It appears that the TenantId is lost along the execution path of saveToPhoenix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
