[ https://issues.apache.org/jira/browse/PHOENIX-3427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15627257#comment-15627257 ]

Nico Pappagianis commented on PHOENIX-3427:
-------------------------------------------

Here is the stack trace. I'm getting a TableNotFoundException from PhoenixConnection.getTable:

    public PTable getTable(PTableKey key) throws TableNotFoundException {
        return metaData.getTableRef(key).getTable();
    }

The PhoenixConnection has the tenantId, but the metaData doesn't have a reference for the table. I'm trying to figure out how a table reference gets created in the metaData.
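For context, here is a minimal sketch of the read-vs-write asymmetry being reported, in the phoenix-spark API. The tenant id value, column names, and ZK quorum below are illustrative assumptions, not taken from the report; only the view name TEST_TENANT is from the stack trace.

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.spark.SparkContext
import org.apache.phoenix.spark._  // adds phoenixTableAsRDD / saveToPhoenix

// Illustrative tenant id and columns -- substitute your own.
val conf = new Configuration()
conf.set("TenantId", "tenant1")

val sc = new SparkContext("local", "phoenix-3427-repro")

// Reading the tenant-specific view works: the TenantId in the conf is honored.
val rdd = sc.phoenixTableAsRDD("TEST_TENANT", Seq("ID", "VAL"),
  zkUrl = Some("localhost:2181"), conf = conf)

// Writing does not: by the time ProductRDDFunctions resolves the table,
// the TenantId appears to be lost, so the tenant-specific view is not
// found and TableNotFoundException is thrown (stack trace below).
sc.parallelize(Seq((1L, "a")))
  .saveToPhoenix("TEST_TENANT", Seq("ID", "VAL"),
    conf = conf, zkUrl = Some("localhost:2181"))
```

This requires a running Phoenix cluster with the tenant view already created, so it is a repro sketch rather than a standalone test.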
    
Stack trace:
org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table undefined. tableName=TEST_TENANT
        at org.apache.phoenix.schema.PMetaDataImpl.getTableRef(PMetaDataImpl.java:70)
        at org.apache.phoenix.jdbc.PhoenixConnection.getTable(PhoenixConnection.java:451)
        at org.apache.phoenix.util.PhoenixRuntime.getTable(PhoenixRuntime.java:408)
        at org.apache.phoenix.util.PhoenixRuntime.generateColumnInfo(PhoenixRuntime.java:434)
        at org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil.getUpsertColumnMetadataList(PhoenixConfigurationUtil.java:235)
        at org.apache.phoenix.spark.ProductRDDFunctions$$anonfun$1.apply(ProductRDDFunctions.scala:42)
        at org.apache.phoenix.spark.ProductRDDFunctions$$anonfun$1.apply(ProductRDDFunctions.scala:38)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
        at org.apache.spark.scheduler.Task.run(Task.scala:89)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

> rdd.saveToPhoenix gives table undefined error when attempting to write to a 
> tenant-specific view (TenantId defined in configuration object and passed to 
> saveToPhoenix)
> -----------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: PHOENIX-3427
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-3427
>             Project: Phoenix
>          Issue Type: Bug
>            Reporter: Nico Pappagianis
>
> Although we can read from a tenant-specific view by passing TenantId in the 
> conf object when calling sc.phoenixTableAsRDD, the same does not hold for 
> rdd.saveToPhoenix. Calling saveToPhoenix with a tenant-specific view as the 
> table name gives a table undefined error, even when the TenantId is passed 
> in the conf object.
> It appears that the TenantId is lost somewhere along the execution path of 
> saveToPhoenix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
