[ 
https://issues.apache.org/jira/browse/PHOENIX-2310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14944510#comment-14944510
 ] 

Jan Fernando commented on PHOENIX-2310:
---------------------------------------

Here's a stack trace of when this bug occurs for reference:

java.lang.Exception: java.lang.RuntimeException: org.apache.phoenix.exception.BatchUpdateExecution: ERROR 1106 (XCL06): Exception while executing batch.
    at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529)
Caused by: java.lang.RuntimeException: org.apache.phoenix.exception.BatchUpdateExecution: ERROR 1106 (XCL06): Exception while executing batch.
    at org.apache.phoenix.mapreduce.PhoenixRecordWriter.close(PhoenixRecordWriter.java:62)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.close(PigOutputFormat.java:153)
    at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.close(ReduceTask.java:550)
    at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:629)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.phoenix.exception.BatchUpdateExecution: ERROR 1106 (XCL06): Exception while executing batch.
    at org.apache.phoenix.jdbc.PhoenixStatement.executeBatch(PhoenixStatement.java:1264)
    at org.apache.phoenix.mapreduce.PhoenixRecordWriter.close(PhoenixRecordWriter.java:58)
    ... 10 more
Caused by: org.apache.phoenix.schema.ColumnFamilyNotFoundException: ERROR 1001 (42I01): Undefined column family. familyName=org.null
    at org.apache.phoenix.schema.PTableImpl.getColumnFamily(PTableImpl.java:787)
    at org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.resolveColumn(FromCompiler.java:361)
    at org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:344)
    at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:546)
    at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:534)
    at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:314)
    at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:307)
    at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:305)
    at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:235)
    at org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:173)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeBatch(PhoenixStatement.java:1258)
    ... 11 more

> PhoenixConfigurationUtil.getUpsertColumnMetadataList() in Phoenix Mapreduce 
> integration generates incorrect upsert statement for view immediately after 
> issue view DDL
> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: PHOENIX-2310
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-2310
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.5.2
>            Reporter: Jan Fernando
>            Assignee: Jan Fernando
>         Attachments: PHOENIX-2310-v1.patch
>
>
> We ran into what I believe is a corner case that was causing a M/R job using 
> the Phoenix / Pig integration to fail because an incorrect UPSERT statement 
> was being generated. 
> The issue was intermittent. The bug is that the UPSERT statement generated by 
> PhoenixConfigurationUtil.getUpsertColumnMetadataList(), when invoked from 
> PhoenixRecordWriter, would for certain columns contain the class name + 
> hashcode produced by Java's default Object.toString() instead of the 
> cf.column_name. Since this was not a valid column name, the Pig script 
> would blow up.
> This only occurs if we are attempting to insert data into a Phoenix view and 
> the DDL for the view was issued recently, such that the MetaDataClient cache 
> for this view was populated by MetaDataClient.createTable(). 
> What is occurring is that in this case we wrap the PColumn in a delegate at 
> lines 1898 and 1909. The DelegateColumn class used to wrap PColumn doesn't 
> implement toString(), so the default Object.toString() is used. If you 
> restart the JVM and force Phoenix to re-read the metadata from SYSTEM.CATALOG, 
> this doesn't occur because in that case we don't wrap the PColumn instance.
> I have a test to repro and a possible patch I'll attach shortly.
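The failure mode described above can be illustrated with a minimal, self-contained sketch (the `Column`, `BaseColumn`, and `DelegateColumnSketch` names below are hypothetical stand-ins, not Phoenix's actual PColumn/DelegateColumn classes): a delegate that forwards its methods but does not override toString() silently falls back to Object.toString(), which returns "ClassName@hexHashCode". Any code that builds SQL from toString() then emits that string instead of the column name.

```java
// Minimal sketch of the delegate/toString() pitfall, assuming a simple
// Column interface. Not Phoenix code; names are illustrative only.
interface Column {
    String getName();
}

class BaseColumn implements Column {
    private final String name;
    BaseColumn(String name) { this.name = name; }
    public String getName() { return name; }
    // Overrides toString() so the column renders as its name in SQL.
    @Override public String toString() { return name; }
}

class DelegateColumnSketch implements Column {
    private final Column delegate;
    DelegateColumnSketch(Column delegate) { this.delegate = delegate; }
    public String getName() { return delegate.getName(); }
    // Note: no toString() override. Object.toString() is inherited,
    // yielding something like "DelegateColumnSketch@1b6d3586".
}

public class DelegateToStringDemo {
    public static void main(String[] args) {
        Column plain = new BaseColumn("CF.COL1");
        Column wrapped = new DelegateColumnSketch(plain);
        System.out.println("UPSERT ... (" + plain + ")");   // uses "CF.COL1"
        System.out.println("UPSERT ... (" + wrapped + ")"); // uses class@hash, invalid SQL
    }
}
```

Forwarding getName() is not enough if callers render the object via string concatenation; the fix suggested by the bug description is for the delegate to also forward toString() to the wrapped instance.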



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
