[ 
https://issues.apache.org/jira/browse/PHOENIX-1417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14201673#comment-14201673
 ] 

James Taylor commented on PHOENIX-1417:
---------------------------------------

But there could be a bug causing the stats to be updated using the latest time, 
which would manifest itself here. If that's the case, then we need to figure 
out where the stats are being written with the latest timestamp instead of the 
max timestamp of the table rows traversed. The writing of the stats happens in 
StatisticsWriter.commitStats().
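To illustrate the invariant described above, here is a minimal sketch (hypothetical names, not the actual Phoenix code): the timestamp a stats write carries should be the maximum timestamp among the cells traversed while collecting stats, rather than the current wall-clock time.

```java
import java.util.List;

// Hypothetical sketch of the timestamp rule discussed above; the real logic
// lives in StatisticsWriter.commitStats() and operates on HBase cells.
public class StatsTimestampSketch {

    // Returns the timestamp a stats write should carry: the maximum
    // timestamp among the cells scanned while gathering stats.
    static long statsWriteTimestamp(List<Long> scannedCellTimestamps) {
        long maxTs = Long.MIN_VALUE;
        for (long ts : scannedCellTimestamps) {
            maxTs = Math.max(maxTs, ts);
        }
        return maxTs;
    }

    public static void main(String[] args) {
        // Cells were written at timestamps 100, 150, and 120. A stats write
        // stamped with System.currentTimeMillis() would exceed 150 and bump
        // the table timestamp past what an old-SCN client can see.
        long ts = statsWriteTimestamp(List.of(100L, 150L, 120L));
        System.out.println(ts); // prints 150
    }
}
```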

> Table Timestamp wrongly updated to latest time causing table deletion to fail
> -----------------------------------------------------------------------------
>
>                 Key: PHOENIX-1417
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-1417
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 5.0.0, 4.2, 3.2
>            Reporter: Jeffrey Zhong
>            Priority: Critical
>         Attachments: fix.patch
>
>
> When we run the QueryIT test against a live cluster, it fails with the 
> following exception:
> {noformat}
> org.apache.phoenix.schema.TableAlreadyExistsException: ERROR 1013 (42M04): 
> Table already exists. tableName=ATABLE_IDX
>         at 
> org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:1536)
>         at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:980)
>         at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:95)
>         at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:260)
>         at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:252)
>         at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>         at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:250)
>         at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1037)
>         at 
> org.apache.phoenix.end2end.BaseQueryIT.initTable(BaseQueryIT.java:101)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> {noformat}
> I added some tracing and found that the root cause is that 
> MetaDataEndpointImpl#getTable() has the following code, where we reset a 
> table's timestamp using stats.getTimestamp().
> {code}
>                  statsHTable = ServerUtil.getHTableForCoprocessorScan(env, 
> PhoenixDatabaseMetaData.SYSTEM_STATS_NAME);
>                  stats = StatisticsUtil.readStatistics(statsHTable, 
> physicalTableName.getBytes(), clientTimeStamp);
>                  timeStamp = Math.max(timeStamp, stats.getTimestamp());
> {code}
> Since we always use LATEST_TIMESTAMP as the client timestamp when building 
> the table, as in the following code, the table timestamp gets bumped and a 
> client using an old SCN won't be able to delete a table created with that 
> old SCN.
> {code}
> table = buildTable(key, cacheKey, region, HConstants.LATEST_TIMESTAMP)
> {code}
> In summary, I don't think we should use stats.getTimestamp() to update the 
> table timestamp, because stats are not related to a table's "version" data.
> [~jamestaylor] I think this is a critical issue for people using client 
> timestamps.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
