[ https://issues.apache.org/jira/browse/PHOENIX-3823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16014845#comment-16014845 ]
James Taylor commented on PHOENIX-3823:
---------------------------------------

Thanks for the patch, [~sukuna...@gmail.com]. It's looking good. Here's some feedback:
* In UpsertCompiler, let's remove the while/catch/retry loop, since you now have it at a higher level instead:
{code}
            } catch (MetaDataEntityNotFoundException e) {
                // Catch column/column family not found exception, as our meta data may
                // be out of sync. Update the cache once and retry if we were out of sync.
                // Otherwise throw, as we'll just get the same error next time.
                if (retryOnce) {
                    retryOnce = false;
                    if (new MetaDataClient(connection).updateCache(schemaName, tableName).wasUpdated()) {
                        continue;
                    }
                }
                throw e;
            }
            break;
{code}
* Instead of checking for equality of the upsert and select TableNode as you're doing below (which won't be reliable when dynamic columns are used, and won't detect the case of a joined table matching the upsert table), we should instead pass the table being upserted into (i.e. {{table}}) as an extra argument to FromCompiler.getResolverForQuery(). Then in FromCompiler.createTableRef(), always hit the server when the schema and table name of {{table}} equal the schema and table name being resolved.
{code}
+        boolean alwaysHitServer = false;
+        //Ignore cache when we do Upsert Select on same table
+        if(tableNode!=null && tableNode.equals(select.getFrom())){
+            alwaysHitServer = true;
+        }
+        ColumnResolver selectResolver = FromCompiler.getResolverForQuery(select, connection, alwaysHitServer);
{code}
* The same issue exists in DeleteCompiler (though it's a bit more of an edge case: a subquery in a DELETE referring to the same table that's being deleted from). Based on the above, you'd just need to pass {{table}} to FromCompiler.getResolverForQuery() here:
{code}
        if (transformedSelect != select) {
            resolverToBe = FromCompiler.getResolverForQuery(transformedSelect, connection);
            select = StatementNormalizer.normalize(transformedSelect, resolverToBe);
        }
{code}

(A self-contained sketch of the retry-once and same-table patterns follows the quoted issue description below.)

> Force cache update on MetaDataEntityNotFoundException
> ------------------------------------------------------
>
>                 Key: PHOENIX-3823
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-3823
>             Project: Phoenix
>          Issue Type: Sub-task
>    Affects Versions: 4.10.0
>            Reporter: James Taylor
>            Assignee: Maddineni Sukumar
>             Fix For: 4.11.0
>
>         Attachments: PHOENIX-3823.patch, PHOENIX-3823.v2.patch, PHOENIX-3823.v3.patch, PHOENIX-3823.v4.patch, PHOENIX-3823.v5.patch
>
>
> When UPDATE_CACHE_FREQUENCY is used, clients will cache metadata for a period of time, which may cause the schema being used to become stale. If another client adds a column or a new table or view, other clients won't see it. As a result, the client will get a MetaDataEntityNotFoundException. Instead of bubbling this up, we should retry after forcing a cache update on the tables involved in the query.
> The above works well for references to entities that don't yet exist. However, we cannot detect references to entities that no longer exist until the cache expires. An exception is a dropped physical table, which would be detected immediately; we would, however, allow queries and updates against dropped columns until the cache entry expires (which seems like a reasonable tradeoff, IMHO). In addition, we won't start using newly created indexes on tables until the cache expires.
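For readers outside the Phoenix codebase, here is a minimal, self-contained Java sketch of the two patterns discussed in the feedback above: retrying a statement once after forcing a metadata cache refresh, and deciding by name comparison (rather than TableNode equality) whether to bypass the cache. Everything in it is a hypothetical stand-in: {{StaleMetadataException}}, {{MetadataCache}}, and {{Statement}} are illustrative names, not Phoenix's actual MetaDataEntityNotFoundException, MetaDataClient, or compiler APIs.
{code}
// Illustrative sketch only -- all names are hypothetical stand-ins,
// not Phoenix's real classes.
import java.util.Objects;

public final class StaleMetadataRetryExample {

    /** Stand-in for a "column/table not found" error caused by stale metadata. */
    static class StaleMetadataException extends Exception {}

    /** Stand-in for the client-side metadata cache. */
    interface MetadataCache {
        /** Re-fetch metadata from the server; return true if anything changed. */
        boolean refresh(String schemaName, String tableName);
    }

    /** Stand-in for a compiled statement that may fail on stale metadata. */
    interface Statement {
        void execute() throws StaleMetadataException;
    }

    /**
     * First bullet's pattern: on a metadata miss, refresh the cache once and
     * retry. If the cache was already current (or the retry fails again),
     * rethrow, since we'd just get the same error every time.
     */
    static void executeWithOneRetry(Statement stmt, MetadataCache cache,
                                    String schema, String table)
            throws StaleMetadataException {
        boolean retryOnce = true;
        while (true) {
            try {
                stmt.execute();
                break;
            } catch (StaleMetadataException e) {
                if (retryOnce) {
                    retryOnce = false;
                    if (cache.refresh(schema, table)) {
                        continue; // cache was stale; retry with fresh metadata
                    }
                }
                throw e; // cache was current: the entity really is missing
            }
        }
    }

    /**
     * Second bullet's pattern: decide whether to bypass the cache by comparing
     * resolved schema/table names against the write target, instead of relying
     * on TableNode equality (which dynamic columns and joins defeat).
     */
    static boolean alwaysHitServer(String targetSchema, String targetTable,
                                   String resolvedSchema, String resolvedTable) {
        return Objects.equals(targetSchema, resolvedSchema)
                && Objects.equals(targetTable, resolvedTable);
    }
}
{code}
Comparing resolved names rather than AST nodes is what catches the cases called out in the second bullet: two syntactically different FROM nodes (a join member, or a reference with dynamic columns) can still resolve to the very table being upserted into.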