[ https://issues.apache.org/jira/browse/PHOENIX-3823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15998867#comment-15998867 ]

James Taylor commented on PHOENIX-3823:
---------------------------------------

One thought I had (which I meant to share with you earlier) is that we should 
apply the same retry logic to the create table call in PhoenixStatement. We 
could perhaps put your retry logic in this one place to cover all cases:
{code}
    protected boolean execute(final CompilableStatement stmt) throws SQLException {
{code}
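
As a rough illustration of what that centralized retry could look like, here is a minimal sketch (not the actual PhoenixStatement code). The executeInternal and forceCacheUpdate helpers are hypothetical stand-ins for the existing execute body and for whatever resolves and refreshes the metadata of the referenced tables:
{code}
import java.sql.SQLException;

// Sketch only: not the actual PhoenixStatement code. It shows where a single
// retry on MetaDataEntityNotFoundException could be centralized.
abstract class RetryOnStaleCacheSketch {

    // Stand-ins for the real Phoenix types so the sketch compiles on its own.
    interface CompilableStatement {}
    static class MetaDataEntityNotFoundException extends SQLException {}

    protected boolean execute(final CompilableStatement stmt) throws SQLException {
        try {
            return executeInternal(stmt);
        } catch (MetaDataEntityNotFoundException e) {
            // Force a cache update for the entities the statement references,
            // then retry exactly once; a second failure surfaces to the caller.
            forceCacheUpdate(stmt);
            return executeInternal(stmt);
        }
    }

    // The existing execute body, factored out so it can run twice.
    protected abstract boolean executeInternal(CompilableStatement stmt) throws SQLException;

    // Hypothetical helper: refresh the client-side metadata cache for the
    // tables/views the statement refers to.
    protected abstract void forceCacheUpdate(CompilableStatement stmt) throws SQLException;
}
{code}
The idea is that the catch block refreshes the cache and retries exactly once, so if the entity is genuinely missing the second attempt still bubbles the error up to the caller.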

Sounds like there may be an additional problem here too, in particular with 
view creation. There's some special logic in CreateTableStatement that freezes 
the timestamp at the tableRef's timestamp + 1, which likely needs to be changed here:
{code}
                if (viewTypeToBe != ViewType.MAPPED) {
                    Long scn = connection.getSCN();
                    connectionToBe = (scn != null || tableRef.getTable().isTransactional()) ? connection :
                        // If we have no SCN on our connection and the base table is not
                        // transactional, freeze the SCN at the time the base table was
                        // resolved to prevent any race condition on the error checking
                        // we do for the base table. The only potential issue is if the
                        // base table lives on a different region server than the new
                        // table will; then we're relying here on the system clocks
                        // being in sync.
                        new PhoenixConnection(
                            // When the new table is created, we still want to cache it
                            // on our connection.
                            new DelegateConnectionQueryServices(connection.getQueryServices()) {
                                @Override
                                public void addTable(PTable table, long resolvedTime) throws SQLException {
                                    connection.addTable(table, resolvedTime);
                                }
                            },
                            connection, tableRef.getTimeStamp()+1);
{code}
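
If the retry path goes through this code, the frozen SCN would presumably need to be recomputed from the freshly resolved base table rather than the stale tableRef that triggered the MetaDataEntityNotFoundException. A hedged sketch with stand-in types (resolveBaseTable is a hypothetical helper, not Phoenix API):
{code}
import java.sql.SQLException;

// Sketch only, with stand-in types; the real logic uses Phoenix's TableRef
// and PhoenixConnection directly.
abstract class RefreezeScnSketch {

    interface TableRef { long getTimeStamp(); }

    // Hypothetical helper: re-resolve the base table after the forced cache update.
    protected abstract TableRef resolveBaseTable() throws SQLException;

    long frozenScnForRetry() throws SQLException {
        TableRef fresh = resolveBaseTable();
        // Same tableRef+1 rule as in the snippet above, but computed from the
        // freshly resolved base table rather than the stale one that caused
        // the MetaDataEntityNotFoundException.
        return fresh.getTimeStamp() + 1;
    }
}
{code}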


> Force cache update on MetaDataEntityNotFoundException 
> ------------------------------------------------------
>
>                 Key: PHOENIX-3823
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-3823
>             Project: Phoenix
>          Issue Type: Sub-task
>            Reporter: James Taylor
>            Assignee: Maddineni Sukumar
>
> When UPDATE_CACHE_FREQUENCY is used, clients will cache metadata for a period 
> of time which may cause the schema being used to become stale. If another 
> client adds a column or a new table or view, other clients won't see it. As a 
> result, the client will get a MetaDataEntityNotFoundException. Instead of 
> bubbling this up, we should retry after forcing a cache update on the tables 
> involved in the query.
> The above works well for references to entities that don't yet exist. 
> However, we cannot detect references to entities that no longer exist until 
> the cache expires. An exception is a dropped physical table, which would be 
> detected immediately; however, we would allow queries and updates to columns 
> which have been dropped until the cache entry expires (which seems like a 
> reasonable tradeoff IMHO). In addition, we won't start using indexes on 
> tables until the cache expires.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
