[
https://issues.apache.org/jira/browse/PHOENIX-2885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15373626#comment-15373626
]
Lars Hofhansl commented on PHOENIX-2885:
----------------------------------------
A {{SELECT Y FROM T}} failed for me when {{Y}} was dropped by another client
(see PHOENIX-3064). It's quite possible that there are many scenarios where
the query would not fail (such as {{SELECT * FROM T}}, where {{Y}} would
simply no longer be queried).
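For concreteness, here's roughly how I hit it, as a minimal sketch (it assumes a pre-existing table {{T}} with a column {{Y}} and a local quorum; the URL and names are just placeholders, not a definitive test case):
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;

public class StaleCacheRepro {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:phoenix:localhost";
        try (Connection client1 = DriverManager.getConnection(url);
             Connection client2 = DriverManager.getConnection(url)) {
            // Client 1 resolves T once, warming its client-side metadata cache.
            try (ResultSet rs = client1.createStatement().executeQuery("SELECT Y FROM T")) {
                while (rs.next()) { }
            }
            // Client 2 drops the column; client 1's cached schema is now stale.
            client2.createStatement().execute("ALTER TABLE T DROP COLUMN Y");
            // With a non-zero UPDATE_CACHE_FREQUENCY, client 1 may keep using
            // its stale schema and fail here (see PHOENIX-3064).
            try (ResultSet rs = client1.createStatement().executeQuery("SELECT Y FROM T")) {
                while (rs.next()) { }
            }
        }
    }
}
{code}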
Thinking on this more... I think we can just default {{UPDATE_CACHE_FREQUENCY}}
to a reasonably small value (something between 10000 and 60000 ms). As long as
we document this, it's fine, and it would be enough to avoid hammering the
SYSTEM.CATALOG region. That might be a fine change to make in 4.9 with a
release note.
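For reference, the per-table knob we have today looks like this ({{T}} and the 30000 ms value are just placeholders picked from the range above):
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class SetCacheFrequency {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
            // Until a global default exists, this has to be done table by table.
            conn.createStatement().execute(
                "ALTER TABLE T SET UPDATE_CACHE_FREQUENCY = 30000");
        }
    }
}
{code}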
Looks like there's no global default (it's hard-coded to 0). Maybe a first step
is to add one, so that the value can be overridden without changing every table.
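Something like the following is what I have in mind (purely hypothetical: the property key {{phoenix.default.update.cache.frequency}} doesn't exist yet and is only a placeholder for whatever name we'd pick):
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Properties;

public class GlobalDefaultSketch {
    public static void main(String[] args) throws SQLException {
        Properties props = new Properties();
        // Hypothetical config key: a global default like this would apply to
        // every table that doesn't set UPDATE_CACHE_FREQUENCY explicitly.
        props.setProperty("phoenix.default.update.cache.frequency", "30000");
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:localhost", props)) {
            // Queries on this connection would then consult SYSTEM.CATALOG at
            // most once per 30 s per table, instead of on every statement.
        }
    }
}
{code}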
> Refresh client side cache before throwing not found exception
> -------------------------------------------------------------
>
> Key: PHOENIX-2885
> URL: https://issues.apache.org/jira/browse/PHOENIX-2885
> Project: Phoenix
> Issue Type: Bug
> Reporter: James Taylor
> Fix For: 4.9.0
>
>
> With the increased usage of the UPDATE_CACHE_FREQUENCY property to reduce
> RPCs, we increase the chance that a separate client attempts to access a
> column that doesn't exist on the cached entity. Instead of throwing in this
> case, we can refresh the client-side cache and retry. This works well for
> references to entities (columns, tables) that don't yet exist in the cache.
> For entities that *do* exist in the cache, we won't detect that they've been
> deleted.