[ https://issues.apache.org/jira/browse/PHOENIX-2940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15301540#comment-15301540 ]

James Taylor commented on PHOENIX-2940:
---------------------------------------

To expand on what Josh said, a simple solution is possible since stats are 
generated asynchronously at an infrequent interval:
- do not cache stats on the server side at all, by removing this block of code 
from MetaDataEndPointImpl.getTable():
{code}
        if (tenantId == null) {
            HTableInterface statsHTable = null;
            try {
                statsHTable = ServerUtil.getHTableForCoprocessorScan(env,
                        SchemaUtil.getPhysicalTableName(PhoenixDatabaseMetaData.SYSTEM_STATS_NAME_BYTES,
                                env.getConfiguration()).getName());
                stats = StatisticsUtil.readStatistics(statsHTable,
                        physicalTableName.getBytes(), clientTimeStamp);
                timeStamp = Math.max(timeStamp, stats.getTimestamp());
            } catch (org.apache.hadoop.hbase.TableNotFoundException e) {
                logger.warn(SchemaUtil.getPhysicalTableName(PhoenixDatabaseMetaData.SYSTEM_STATS_NAME_BYTES,
                        env.getConfiguration()) + " not online yet?");
            } finally {
                if (statsHTable != null) statsHTable.close();
            }
        }
{code}
- Introduce a scheduled timer in ConnectionQueryServicesImpl that queries the 
SYSTEM.STATS table through the StatisticsUtil.readStatistics() call at the 
already existing {{QueryServices.STATS_UPDATE_FREQ_MS_ATTRIB}} config param 
frequency, and add the results to a new LRU cache using 
{{com.google.common.cache.Cache}} keyed by PTable.getKey().
- In BaseResultIterators.getGuidePosts(), get the guideposts from the new cache 
instead of from the PTable. We can have the stats fault in when not found.
- Eventually (or maybe even now?), we can remove the stats field from the 
PTable protobuf.
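To make the shape of the proposal concrete, here is a minimal, hypothetical sketch of the client-side cache plus scheduled refresh. It is not Phoenix code: a stdlib access-ordered LinkedHashMap stands in for {{com.google.common.cache.Cache}}, a String key stands in for PTable.getKey(), and readStatsFromSystemStats() is a made-up placeholder for the SYSTEM.STATS scan done by StatisticsUtil.readStatistics().

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of the proposed client-side stats cache. Phoenix types
// are replaced by simple stand-ins so the example is self-contained.
public class StatsCacheSketch {
    // Placeholder for the guideposts that StatisticsUtil.readStatistics() returns.
    record GuidePosts(long timestamp) {}

    private final int maxEntries;
    // Access-ordered LinkedHashMap acting as a small LRU cache, keyed by the
    // table key (stand-in for PTable.getKey()).
    private final Map<String, GuidePosts> cache;
    private final ScheduledExecutorService refresher =
            Executors.newSingleThreadScheduledExecutor();

    StatsCacheSketch(int maxEntries, long statsUpdateFreqMs) {
        this.maxEntries = maxEntries;
        this.cache = new LinkedHashMap<>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, GuidePosts> e) {
                return size() > StatsCacheSketch.this.maxEntries; // LRU eviction
            }
        };
        // Periodic refresh at the STATS_UPDATE_FREQ_MS_ATTRIB frequency.
        refresher.scheduleAtFixedRate(this::refreshAll,
                statsUpdateFreqMs, statsUpdateFreqMs, TimeUnit.MILLISECONDS);
    }

    // Placeholder for a scan of SYSTEM.STATS via StatisticsUtil.readStatistics().
    private GuidePosts readStatsFromSystemStats(String tableKey) {
        return new GuidePosts(System.currentTimeMillis());
    }

    // Re-read stats for every cached table on the timer thread.
    private synchronized void refreshAll() {
        for (Map.Entry<String, GuidePosts> e : cache.entrySet()) {
            e.setValue(readStatsFromSystemStats(e.getKey()));
        }
    }

    // What BaseResultIterators.getGuidePosts() would call: fault in on a miss.
    synchronized GuidePosts getGuidePosts(String tableKey) {
        return cache.computeIfAbsent(tableKey, this::readStatsFromSystemStats);
    }

    void shutdown() {
        refresher.shutdownNow();
    }

    public static void main(String[] args) {
        StatsCacheSketch stats = new StatsCacheSketch(100, 60_000L);
        GuidePosts gp = stats.getGuidePosts("MY_SCHEMA.MY_TABLE");
        System.out.println(gp != null);
        stats.shutdown();
    }
}
```

The key property this sketch illustrates: reads of SYSTEM.STATS happen only on a cache miss or on the background timer, never under the metadata rowlock.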


> Remove STATS RPCs from rowlock
> ------------------------------
>
>                 Key: PHOENIX-2940
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-2940
>             Project: Phoenix
>          Issue Type: Improvement
>         Environment: HDP 2.3 + Apache Phoenix 4.6.0
>            Reporter: Nick Dimiduk
>            Assignee: Josh Elser
>
> We have an unfortunate situation wherein we potentially execute many RPCs 
> while holding a row lock. This problem is discussed in detail on the user 
> list thread ["Write path blocked by MetaDataEndpoint acquiring region 
> lock"|http://search-hadoop.com/m/9UY0h2qRaBt6Tnaz1&subj=Write+path+blocked+by+MetaDataEndpoint+acquiring+region+lock].
>  During some situations, the 
> [MetaDataEndpoint|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L492]
>  coprocessor will attempt to refresh its view of the schema definitions and 
> statistics. This involves [taking a 
> rowlock|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2862],
>  executing a scan against the [local 
> region|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L542],
>  and then a scan against a [potentially 
> remote|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L964]
>  statistics table.
> This issue is apparently exacerbated by the use of user-provided timestamps 
> (in my case, the use of the ROW_TIMESTAMP feature, or perhaps as in 
> PHOENIX-2607). When combined with other issues (PHOENIX-2939), we end up with 
> total gridlock in our handler threads -- everyone queued behind the rowlock, 
> scanning and rescanning SYSTEM.STATS. Because this happens in the 
> MetaDataEndpoint, the means by which all clients refresh their knowledge of 
> schema, gridlock in that RS can effectively stop all forward progress on the 
> cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
