[
https://issues.apache.org/jira/browse/PHOENIX-2940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330736#comment-15330736
]
James Taylor commented on PHOENIX-2940:
---------------------------------------
Patch looks good, [~elserj].
I'm not sure I understand this new logic, as it seems like we don't need the
{{if (PTableStats.EMPTY_STATS == stats)}} block. Before, we were relying on the
PTable metadata cache to find stats, but now we're relying on this new cache
plus querying directly if not found, so clearing the table from the cache won't
have any impact on stats.
{code}
+            PTableStats stats = connection.getQueryServices().getTableStats(Bytes.toBytes(physicalName), getCurrentScn());
+            // Reference check -- we might not have gotten any stats. This is what will happen if we fail to acquire stats
+            if (PTableStats.EMPTY_STATS == stats) {
+                // Possible when the table timestamp == current timestamp - 1.
+                // This would be most likely during the initial index build of a view index
+                // where we're doing an upsert select from the tenant specific table.
+                // TODO: would we want to always load the physical table in updateCache in
+                // this case too, as we might not update the view with all of it's indexes?
+                byte[] physicalSchemaName = Bytes.toBytes(SchemaUtil.getSchemaNameFromFullName(physicalName));
+                byte[] physicalTableName = Bytes.toBytes(SchemaUtil.getTableNameFromFullName(physicalName));
+                connection.getQueryServices().clearTableFromCache(null, physicalSchemaName, physicalTableName, getCurrentScn());
+                stats = connection.getQueryServices().getTableStats(Bytes.toBytes(physicalName), getCurrentScn());
+            }
+            return stats;
+        }
+        return connection.getQueryServices().getTableStats(table.getName().getBytes(), getCurrentScn());
{code}
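To make the cache-plus-direct-query pattern under discussion concrete, here is a minimal standalone sketch. {{StatsCache}} and the string-valued stats are stand-ins, not Phoenix APIs; the point is the shape: consult a dedicated stats cache, fall back to a direct query on miss, and use an interned sentinel so an identity check like {{PTableStats.EMPTY_STATS == stats}} is meaningful.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Illustrative sketch only; class and method names are hypothetical,
// loosely mirroring the patch's getTableStats/clearTableFromCache calls.
final class StatsCache {
    // Shared sentinel, analogous to PTableStats.EMPTY_STATS: callers can
    // detect "no stats found" with a reference (==) check.
    static final String EMPTY_STATS = new String("EMPTY");

    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Function<String, String> directQuery; // stand-in for the SYSTEM.STATS scan

    StatsCache(Function<String, String> directQuery) {
        this.directQuery = directQuery;
    }

    String getTableStats(String physicalName) {
        // Cache hit returns immediately; on miss, query directly and cache
        // the result (including the EMPTY_STATS sentinel on failure).
        return cache.computeIfAbsent(physicalName, name -> {
            String stats = directQuery.apply(name);
            return stats != null ? stats : EMPTY_STATS;
        });
    }

    void clearTableFromCache(String physicalName) {
        // Evicting only forces the next read to re-query; it does not
        // change what getTableStats ultimately returns.
        cache.remove(physicalName);
    }
}
```

Note how eviction here only forces a re-fetch, which is why clearing the table no longer changes what stats are ultimately returned.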
One nit - remove this comment since this is now the way stats are always read
(whereas before they came over attached to the PTable):
{code}
+ /*
+ * The shared view index case is tricky, because we don't have
+ * table metadata for it, only an HBase table. We do have stats,
+ * though, so we'll query them directly here and cache them so
+ * we don't keep querying for them.
+ */
{code}
Second nit - looks like the only call to TableStatsCache.put() is for testing
through ConnectionlessQueryServicesImpl, which makes sense. Would be good to
add a note in the javadoc (wow, javadoc - thanks!) that it's exposed for
testing.
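For illustration, the suggested javadoc note might look like the sketch below. {{TableStatsCache}} here is a simplified stand-in with hypothetical signatures, not the exact class from the patch:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified stand-in for the patch's TableStatsCache; only the javadoc
// wording on put() is the point of this sketch.
final class TableStatsCache {
    private final Map<String, Object> cache = new ConcurrentHashMap<>();

    /**
     * Inserts stats for the given table directly into the cache.
     * <p>
     * Exposed for testing only (e.g. via ConnectionlessQueryServicesImpl);
     * production code should let the cache populate itself on read.
     */
    void put(String tableName, Object stats) {
        cache.put(tableName, stats);
    }

    Object get(String tableName) {
        return cache.get(tableName);
    }
}
```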
Otherwise, looks great! Interested in the perf numbers and would be great if
[~ndimiduk] could take it for a spin too.
> Remove STATS RPCs from rowlock
> ------------------------------
>
> Key: PHOENIX-2940
> URL: https://issues.apache.org/jira/browse/PHOENIX-2940
> Project: Phoenix
> Issue Type: Improvement
> Environment: HDP 2.3 + Apache Phoenix 4.6.0
> Reporter: Nick Dimiduk
> Assignee: Josh Elser
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2940.001.patch, PHOENIX-2940.002.patch,
> PHOENIX-2940.003.patch
>
>
> We have an unfortunate situation wherein we potentially execute many RPCs
> while holding a row lock. This problem is discussed in detail on the user
> list thread ["Write path blocked by MetaDataEndpoint acquiring region
> lock"|http://search-hadoop.com/m/9UY0h2qRaBt6Tnaz1&subj=Write+path+blocked+by+MetaDataEndpoint+acquiring+region+lock].
> During some situations, the
> [MetaDataEndpoint|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L492]
> coprocessor will attempt to refresh its view of the schema definitions and
> statistics. This involves [taking a
> rowlock|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2862],
> executing a scan against the [local
> region|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L542],
> and then a scan against a [potentially
> remote|https://github.com/apache/phoenix/blob/10909ae502095bac775d98e6d92288c5cad9b9a6/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L964]
> statistics table.
> This issue is apparently exacerbated by the use of user-provided timestamps
> (in my case, the use of the ROW_TIMESTAMP feature, or perhaps as in
> PHOENIX-2607). When combined with other issues (PHOENIX-2939), we end up with
> total gridlock in our handler threads -- everyone queued behind the rowlock,
> scanning and rescanning SYSTEM.STATS. Because this happens in the
> MetaDataEndpoint, the means by which all clients refresh their knowledge of
> schema, gridlock in that RS can effectively stop all forward progress on the
> cluster.
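The hazard described above can be sketched in miniature. This is not HBase or Phoenix code; {{rowLock}} and the {{Supplier}} stand in for the region row lock and the remote SYSTEM.STATS scan, and the contrast is simply RPC-inside-lock versus RPC-before-lock:

```java
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Supplier;

// Illustrative only: why a remote call under a row lock serializes all
// handlers, and the usual fix of fetching before acquiring the lock.
final class LockOrdering {
    private final ReentrantLock rowLock = new ReentrantLock();
    private volatile String cachedStats;

    // Problematic shape: every caller blocks on rowLock for the full
    // duration of the remote scan.
    String refreshUnderLock(Supplier<String> fetchStatsOverRpc) {
        rowLock.lock();
        try {
            cachedStats = fetchStatsOverRpc.get(); // RPC while holding the lock
            return cachedStats;
        } finally {
            rowLock.unlock();
        }
    }

    // Preferred shape: do the remote read lock-free, then hold the lock
    // only long enough to publish the result.
    String refreshOutsideLock(Supplier<String> fetchStatsOverRpc) {
        String stats = fetchStatsOverRpc.get(); // RPC happens before locking
        rowLock.lock();
        try {
            cachedStats = stats; // short critical section
            return cachedStats;
        } finally {
            rowLock.unlock();
        }
    }
}
```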
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)