[
https://issues.apache.org/jira/browse/HIVE-18885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16394393#comment-16394393
]
Vihang Karajgaonkar commented on HIVE-18885:
--------------------------------------------
The issue happens due to a deadlock between two concurrent transactions that
both need the same blocking DB locks and the Java object lock
{{NOTIFICATION_TBL_LOCK}} in DbNotificationListener.java. The issue is more
likely to happen in systems where DbNotificationListener is configured as a
transactional listener, because the DB locks are then not released until the
top-level transaction completes. Here is an example:
1. Two transactions call alter_partitions, each on a list of partitions that
does not overlap with the other's (a typical scenario when StatsTask runs from
multiple concurrent queries).
2. Both transactions execute the following loop in alter_partitions:
{code}
for (Partition tmpPart : new_parts) {
  Partition oldTmpPart = null;
  if (olditr.hasNext()) {
    oldTmpPart = olditr.next();
  } else {
    throw new InvalidOperationException("failed to alterpartitions");
  }
  if (table == null) {
    table = getMS().getTable(db_name, tbl_name);
  }
  if (!listeners.isEmpty()) {
    // notifyEvent ends up in DbNotificationListener, which synchronizes on
    // NOTIFICATION_TBL_LOCK once per partition
    MetaStoreListenerNotifier.notifyEvent(listeners,
        EventType.ALTER_PARTITION,
        new AlterPartitionEvent(oldTmpPart, tmpPart, table, true, this));
  }
}
{code}
3. Transaction 1 acquires the DB lock on the NOTIFICATION_SEQUENCE table in the
notifyEvent method and then releases the Java lock on the
{{NOTIFICATION_TBL_LOCK}} object. The fact that NOTIFICATION_SEQUENCE is a
single-row table makes matters worse, because every event allocation contends
on that one row.
4. Transaction 2's Thrift thread is scheduled and tries to do the same thing.
But now it blocks on the DB lock *while holding the lock on
{{NOTIFICATION_TBL_LOCK}}*.
5. Transaction 1's Thrift thread is scheduled for its next partition and blocks
on {{NOTIFICATION_TBL_LOCK}}, which is held by Transaction 2 above.
Eventually the DB times out one of the transactions with a lock time-out error
and rolls it back. But in a highly concurrent workload this keeps repeating,
and HMS slows down so much that it becomes practically unusable. A minimal
standalone sketch of this interleaving follows.
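The sketch below is not Hive code: the row lock on the single NOTIFICATION_SEQUENCE
row is modeled with a {{ReentrantLock}} (a real DB would keep holding it until the
enclosing transaction ends), and the two latches only force the interleaving
described in steps 3-5.
{code:java}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.ReentrantLock;

public class NotificationDeadlockSketch {

  // Stands in for the Java monitor NOTIFICATION_TBL_LOCK in DbNotificationListener.
  private static final Object NOTIFICATION_TBL_LOCK = new Object();

  // Stands in for the row lock on the single-row NOTIFICATION_SEQUENCE table.
  // A real DB holds it until the enclosing transaction commits, which is why
  // txn-1 below never releases it.
  private static final ReentrantLock sequenceRowLock = new ReentrantLock();

  public static void main(String[] args) throws InterruptedException {
    CountDownLatch txn1HoldsRowLock = new CountDownLatch(1);
    CountDownLatch txn2InsideMonitor = new CountDownLatch(1);

    Thread txn1 = new Thread(() -> {
      // Step 3: first partition - take the row lock inside the synchronized
      // block, then leave the block. The row lock stays held because the
      // transactional listener shares the still-open metastore transaction.
      synchronized (NOTIFICATION_TBL_LOCK) {
        sequenceRowLock.lock();
      }
      txn1HoldsRowLock.countDown();
      await(txn2InsideMonitor);
      // Step 5: next partition - needs the Java lock again, but txn-2 now
      // holds it, so this thread blocks here forever.
      synchronized (NOTIFICATION_TBL_LOCK) {
        sequenceRowLock.lock();
      }
    }, "txn-1");

    Thread txn2 = new Thread(() -> {
      await(txn1HoldsRowLock);
      // Step 4: enter the synchronized block, then block on the DB row lock
      // *while holding NOTIFICATION_TBL_LOCK*.
      synchronized (NOTIFICATION_TBL_LOCK) {
        txn2InsideMonitor.countDown();
        sequenceRowLock.lock();
      }
    }, "txn-2");

    txn1.start();
    txn2.start();
    txn1.join(2000);
    txn2.join(2000);
    // txn-1 is BLOCKED on the monitor, txn-2 is WAITING on the row lock;
    // the JVM stays alive because both threads are stuck.
    System.out.println("txn-1: " + txn1.getState());
    System.out.println("txn-2: " + txn2.getState());
  }

  private static void await(CountDownLatch latch) {
    try {
      latch.await();
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }
}
{code}
Running it leaves txn-1 BLOCKED on the monitor and txn-2 WAITING on the row lock,
which is exactly the stalemate the DB eventually breaks by timing out one of the
transactions.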
> Cascaded alter table + notifications = disaster
> -----------------------------------------------
>
> Key: HIVE-18885
> URL: https://issues.apache.org/jira/browse/HIVE-18885
> Project: Hive
> Issue Type: Bug
> Components: Hive, Metastore
> Affects Versions: 2.3.2
> Reporter: Alexander Kolbasov
> Assignee: Vihang Karajgaonkar
> Priority: Major
>
> You can see the problem from looking at the code, but it actually created
> severe problems for a real-life Hive user.
> When {{alter table}} has {{cascade}} option it does the following:
> {code:java}
> msdb.openTransaction();
> ...
> List<Partition> parts = msdb.getPartitions(dbname, name, -1);
> for (Partition part : parts) {
>   List<FieldSchema> oldCols = part.getSd().getCols();
>   part.getSd().setCols(newt.getSd().getCols());
>   String oldPartName = Warehouse.makePartName(oldt.getPartitionKeys(), part.getValues());
>   updatePartColumnStatsForAlterColumns(msdb, part, oldPartName, part.getValues(), oldCols, part);
>   msdb.alterPartition(dbname, name, part.getValues(), part);
> }
> {code}
> So it walks all partitions (and this may be a huge list) and does non-trivial
> work for each one inside a single uber-transaction.
> When DbNotificationListener is enabled, it adds an event for each partition,
> all while holding a row lock on the NOTIFICATION_SEQUENCE table. As a result,
> no other write DDL can proceed while this is happening. This can cause DB lock
> timeouts, which trigger HMS-level operation retries that make things even worse.
> In one particular case this pretty much made HMS unusable.
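To see why the per-partition events serialize all other write DDL: each event needs
a new id from the single-row NOTIFICATION_SEQUENCE table, and allocating it takes a
row lock that, inside the cascade's uber-transaction, is not released until the
whole alter commits. The JDBC sketch below is not Hive's actual code; the helper
method, connection handling, and exact SQL are invented to illustrate the general
select-for-update pattern the description above implies.
{code:java}
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

class NotificationSequenceSketch {

  /**
   * Allocates the next notification event id. The SELECT ... FOR UPDATE takes
   * a row lock on the single NOTIFICATION_SEQUENCE row; because conn belongs
   * to the caller's still-open transaction, the lock is released only when
   * that transaction commits or rolls back.
   */
  static long allocateEventId(Connection conn) throws SQLException {
    long next;
    try (PreparedStatement select = conn.prepareStatement(
             "SELECT \"NEXT_EVENT_ID\" FROM \"NOTIFICATION_SEQUENCE\" FOR UPDATE");
         ResultSet rs = select.executeQuery()) {
      if (!rs.next()) {
        throw new SQLException("NOTIFICATION_SEQUENCE row is missing");
      }
      next = rs.getLong(1);
    }
    try (PreparedStatement update = conn.prepareStatement(
             "UPDATE \"NOTIFICATION_SEQUENCE\" SET \"NEXT_EVENT_ID\" = ?")) {
      update.setLong(1, next + 1);
      update.executeUpdate();
    }
    return next;
  }
}
{code}
Because the connection belongs to the still-open alter-table transaction, every
other operation that wants to log a notification event queues up behind this row
lock for the duration of the cascade.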
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)