[ https://issues.apache.org/jira/browse/HIVE-24711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17276829#comment-17276829 ]
LinZhongwei commented on HIVE-24711:
------------------------------------
This is the source code. Is a call to FileSystem.closeAllForUGI(ugi) missing?
final UserGroupInformation ugi;
try {
  ugi = UserGroupInformation.getCurrentUser();
} catch (IOException e) {
  throw new RuntimeException(e);
}
partFutures.add(threadPool.submit(new Callable<Partition>() {
  @Override
  public Partition call() throws Exception {
    ugi.doAs(new PrivilegedExceptionAction<Object>() {
      @Override
      public Object run() throws Exception {
        try {
          boolean madeDir = createLocationForAddedPartition(table, part);
          if (addedPartitions.put(new PartValEqWrapper(part), madeDir) != null) {
            // Technically, for ifNotExists case, we could insert one and discard the other
            // because the first one now "exists", but it seems better to report the problem
            // upstream as such a command doesn't make sense.
            throw new MetaException("Duplicate partitions in the list: " + part);
          }
          initializeAddedPartition(table, part, madeDir);
        } catch (MetaException e) {
          throw new IOException(e.getMessage(), e);
        }
        return null;
      }
    });
    return part;
  }
}));
}
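
If that is indeed the leak, the kind of fix the question implies would release the per-UGI FileSystem cache entries once each submitted task finishes, e.g. in a finally block around the doAs. A minimal sketch follows; the placement and the logging are assumptions for illustration, not the committed HIVE-24711 patch:

```java
public Partition call() throws Exception {
  try {
    // ... ugi.doAs(...) exactly as in the snippet above ...
    return part;
  } finally {
    // FileSystem.get() caches one FileSystem instance per (scheme, authority,
    // UGI) key. If nothing ever calls closeAllForUGI, every distinct UGI that
    // passes through here leaves its FileSystem objects pinned in the static
    // cache, which would match the slow heap growth described in this issue.
    try {
      FileSystem.closeAllForUGI(ugi);
    } catch (IOException e) {
      LOG.warn("Failed to close cached FileSystem instances for " + ugi, e);
    }
  }
}
```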
> hive metastore memory leak
> --------------------------
>
> Key: HIVE-24711
> URL: https://issues.apache.org/jira/browse/HIVE-24711
> Project: Hive
> Issue Type: Bug
> Components: Hive, Metastore
> Affects Versions: 3.1.0
> Reporter: LinZhongwei
> Priority: Major
>
> HDP version: 3.1.5.31-1
> Hive version: 3.1.0.3.1.5.31-1
> Hadoop version: 3.1.1.3.1.5.31-1
> We find that the Hive metastore has a memory leak if we set
> compactor.initiator.on to true. If we disable the configuration, the
> memory leak disappears. How can we resolve this problem?
> Even if we set the heap size of the Hive metastore to 40 GB, the
> metastore service goes down with an OutOfMemoryError after about a month.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)