Wang XL created HDFS-12862:
------------------------------
Summary: When modifying a cacheDirective, editLog may serialize a relative
expiryTime
Key: HDFS-12862
URL: https://issues.apache.org/jira/browse/HDFS-12862
Project: Hadoop HDFS
Issue Type: Bug
Components: caching, hdfs
Affects Versions: 2.7.1
Environment:
Reporter: Wang XL
The logic in FSNDNCacheOp#modifyCacheDirective is not correct. When modifying a
cacheDirective, the expiration in the directive may be a relative expiryTime, and
the EditLog will serialize that relative expiry time as-is.
{code:java}
// FSNDNCacheOp#modifyCacheDirective: the directive is logged exactly as it was
// received, so a relative expiration is written to the edit log verbatim.
static void modifyCacheDirective(
    FSNamesystem fsn, CacheManager cacheManager, CacheDirectiveInfo directive,
    EnumSet<CacheFlag> flags, boolean logRetryCache) throws IOException {
  final FSPermissionChecker pc = getFsPermissionChecker(fsn);
  cacheManager.modifyDirective(directive, pc, flags);
  fsn.getEditLog().logModifyCacheDirectiveInfo(directive, logRetryCache);
}
{code}
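One possible direction (only a sketch, not a committed patch) would be to rebase a relative expiration onto an absolute timestamp before the directive reaches the edit log, similar to how addCacheDirective logs the effective directive. This assumes the CacheDirectiveInfo.Builder copy constructor and the Expiration accessors (isRelative(), getMillis(), newAbsolute(...)) are available as in current trunk:
{code:java}
// Sketch only: normalize a relative expiration to an absolute one before it is
// written to the edit log, so the standby reads the same wall-clock expiry.
static void modifyCacheDirective(
    FSNamesystem fsn, CacheManager cacheManager, CacheDirectiveInfo directive,
    EnumSet<CacheFlag> flags, boolean logRetryCache) throws IOException {
  final FSPermissionChecker pc = getFsPermissionChecker(fsn);
  cacheManager.modifyDirective(directive, pc, flags);

  CacheDirectiveInfo toLog = directive;
  CacheDirectiveInfo.Expiration expiry = directive.getExpiration();
  if (expiry != null && expiry.isRelative()) {
    // Rebase the relative expiry onto the active NN's clock so that the edit
    // log always carries an absolute timestamp.
    toLog = new CacheDirectiveInfo.Builder(directive)
        .setExpiration(CacheDirectiveInfo.Expiration.newAbsolute(
            System.currentTimeMillis() + expiry.getMillis()))
        .build();
  }
  fsn.getEditLog().logModifyCacheDirectiveInfo(toLog, logRetryCache);
}
{code}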
But when the standby NN replays the log, it calls
FSImageSerialization#readCacheDirectiveInfo, which interprets the value as an
absolute expiryTime. This results in an inconsistency between the active and
standby namenodes.
{code:java}
public static CacheDirectiveInfo readCacheDirectiveInfo(DataInput in)
    throws IOException {
  CacheDirectiveInfo.Builder builder =
      new CacheDirectiveInfo.Builder();
  builder.setId(readLong(in));
  int flags = in.readInt();
  if ((flags & 0x1) != 0) {
    builder.setPath(new Path(readString(in)));
  }
  if ((flags & 0x2) != 0) {
    builder.setReplication(readShort(in));
  }
  if ((flags & 0x4) != 0) {
    builder.setPool(readString(in));
  }
  if ((flags & 0x8) != 0) {
    builder.setExpiration(
        CacheDirectiveInfo.Expiration.newAbsolute(readLong(in)));
  }
  if ((flags & ~0xF) != 0) {
    throw new IOException("unknown flags set in " +
        "ModifyCacheDirectiveInfoOp: " + flags);
  }
  return builder.build();
}
{code}
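To make the mismatch concrete, here is a small illustration with example values only (assuming the Expiration accessors getMillis() and isRelative()): a relative expiry of one hour written as a raw long is replayed on the standby as an epoch timestamp, so the directive looks expired since 1970 even though the active NN still honours it for an hour.
{code:java}
// Example values only: how a relative expiry is misread on replay.
long relativeMillis = 60L * 60 * 1000;   // active NN logs a relative expiry: 3600000

// The standby replays the op via readCacheDirectiveInfo, i.e. newAbsolute():
CacheDirectiveInfo.Expiration onStandby =
    CacheDirectiveInfo.Expiration.newAbsolute(relativeMillis);

// 3600000 ms since the epoch is 1970-01-01T01:00:00Z, so on the standby the
// directive appears long expired, while the active NN keeps caching it.
System.out.println(onStandby.getMillis());   // 3600000
System.out.println(onStandby.isRelative());  // false
{code}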