[
https://issues.apache.org/jira/browse/HADOOP-13263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15344271#comment-15344271
]
Stephen O'Donnell commented on HADOOP-13263:
--------------------------------------------
[~arpitagarwal] I got the style issues sorted, and I also removed the
synchronized block that was causing findbugs warnings by moving that
initialization into the class constructor. Not sure why I didn't do that
originally, as I think that is the best place for it. All my tests pass
locally, and from the last Hadoop QA run there is only one test failure, which
appears to be unrelated to this patch:
{code}
Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.551 sec <<<
FAILURE! - in org.apache.hadoop.metrics2.impl.TestGangliaMetrics
testGangliaMetrics2(org.apache.hadoop.metrics2.impl.TestGangliaMetrics) Time
elapsed: 0.327 sec <<< FAILURE!
java.lang.AssertionError: Missing metrics: test.s1rec.Xxx
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at
org.apache.hadoop.metrics2.impl.TestGangliaMetrics.checkMetrics(TestGangliaMetrics.java:161)
at
org.apache.hadoop.metrics2.impl.TestGangliaMetrics.testGangliaMetrics2(TestGangliaMetrics.java:139)
{code}
Assuming you are happy with the executor being initialized in the constructor,
I think this one is complete.
Do we need to raise a documentation jira to mention the new parameters that
enable the feature? You also mentioned an additional jira to remove delays from
the tests, making them more robust. Some of the original tests had delays in
them too, so that work should probably cover the entire test class.
> Reload cached groups in background after expiry
> -----------------------------------------------
>
> Key: HADOOP-13263
> URL: https://issues.apache.org/jira/browse/HADOOP-13263
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: Stephen O'Donnell
> Assignee: Stephen O'Donnell
> Attachments: HADOOP-13263.001.patch, HADOOP-13263.002.patch,
> HADOOP-13263.003.patch, HADOOP-13263.004.patch, HADOOP-13263.005.patch,
> HADOOP-13263.006.patch
>
>
> In HADOOP-11238 the Guava cache was introduced to allow refreshes on the
> Namenode group cache to run in the background, avoiding many slow group
> lookups. Even with this change, I have seen quite a few clusters with issues
> due to slow group lookups. The problem is most prevalent in HA clusters,
> where a slow group lookup on the hdfs user can fail to return for over 45
> seconds causing the Failover Controller to kill it.
> The way the current Guava cache implementation works is approximately:
> 1) On initial load, the first thread to request groups for a given user
> blocks until it returns. Any subsequent threads requesting that user block
> until that first thread populates the cache.
> 2) When the key expires, the first thread to hit the cache after expiry
> blocks. While it is blocked, other threads will return the old value.
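The semantics above, together with the background-reload variant this patch proposes, can be sketched in a JDK-only form. This is purely illustrative: the class and method names below are hypothetical, and Hadoop's actual implementation builds on Guava's cache rather than this hand-rolled structure.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Function;

/**
 * Illustrative stand-in for the described semantics: the initial load blocks
 * the caller, but once a key expires, the first caller schedules an
 * asynchronous reload while every caller keeps getting the stale value until
 * the reload completes. Hypothetical names; not Hadoop's real implementation.
 */
class BackgroundRefreshCache<K, V> {
  private static final class Entry<V> {
    final V value;
    final long loadedAtNanos;
    final AtomicBoolean reloading = new AtomicBoolean(false);
    Entry(V value, long loadedAtNanos) {
      this.value = value;
      this.loadedAtNanos = loadedAtNanos;
    }
  }

  private final ConcurrentHashMap<K, Entry<V>> map = new ConcurrentHashMap<>();
  private final ExecutorService pool;
  private final Function<K, V> loader;
  private final long ttlNanos;

  BackgroundRefreshCache(Function<K, V> loader, long ttlMillis, int threads) {
    this.loader = loader;
    this.ttlNanos = TimeUnit.MILLISECONDS.toNanos(ttlMillis);
    // Small pool, analogous to the reload.threads parameter (default 1).
    this.pool = Executors.newFixedThreadPool(threads, r -> {
      Thread t = new Thread(r, "group-cache-reload");
      t.setDaemon(true);
      return t;
    });
  }

  V get(K key) {
    // Initial load: the first caller blocks; concurrent callers for the same
    // key wait on computeIfAbsent until the value is populated (step 1).
    Entry<V> e = map.computeIfAbsent(key,
        k -> new Entry<>(loader.apply(k), System.nanoTime()));
    if (System.nanoTime() - e.loadedAtNanos > ttlNanos
        && e.reloading.compareAndSet(false, true)) {
      // Expired: schedule exactly one background reload for this key; all
      // callers, including this one, continue to see the old value meanwhile.
      pool.submit(() ->
          map.put(key, new Entry<>(loader.apply(key), System.nanoTime())));
    }
    return e.value;
  }
}
```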
> I feel it is this blocking thread that still causes the Namenode issues on
> slow group lookups. If the call from the FC is the one that blocks and
> lookups are slow, it can cause the NN to be killed.
> Guava has the ability to refresh expired keys completely in the background,
> where the first thread that hits an expired key schedules a background cache
> reload, but still returns the old value. Then the cache is eventually
> updated. This patch introduces this background reload feature. There are two
> new parameters:
> 1) hadoop.security.groups.cache.background.reload - defaults to false to
> keep the current behaviour. Set to true to enable a small thread pool and
> background refresh for expired keys.
> 2) hadoop.security.groups.cache.background.reload.threads - only relevant if
> the above is set to true. Controls how many threads are in the background
> refresh pool. Default is 1, which is likely to be enough.
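Opting in would then look something like the following core-site.xml fragment. The property names come from the parameters above; the values shown (and the idea of pairing them in one fragment) are just an illustration.

```xml
<!-- core-site.xml: enable background reloading of expired group mappings -->
<property>
  <name>hadoop.security.groups.cache.background.reload</name>
  <value>true</value>
</property>
<property>
  <!-- Only consulted when background reload is enabled; default is 1 -->
  <name>hadoop.security.groups.cache.background.reload.threads</name>
  <value>1</value>
</property>
```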
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)