[
https://issues.apache.org/jira/browse/HDFS-8303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14522652#comment-14522652
]
Hadoop QA commented on HDFS-8303:
---------------------------------
\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch | 14m 58s | Pre-patch trunk compilation is
healthy. |
| {color:green}+1{color} | @author | 0m 0s | The patch does not contain any
@author tags. |
| {color:red}-1{color} | tests included | 0m 0s | The patch doesn't appear
to include any new or modified tests. Please justify why no new tests are
needed for this patch. Also please list what manual steps were performed to
verify this patch. |
| {color:green}+1{color} | whitespace | 0m 0s | The patch has no lines that
end in whitespace. |
| {color:green}+1{color} | javac | 7m 41s | There were no new javac warning
messages. |
| {color:green}+1{color} | javadoc | 10m 1s | There were no new javadoc
warning messages. |
| {color:green}+1{color} | release audit | 0m 22s | The applied patch does
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle | 5m 36s | There were no new checkstyle
issues. |
| {color:green}+1{color} | install | 1m 38s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse | 0m 34s | The patch built with
eclipse:eclipse. |
| {color:red}-1{color} | findbugs | 3m 12s | The patch appears to introduce 1
new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native | 3m 19s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 214m 18s | Tests failed in hadoop-hdfs. |
| | | 261m 45s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| | Class org.apache.hadoop.hdfs.DataStreamer$LastException is not derived
from an Exception, even though it is named as such. At
DataStreamer.java:[lines 177-201] |
| Failed unit tests | hadoop.hdfs.TestDFSClientRetries |
| | hadoop.hdfs.server.namenode.TestDeleteRace |
| | hadoop.hdfs.server.datanode.TestBlockRecovery |
| | hadoop.hdfs.server.datanode.fsdataset.impl.TestRbwSpaceReservation |
| | hadoop.hdfs.TestClose |
| | hadoop.hdfs.TestDFSOutputStream |
| | hadoop.hdfs.TestCrcCorruption |
| | hadoop.hdfs.TestFileLengthOnClusterRestart |
| | hadoop.hdfs.TestQuota |
| | hadoop.hdfs.TestMultiThreadedHflush |
| | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
| | hadoop.cli.TestHDFSCLI |
| Timed out tests | org.apache.hadoop.hdfs.TestDataTransferProtocol |
| | org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
| | org.apache.hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
| | org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL |
http://issues.apache.org/jira/secure/attachment/12729620/HDFS-8303.1.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f0db797 |
| Findbugs warnings |
https://builds.apache.org/job/PreCommit-HDFS-Build/10491/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
|
| hadoop-hdfs test log |
https://builds.apache.org/job/PreCommit-HDFS-Build/10491/artifact/patchprocess/testrun_hadoop-hdfs.txt
|
| Test Results |
https://builds.apache.org/job/PreCommit-HDFS-Build/10491/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output |
https://builds.apache.org/job/PreCommit-HDFS-Build/10491/console |
This message was automatically generated.
> QJM should purge old logs in the current directory through FJM
> --------------------------------------------------------------
>
> Key: HDFS-8303
> URL: https://issues.apache.org/jira/browse/HDFS-8303
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Zhe Zhang
> Assignee: Zhe Zhang
> Attachments: HDFS-8303.0.patch, HDFS-8303.1.patch
>
>
> As the first step of the consolidation effort, QJM should call its FJM to
> purge the current directory.
> QJM's current logic for purging the current directory is very similar to
> FJM's purging logic.
> QJM:
> {code}
> private static final List<Pattern> CURRENT_DIR_PURGE_REGEXES =
>     ImmutableList.of(
>         Pattern.compile("edits_\\d+-(\\d+)"),
>         Pattern.compile("edits_inprogress_(\\d+)(?:\\..*)?"));
> ...
> long txid = Long.parseLong(matcher.group(1));
> if (txid < minTxIdToKeep) {
>   LOG.info("Purging no-longer needed file " + txid);
>   if (!f.delete()) {
>     ...
> {code}
> FJM:
> {code}
> private static final Pattern EDITS_REGEX = Pattern.compile(
>     NameNodeFile.EDITS.getName() + "_(\\d+)-(\\d+)");
> private static final Pattern EDITS_INPROGRESS_REGEX = Pattern.compile(
>     NameNodeFile.EDITS_INPROGRESS.getName() + "_(\\d+)");
> private static final Pattern EDITS_INPROGRESS_STALE_REGEX = Pattern.compile(
>     NameNodeFile.EDITS_INPROGRESS.getName() + "_(\\d+).*(\\S+)");
> ...
> List<EditLogFile> editLogs = matchEditLogs(files, true);
> for (EditLogFile log : editLogs) {
>   if (log.getFirstTxId() < minTxIdToKeep &&
>       log.getLastTxId() < minTxIdToKeep) {
>     purger.purgeLog(log);
>   }
> }
> {code}
> I can see 2 differences:
> # The regexes used to match empty/corrupt in-progress files differ; the FJM
> pattern makes more sense to me.
> # FJM verifies that both the start and end txIDs of a finalized edit file
> are old enough. The start-txID check is redundant, since the end txID is
> always greater than or equal to the start txID, so if the end txID is below
> minTxIdToKeep the start txID must be as well.
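The second difference above can be sketched as a standalone purge check. This is a hypothetical illustration, not Hadoop's actual code: the class name `EditLogPurgeCheck` and its patterns are made up for the example, and it follows the simplified rule (checking only the last txID of a finalized segment, which suffices since the last txID is never smaller than the first).

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch of an edit-log purge decision, mirroring FJM-style
// file-name patterns. Not the real QJM/FJM implementation.
class EditLogPurgeCheck {
    // Finalized segment: edits_<firstTxId>-<lastTxId>
    static final Pattern EDITS_REGEX = Pattern.compile("edits_(\\d+)-(\\d+)");
    // In-progress segment: edits_inprogress_<firstTxId>
    static final Pattern EDITS_INPROGRESS_REGEX =
        Pattern.compile("edits_inprogress_(\\d+)");

    static boolean shouldPurge(String fileName, long minTxIdToKeep) {
        Matcher m = EDITS_REGEX.matcher(fileName);
        if (m.matches()) {
            // lastTxId >= firstTxId always holds, so checking the last txID
            // alone is sufficient for a finalized segment.
            long lastTxId = Long.parseLong(m.group(2));
            return lastTxId < minTxIdToKeep;
        }
        m = EDITS_INPROGRESS_REGEX.matcher(fileName);
        if (m.matches()) {
            return Long.parseLong(m.group(1)) < minTxIdToKeep;
        }
        return false; // unrecognized files are left alone
    }
}
```

With `minTxIdToKeep = 200`, `edits_1-100` would be purged, while `edits_150-300` would be kept because its last txID is still needed even though its first txID is old.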
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)