[ https://issues.apache.org/jira/browse/HDFS-9395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15129393#comment-15129393 ]
Hadoop QA commented on HDFS-9395:
---------------------------------
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s {color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 49s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s {color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 51s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 8s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 4s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 27s {color} | {color:green} Patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 156m 51s {color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
| | hadoop.hdfs.server.datanode.TestBlockScanner |
| | hadoop.hdfs.TestFileAppend |
| | hadoop.hdfs.TestReconstructStripedFile |
| JDK v1.7.0_91 Failed junit tests | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
| | hadoop.hdfs.server.namenode.ha.TestRequestHedgingProxyProvider |
| | hadoop.hdfs.server.datanode.TestBlockScanner |
| | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
| | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
| | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12785853/HDFS-9395.002.patch |
| JIRA Issue | HDFS-9395 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 2879de1ae378 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4ae543f |
| Default Java | 1.7.0_91 |
| Multi-JDK versions | /usr/lib/jvm/java-8-oracle:1.8.0_66 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_91 |
| findbugs | v3.0.0 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/14347/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_66.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/14347/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_91.txt |
| unit test logs | https://builds.apache.org/job/PreCommit-HDFS-Build/14347/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_66.txt https://builds.apache.org/job/PreCommit-HDFS-Build/14347/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.7.0_91.txt |
| JDK v1.7.0_91 Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/14347/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Max memory used | 77MB |
| Powered by | Apache Yetus 0.2.0-SNAPSHOT http://yetus.apache.org |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/14347/console |
This message was automatically generated.
> HDFS operations vary widely in which failures they put in the audit log and
> which they leave out
> ------------------------------------------------------------------------------------------------
>
> Key: HDFS-9395
> URL: https://issues.apache.org/jira/browse/HDFS-9395
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Kihwal Lee
> Assignee: Kuhu Shukla
> Attachments: HDFS-9395.001.patch, HDFS-9395.002.patch
>
>
> So, the big question here is what should go in the audit log? All failures,
> or just "permission denied" failures? Or, to put it a different way, if
> someone attempts to do something and it fails because a file doesn't exist,
> is that worth an audit log entry?
> We are currently inconsistent on this point. For example, concat,
> getContentSummary, addCacheDirective, and setErasureEncodingPolicy create an
> audit log entry for all failures, but setOwner, delete, and setAclEntries
> attempt to only create an entry for AccessControlException-based failures.
> There are a few operations, like allowSnapshot, disallowSnapshot, and
> startRollingUpgrade, that never create audit log failure entries at all. They
> simply log nothing for any failure, and log success only for a successful
> operation.
> So to summarize, different HDFS operations currently fall into 3 categories:
> 1. audit-log all failures
> 2. audit-log only AccessControlException failures
> 3. never audit-log failures
> Which category is right? And how can we fix the inconsistency?
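For readers less familiar with the audit-log code paths, here is a minimal, self-contained sketch of the three patterns the description contrasts. It is not the FSNamesystem code and not part of the HDFS-9395 patches; the operation name, the doWork() helper, and the logAuditEvent signature are illustrative assumptions, and the import assumes hadoop-common's AccessControlException is on the classpath.

{code:java}
// Illustrative sketch only -- not FSNamesystem code and not the HDFS-9395 patch.
import java.io.IOException;
import org.apache.hadoop.security.AccessControlException;

class AuditPatternsSketch {

  // Stand-in for the namenode's audit sink; this signature is an assumption.
  static void logAuditEvent(boolean succeeded, String cmd, String src) {
    System.out.println("allowed=" + succeeded + " cmd=" + cmd + " src=" + src);
  }

  // Hypothetical placeholder for the actual operation body.
  static void doWork(String src) throws IOException { }

  // Pattern 1: audit every failure (concat, getContentSummary, ... per the description).
  static void patternAllFailures(String src) throws IOException {
    boolean success = false;
    try {
      doWork(src);
      success = true;
    } finally {
      logAuditEvent(success, "op", src);   // entry written on success and on any failure
    }
  }

  // Pattern 2: audit only AccessControlException failures (setOwner, delete, ...).
  static void patternAclFailuresOnly(String src) throws IOException {
    try {
      doWork(src);
    } catch (AccessControlException ace) {
      logAuditEvent(false, "op", src);     // other exceptions leave no audit entry
      throw ace;
    }
    logAuditEvent(true, "op", src);
  }

  // Pattern 3: never audit failures (allowSnapshot, disallowSnapshot, ...).
  static void patternNoFailureLogging(String src) throws IOException {
    doWork(src);                           // any exception propagates with no audit entry
    logAuditEvent(true, "op", src);
  }
}
{code}

The open question above is essentially which of these three shapes every operation should converge on.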