[jira] [Created] (MAPREDUCE-6887) Modifier 'static' is redundant for inner enums
ZhangBing Lin created MAPREDUCE-6887:
----------------------------------------

             Summary: Modifier 'static' is redundant for inner enums
                 Key: MAPREDUCE-6887
                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6887
             Project: Hadoop Map/Reduce
          Issue Type: Bug
    Affects Versions: 3.0.0-alpha3
            Reporter: ZhangBing Lin
            Priority: Minor

--
This message was sent by Atlassian JIRA (v6.3.15#6346)

To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org
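For context on the issue: the Java Language Specification (§8.9) makes a nested enum type implicitly static, so an explicit 'static' modifier on it is redundant and is what checkstyle's RedundantModifier rule flags. A minimal sketch (class and enum names are illustrative, not from the Hadoop code):

```java
public class Outer {
    // The 'static' here is redundant: a nested enum is implicitly static
    // per JLS §8.9, so this declaration is identical to the one below.
    static enum RedundantStyle { A, B }

    // Preferred form; this is the cleanup the JIRA proposes.
    enum PreferredStyle { A, B }

    public static void main(String[] args) {
        // Neither enum needs (or can use) an enclosing Outer instance.
        System.out.println(Outer.RedundantStyle.A);
        System.out.println(Outer.PreferredStyle.values().length);
    }
}
```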
Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/311/

[May 10, 2017 1:47:48 PM] (jlowe) YARN-6552. Increase YARN test timeouts from 1 second to 10 seconds.
[May 10, 2017 2:57:41 PM] (jlowe) MAPREDUCE-6882. Increase MapReduce test timeouts from 1 second to 10
[May 10, 2017 5:46:50 PM] (templedf) YARN-6475. Fix some long function checkstyle issues (Contributed by
[May 10, 2017 6:02:31 PM] (jlowe) HDFS-11745. Increase HDFS test timeouts from 1 second to 10 seconds.
[May 10, 2017 7:15:57 PM] (kihwal) HDFS-11755. Underconstruction blocks can be considered missing.
[May 10, 2017 9:33:33 PM] (liuml07) HDFS-11800. Document output of 'hdfs count -u' should contain PATHNAME.
[May 10, 2017 9:34:13 PM] (templedf) YARN-6571. Fix JavaDoc issues in SchedulingPolicy (Contributed by Weiwei
[May 10, 2017 9:49:25 PM] (Carlo Curino) YARN-6473. Create ReservationInvariantChecker to validate
[May 10, 2017 10:05:11 PM] (liuml07) HADOOP-14361. Azure: NativeAzureFileSystem.getDelegationToken() call
[May 11, 2017 5:25:28 AM] (cdouglas) HDFS-11681. DatanodeStorageInfo#getBlockIterator() should return an

-1 overall

The following subsystems voted -1: compile mvninstall unit
The following subsystems voted -1 but were configured to be filtered/ignored: cc javac
The following subsystems are considered long running (runtime bigger than 1h 0m 0s): unit

Specific tests:

   Failed junit tests:
      hadoop.ha.TestZKFailoverControllerStress
      hadoop.hdfs.server.namenode.TestNameNodeRespectsBindHostKeys
      hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120
      hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140
      hadoop.hdfs.TestReadStripedFileWithMissingBlocks
      hadoop.hdfs.server.namenode.TestProcessCorruptBlocks
      hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage
      hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160
      hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180
      hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170
      hadoop.hdfs.server.namenode.TestFSImage
      hadoop.hdfs.qjournal.server.TestJournalNode
      hadoop.hdfs.server.namenode.TestDecommissioningStatus
      hadoop.hdfs.TestSafeMode
      hadoop.hdfs.TestDFSUpgrade
      hadoop.hdfs.server.namenode.ha.TestBootstrapStandby
      hadoop.hdfs.TestDistributedFileSystem
      hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
      hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations
      hadoop.hdfs.server.namenode.TestNamenodeStorageDirectives
      hadoop.hdfs.server.datanode.metrics.TestDataNodeOutlierDetectionViaMetrics
      hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150
      hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
      hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100
      hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
      hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110
      hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010
      hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090
      hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure
      hadoop.hdfs.TestDFSShell
      hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
      hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200
      hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
      hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080
      hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
      hadoop.hdfs.TestRollingUpgrade
      hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000
      hadoop.hdfs.web.TestWebHdfsTimeouts
      hadoop.hdfs.server.datanode.TestDataNodeUUID
      hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation
      hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210
      hadoop.hdfs.TestFileAppend
      hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030
      hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService
      hadoop.mapred.TestShuffleHandler
      hadoop.yarn.sls.TestSLSRunner
      hadoop.yarn.applications.distributedshell.TestDistributedShell
      hadoop.yarn.client.api.impl.TestAMRMClient
      hadoop.yarn.server.timeline.TestRollingLevelDB
      hadoop.yarn.server.timeline.TestTimelineDataManager
      hadoop.yarn.server.timeline.TestLeveldbTimelineStore
      hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore
      hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore
      hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer
      hadoop.yarn.server.resourcemanager.TestRMEmbeddedElector
      hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore
      hadoop.yarn.server.resourcemanager.TestRMRestart
      hadoop.yarn.server.resourcemanager.TestOpportunisticContainerAllocatorAMService
      hadoop.yarn.server.TestDiskFailures
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/400/

[May 10, 2017 10:29:47 AM] (aajisaka) HADOOP-14373. License error in org.apache.hadoop.metrics2.util.Servers.
[May 10, 2017 10:57:12 AM] (aajisaka) HADOOP-14400. Fix warnings from spotbugs in hadoop-tools. Contributed by
[May 10, 2017 1:47:48 PM] (jlowe) YARN-6552. Increase YARN test timeouts from 1 second to 10 seconds.
[May 10, 2017 2:57:41 PM] (jlowe) MAPREDUCE-6882. Increase MapReduce test timeouts from 1 second to 10
[May 10, 2017 5:46:50 PM] (templedf) YARN-6475. Fix some long function checkstyle issues (Contributed by
[May 10, 2017 6:02:31 PM] (jlowe) HDFS-11745. Increase HDFS test timeouts from 1 second to 10 seconds.
[May 10, 2017 7:15:57 PM] (kihwal) HDFS-11755. Underconstruction blocks can be considered missing.
[May 10, 2017 9:33:33 PM] (liuml07) HDFS-11800. Document output of 'hdfs count -u' should contain PATHNAME.
[May 10, 2017 9:34:13 PM] (templedf) YARN-6571. Fix JavaDoc issues in SchedulingPolicy (Contributed by Weiwei
[May 10, 2017 9:49:25 PM] (Carlo Curino) YARN-6473. Create ReservationInvariantChecker to validate
[May 10, 2017 10:05:11 PM] (liuml07) HADOOP-14361. Azure: NativeAzureFileSystem.getDelegationToken() call
[May 11, 2017 5:25:28 AM] (cdouglas) HDFS-11681. DatanodeStorageInfo#getBlockIterator() should return an

-1 overall

The following subsystems voted -1: findbugs unit
The following subsystems voted -1 but were configured to be filtered/ignored: cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace
The following subsystems are considered long running (runtime bigger than 1h 0m 0s): unit

Specific tests:

   FindBugs : module:hadoop-common-project/hadoop-minikdc
      Possible null pointer dereference in org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called method Dereferenced at MiniKdc.java:[line 368]

   FindBugs : module:hadoop-common-project/hadoop-auth
      org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(HttpServletRequest, HttpServletResponse) makes inefficient use of keySet iterator instead of entrySet iterator At MultiSchemeAuthenticationHandler.java:[line 192]

   FindBugs : module:hadoop-common-project/hadoop-common
      org.apache.hadoop.crypto.CipherSuite.setUnknownValue(int) unconditionally sets the field unknownValue At CipherSuite.java:[line 44]
      org.apache.hadoop.crypto.CryptoProtocolVersion.setUnknownValue(int) unconditionally sets the field unknownValue At CryptoProtocolVersion.java:[line 67]
      Possible null pointer dereference in org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value of called method Dereferenced at FileUtil.java:[line 118]
      Possible null pointer dereference in org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, File, Path, File) due to return value of called method Dereferenced at RawLocalFileSystem.java:[line 387]
      Return value of org.apache.hadoop.fs.permission.FsAction.or(FsAction) ignored, but method has no side effect At FTPFileSystem.java:[line 421]
      Useless condition: lazyPersist == true at this point At CommandWithDestination.java:[line 502]
      org.apache.hadoop.io.DoubleWritable.compareTo(DoubleWritable) incorrectly handles double value At DoubleWritable.java:[line 78]
      org.apache.hadoop.io.DoubleWritable$Comparator.compare(byte[], int, int, byte[], int, int) incorrectly handles double value At DoubleWritable.java:[line 97]
      org.apache.hadoop.io.FloatWritable.compareTo(FloatWritable) incorrectly handles float value At FloatWritable.java:[line 71]
      org.apache.hadoop.io.FloatWritable$Comparator.compare(byte[], int, int, byte[], int, int) incorrectly handles float value At FloatWritable.java:[line 89]
      Possible null pointer dereference in org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to return value of called method
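As background on the "incorrectly handles double value" findings above: FindBugs flags comparisons built from relational operators because they mishandle NaN (every comparison involving NaN is false) and treat -0.0 as equal to 0.0, which can break the Comparable contract. The sketch below illustrates the failure mode and the Double.compare total ordering; it is illustrative only, not the actual DoubleWritable code.

```java
public class DoubleCompareDemo {
    // Naive comparison via relational operators: returns 0 ("equal")
    // whenever either argument is NaN, because both tests are false.
    static int naiveCompare(double a, double b) {
        return (a < b) ? -1 : ((a > b) ? 1 : 0);
    }

    public static void main(String[] args) {
        // NaN compares "equal" to everything under the naive scheme:
        System.out.println(naiveCompare(Double.NaN, 1.0));   // 0

        // Double.compare imposes a total order instead: NaN sorts above
        // every other value, and -0.0 sorts below 0.0.
        System.out.println(Double.compare(Double.NaN, 1.0)); // positive
        System.out.println(Double.compare(-0.0, 0.0));       // negative
    }
}
```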
[jira] [Created] (MAPREDUCE-6886) Job History File Permissions configurable
Prabhu Joseph created MAPREDUCE-6886:
----------------------------------------

             Summary: Job History File Permissions configurable
                 Key: MAPREDUCE-6886
                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6886
             Project: Hadoop Map/Reduce
          Issue Type: Bug
    Affects Versions: 2.7.1
            Reporter: Prabhu Joseph

Currently the MapReduce job history files are written with 770 permissions, so they can be accessed only by the job user or other users in the hadoop group. Customers have users who are not part of the hadoop group but want to access these history files. We could make the permissions configurable, e.g. 770 (strict) or 755 (all), with 770 as the default.
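A minimal sketch of the proposed behavior. The configuration key name below is hypothetical (the JIRA does not specify one), and the octal parsing stands in for however the history server would read the setting:

```java
public class HistoryFilePermissions {
    // Hypothetical key; current Hadoop behavior hard-codes 770 instead.
    static final String PERMS_KEY = "mapreduce.jobhistory.files.permissions";
    static final String DEFAULT_PERMS = "770"; // strict: owner + hadoop group only

    // Parse an octal permission string ("770", "755") into its numeric mode.
    static int parseOctal(String perms) {
        return Integer.parseInt(perms, 8);
    }

    public static void main(String[] args) {
        // 770 -> rwxrwx---  (group-restricted, the current default)
        // 755 -> rwxr-xr-x  (world-readable, for users outside the group)
        System.out.println(parseOctal(DEFAULT_PERMS));
        System.out.println(parseOctal("755"));
    }
}
```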