Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/818/

[Jun 20, 2018 5:42:13 PM] (haibochen) YARN-8437. Build oom-listener fails on older versions. (Miklos Szegedi
[Jun 20, 2018 5:59:33 PM] (miklos.szegedi) YARN-8391. Investigate AllocationFileLoaderService.reloadListener
[Jun 20, 2018 6:36:12 PM] (miklos.szegedi) YARN-8440. Typo in YarnConfiguration javadoc: "Miniumum request
[Jun 20, 2018 6:40:56 PM] (miklos.szegedi) YARN-7449. Split up class TestYarnClient to TestYarnClient and
[Jun 20, 2018 6:55:43 PM] (miklos.szegedi) YARN-8442. Strange characters and missing spaces in FairScheduler
[Jun 20, 2018 6:58:18 PM] (miklos.szegedi) YARN-8441. Typo in CSQueueUtils local variable names:
[Jun 20, 2018 7:04:44 PM] (miklos.szegedi) MAPREDUCE-7113. Typos in test names in TestTaskAttempt:
[Jun 20, 2018 10:45:08 PM] (mackrorysd) HADOOP-14918. Remove the Local Dynamo DB test option. Contributed by
[Jun 20, 2018 10:58:26 PM] (xiao) HDFS-13682. Cannot create encryption zone after KMS auth token expires.
[Jun 20, 2018 11:43:10 PM] (todd) HADOOP-15551. Avoid use of Arrays.stream in Configuration.addTags

-1 overall

The following subsystems voted -1:
    asflicense findbugs pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    Failed junit tests:
        hadoop.security.TestRaceWhenRelogin
        hadoop.security.TestFixKerberosTicketOrder
        hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations
        hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
        hadoop.hdfs.client.impl.TestBlockReaderLocal
        hadoop.hdfs.web.TestWebHdfsTimeouts
        hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart
        hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageEntities
        hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity
        hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun
        hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageApps
        hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction
        hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageSchema
        hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageDomain
        hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
        hadoop.mapred.TestMRTimelineEventHandling

    cc:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/818/artifact/out/diff-compile-cc-root.txt [4.0K]

    javac:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/818/artifact/out/diff-compile-javac-root.txt [352K]

    checkstyle:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/818/artifact/out/diff-checkstyle-root.txt [4.0K]

    pathlen:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/818/artifact/out/pathlen.txt [12K]

    pylint:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/818/artifact/out/diff-patch-pylint.txt [24K]

    shellcheck:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/818/artifact/out/diff-patch-shellcheck.txt [20K]

    shelldocs:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/818/artifact/out/diff-patch-shelldocs.txt [16K]

    whitespace:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/818/artifact/out/whitespace-eol.txt [9.4M]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/818/artifact/out/whitespace-tabs.txt [1.1M]

    xml:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/818/artifact/out/xml.txt [4.0K]

    findbugs:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/818/artifact/out/branch-findbugs-hadoop-hdds_client.txt [56K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/818/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt [48K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/818/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt [60K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/818/artifact/out/branch-findbugs-hadoop-hdds_tools.txt [12K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/818/artifact/out/branch-findbugs-hadoop-ozone_client.txt [4.0K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/818/artifact/out/branch-findbugs-hadoop-ozone_common.txt [24K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/818/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt [4.0K]
[jira] [Created] (HDDS-184) Upgrade common-langs version to 3.7 in hadoop-tools/hadoop-ozone
Takanobu Asanuma created HDDS-184:
-------------------------------------

             Summary: Upgrade common-langs version to 3.7 in hadoop-tools/hadoop-ozone
                 Key: HDDS-184
                 URL: https://issues.apache.org/jira/browse/HDDS-184
             Project: Hadoop Distributed Data Store
          Issue Type: Improvement
            Reporter: Takanobu Asanuma
            Assignee: Takanobu Asanuma

This task is split out from HADOOP-15495 for simplicity.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
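For context, a version bump like this is usually a one-line change in the module's pom.xml. The fragment below is a hedged sketch only: it assumes the library in question is org.apache.commons:commons-lang3 (the "common-langs" in the summary appears to be shorthand), and the actual module may manage the version through a parent pom property instead.

```xml
<!-- Hypothetical fragment; the real module may use a ${commons-lang3.version}
     property managed in a parent pom rather than a literal version. -->
<dependency>
  <groupId>org.apache.commons</groupId>
  <artifactId>commons-lang3</artifactId>
  <version>3.7</version>
</dependency>
```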
[jira] [Created] (HDDS-183) Create KeyValueContainerManager class
Bharat Viswanadham created HDDS-183:
-------------------------------------

             Summary: Create KeyValueContainerManager class
                 Key: HDDS-183
                 URL: https://issues.apache.org/jira/browse/HDDS-183
             Project: Hadoop Distributed Data Store
          Issue Type: Sub-task
            Reporter: Bharat Viswanadham

This class handles KeyValueContainer operations. In this jira, we add building of the container map when the datanode starts up.
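The "build a container map at startup" idea could be sketched roughly as below. This is not the actual HDDS-183 API; the class, method names, and the choice to map container ID to a metadata path are all illustrative assumptions.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of an in-memory container map built at datanode
// startup; names are illustrative, not the real KeyValueContainerManager.
class ContainerMapSketch {
    private final Map<Long, String> containerMap = new ConcurrentHashMap<>();

    // Register a container discovered on disk during the startup scan.
    void addContainer(long containerId, String metadataPath) {
        String prev = containerMap.putIfAbsent(containerId, metadataPath);
        if (prev != null) {
            // Two on-disk locations claiming the same ID indicates corruption.
            throw new IllegalStateException("Duplicate container: " + containerId);
        }
    }

    // Look up a container's metadata location by ID.
    String getContainer(long containerId) {
        return containerMap.get(containerId);
    }

    int size() {
        return containerMap.size();
    }
}
```

Using a concurrent map means later request handling can read it without extra locking while the startup scan is still populating it.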
[jira] [Created] (HDDS-182) Integrate HddsDispatcher
Hanisha Koneru created HDDS-182:
-------------------------------------

             Summary: Integrate HddsDispatcher
                 Key: HDDS-182
                 URL: https://issues.apache.org/jira/browse/HDDS-182
             Project: Hadoop Distributed Data Store
          Issue Type: Sub-task
            Reporter: Hanisha Koneru
            Assignee: Hanisha Koneru

1. Commands from SCM to Datanode should go through the new HddsDispatcher.
2. Clean up container-service's ozone.container.common package.
Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64
For more details, see https://builds.apache.org/job/hadoop-trunk-win/504/

[Jun 20, 2018 5:42:13 PM] (haibochen) YARN-8437. Build oom-listener fails on older versions. (Miklos Szegedi
[Jun 20, 2018 5:59:33 PM] (miklos.szegedi) YARN-8391. Investigate AllocationFileLoaderService.reloadListener
[Jun 20, 2018 6:36:12 PM] (miklos.szegedi) YARN-8440. Typo in YarnConfiguration javadoc: "Miniumum request
[Jun 20, 2018 6:40:56 PM] (miklos.szegedi) YARN-7449. Split up class TestYarnClient to TestYarnClient and
[Jun 20, 2018 6:55:43 PM] (miklos.szegedi) YARN-8442. Strange characters and missing spaces in FairScheduler
[Jun 20, 2018 6:58:18 PM] (miklos.szegedi) YARN-8441. Typo in CSQueueUtils local variable names:
[Jun 20, 2018 7:04:44 PM] (miklos.szegedi) MAPREDUCE-7113. Typos in test names in TestTaskAttempt:
[Jun 20, 2018 10:45:08 PM] (mackrorysd) HADOOP-14918. Remove the Local Dynamo DB test option. Contributed by
[Jun 20, 2018 10:58:26 PM] (xiao) HDFS-13682. Cannot create encryption zone after KMS auth token expires.
[Jun 20, 2018 11:43:10 PM] (todd) HADOOP-15551. Avoid use of Arrays.stream in Configuration.addTags

-1 overall

The following subsystems voted -1:
    compile mvninstall pathlen unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc javac

The following subsystems are considered long running (runtime bigger than 1h 00m 00s):
    unit

Specific tests:

    Failed junit tests:
        hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec
        hadoop.fs.contract.rawlocal.TestRawlocalContractAppend
        hadoop.fs.TestFileUtil
        hadoop.fs.TestFsShellCopy
        hadoop.fs.TestFsShellList
        hadoop.fs.TestLocalFileSystem
        hadoop.http.TestHttpServer
        hadoop.http.TestHttpServerLogs
        hadoop.io.nativeio.TestNativeIO
        hadoop.ipc.TestIPC
        hadoop.ipc.TestSocketFactory
        hadoop.metrics2.impl.TestStatsDMetrics
        hadoop.security.TestGroupsCaching
        hadoop.security.TestSecurityUtil
        hadoop.security.TestShellBasedUnixGroupsMapping
        hadoop.security.token.TestDtUtilShell
        hadoop.util.TestDiskCheckerWithDiskIo
        hadoop.util.TestNativeCodeLoader
        hadoop.hdfs.qjournal.server.TestJournalNode
        hadoop.hdfs.qjournal.server.TestJournalNodeSync
        hadoop.hdfs.server.blockmanagement.TestBlockManagerSafeMode
        hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages
        hadoop.hdfs.server.datanode.fsdataset.impl.TestProvidedImpl
        hadoop.hdfs.server.datanode.TestBlockPoolSliceStorage
        hadoop.hdfs.server.datanode.TestBlockScanner
        hadoop.hdfs.server.datanode.TestDataNodeFaultInjector
        hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
        hadoop.hdfs.server.datanode.TestDirectoryScanner
        hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport
        hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand
        hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC
        hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA
        hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
        hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics
        hadoop.hdfs.server.namenode.TestNameNodeMXBean
        hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs
        hadoop.hdfs.TestDFSShell
        hadoop.hdfs.TestDFSStripedInputStream
        hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy
        hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
        hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
        hadoop.hdfs.TestDFSUpgradeFromImage
        hadoop.hdfs.TestFetchImage
        hadoop.hdfs.TestFileConcurrentReader
        hadoop.hdfs.TestHDFSFileSystemContract
        hadoop.hdfs.TestLeaseRecovery
        hadoop.hdfs.TestPread
        hadoop.hdfs.TestSecureEncryptionZoneWithKMS
        hadoop.hdfs.TestTrashWithSecureEncryptionZones
        hadoop.hdfs.tools.TestDFSAdmin
        hadoop.hdfs.web.TestWebHDFS
        hadoop.hdfs.web.TestWebHdfsUrl
        hadoop.fs.http.server.TestHttpFSServerWebServer
        hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch
        hadoop.yarn.server.nodemanager.containermanager.TestAuxServices
        hadoop.yarn.server.nodemanager.containermanager.TestContainerManager
        hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService
        hadoop.yarn.server.nodemanager.TestContainerExecutor
        hadoop.yarn.server.nodemanager.TestNodeManagerResync
        hadoop.yarn.server.nodemanager.TestNodeStatusUpdater
        hadoop.yarn.server.webproxy.amfilter.TestAmFilter
        hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer
        hadoop.yarn.server.timeline.security.TestTimelineAuthenticationFilterForV1
        hadoop.yarn.server.resourcemanager.scheduler.capacity.conf.TestFSSchedulerConfigurationStore
Re: HADOOP-14163 proposal for new hadoop.apache.org
Thank you very much for bumping up this thread.

About [2]: (Just for clarification) the content of the proposed website is exactly the same as the old one.

About [1]: I believe that "mvn site" is perfect for the documentation, but for website creation there are simpler and more powerful tools.

Hugo is simpler compared to Jekyll: just one binary, without dependencies, that works everywhere (mac, linux, windows).

Hugo is much more powerful compared to "mvn site": it is easier to create/use a more modern layout/theme, and easier to handle the content (for example, new release announcements could be generated as part of the release process).

I think it's very low risk to try out a new approach for the site (and easy to roll back in case of problems).

Marton

ps: I just updated the patch/preview site with the recent releases:
 * http://hadoop.anzix.net

On 06/21/2018 01:27 AM, Vinod Kumar Vavilapalli wrote:
> Got pinged about this offline.
>
> Thanks for keeping at it, Marton!
>
> I think there are two road-blocks here:
> (1) Is the mechanism using which the website is built good enough - mvn-site / hugo etc?
> (2) Is the new website good enough?
>
> For (1), I just think we need more committer attention, so we can get feedback rapidly and get it in.
>
> For (2), how about we do it in a different way in the interest of progress?
>  - We create a hadoop.apache.org/new-site/ where this new site goes.
>  - We then modify the existing web-site to say that there is a new site/experience that folks can click on a link and navigate to.
>  - As this new website matures and gets feedback & fixes, we finally pull the plug at a later point of time when we think we are good to go.
>
> Thoughts?
>
> +Vinod
>
>> On Feb 16, 2018, at 3:10 AM, Elek, Marton wrote:
>>
>> Hi,
>>
>> I would like to bump this thread up.
>>
>> TLDR; There is a proposed version of a new hadoop site which is available here: https://elek.github.io/hadoop-site-proposal/ and https://issues.apache.org/jira/browse/HADOOP-14163
>>
>> Please let me know what you think about it.
>>
>> Longer version: This thread started a long time ago with the aim of a more modern hadoop site. The goals were:
>> 1. Make it easier to manage (the release entries could be created by a script as part of the release process)
>> 2. Use a better look-and-feel
>> 3. Move it out from svn to git
>>
>> I proposed to:
>> 1. Move the existing site to git and generate it with hugo (which is a single, standalone binary)
>> 2. Move both the rendered and source branches to git.
>> 3. (Create a jenkins job to generate the site automatically)
>>
>> NOTE: this is just about the forrest-based hadoop.apache.org, NOT about the documentation, which is generated by mvn-site (as before).
>>
>> I got multiple pieces of valuable feedback and improved the proposed site according to the comments. Allen had some concerns about the technologies used (hugo vs. mvn-site), and I answered all the questions about why I think mvn-site is best for the documentation and hugo is best for generating the site.
>>
>> I would like to finish this effort/jira: I would like to start a discussion about using this proposed version and approach as the new site of Apache Hadoop. Please let me know what you think.
>>
>> Thanks a lot,
>> Marton
[jira] [Created] (HDFS-13693) Remove unnecessary search in INodeDirectory.addChild during image loading
zhouyingchao created HDFS-13693:
-------------------------------------

             Summary: Remove unnecessary search in INodeDirectory.addChild during image loading
                 Key: HDFS-13693
                 URL: https://issues.apache.org/jira/browse/HDFS-13693
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: namenode
            Reporter: zhouyingchao

In FSImageFormatPBINode.loadINodeDirectorySection, all child INodes are added to their parent INode's map one by one. The adding procedure searches for a position in the parent's map and then inserts the child at that position. During image loading, however, the search is unnecessary: the insert position is always at the end of the map, given the order in which the children are serialized on disk.
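The optimization can be illustrated with a small, self-contained sketch (this is not the actual INodeDirectory code): when elements arrive already sorted, a plain append produces the same result as a binary-search insertion, at O(1) instead of O(log n) per child.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustrative sketch: appending pre-sorted input is equivalent to
// binary-search insertion, which is the insight behind HDFS-13693.
class SortedAppend {
    // General-purpose insert: binary-search for the position first.
    static void insertWithSearch(List<String> children, String name) {
        int pos = Collections.binarySearch(children, name);
        // For an absent key, binarySearch returns -(insertionPoint) - 1.
        children.add(pos < 0 ? -(pos + 1) : pos, name);
    }

    // Loading-time insert: trust the serialized order and append at the end.
    static void appendAtEnd(List<String> children, String name) {
        children.add(name);
    }

    public static void main(String[] args) {
        List<String> a = new ArrayList<>();
        List<String> b = new ArrayList<>();
        // Children as they would appear in the image: already sorted.
        for (String name : new String[]{"bar", "baz", "foo", "qux"}) {
            insertWithSearch(a, name);
            appendAtEnd(b, name);
        }
        System.out.println(a.equals(b)); // prints "true"
    }
}
```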
[jira] [Created] (HDFS-13692) StorageInfoDefragmenter floods log when compacting StorageInfoTreeset
Yiqun Lin created HDFS-13692:
-------------------------------------

             Summary: StorageInfoDefragmenter floods log when compacting StorageInfoTreeset
                 Key: HDFS-13692
                 URL: https://issues.apache.org/jira/browse/HDFS-13692
             Project: Hadoop HDFS
          Issue Type: Improvement
    Affects Versions: 3.1.0
            Reporter: Yiqun Lin

StorageInfoDefragmenter floods the log when compacting the StorageInfo TreeSet. {{StorageInfoDefragmenter#scanAndCompactStorages}} prints a line for every StorageInfo under each DN. If there are 1k nodes in the cluster and each node has 10 data dirs configured, it will print 10k lines every compaction interval (10 mins). Since this makes the log very large, we could switch the log level in {{StorageInfoDefragmenter#scanAndCompactStorages}} from INFO to DEBUG.

{noformat}
2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo TreeSet fill ratio DS-329bd988-a558-43a6-b31c-9142548b0179 : 0.876264591439
2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo TreeSet fill ratio DS-b5505847-1389-4a80-b9d8-876172a83897 : 0.933351976137211
2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo TreeSet fill ratio DS-ca164b2f-0a2c-4b26-8e99-f0ece0909997 : 0.9330040998881849
2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo TreeSet fill ratio DS-89b912ba-339b-45e3-b981-541b22690ccb : 0.9314626719970249
2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo TreeSet fill ratio DS-89c0377b-a49c-4288-9304-e104d98de5bd : 0.9309580852251582
2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo TreeSet fill ratio DS-5ffad4d2-168a-446d-a92e-ef46a82f26f8 : 0.8938870614035088
2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo TreeSet fill ratio DS-eecbbd34-10f4-4647-8710-0f5963da3aaa : 0.8963103205353998
2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo TreeSet fill ratio DS-7aafa122-433f-49c8-bf00-11bcdd8ce048 : 0.8950508004926109
2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo TreeSet fill ratio DS-eb9ba675-9c23-40a1-9241-c314dc0e2867 : 0.8947356866877415
{noformat}
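The proposed fix is an ordinary log-level demotion. A minimal sketch using java.util.logging rather than Hadoop's actual logging framework (the class and identifiers below are illustrative, and FINE stands in for DEBUG):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Illustrative sketch of the HDFS-13692 fix using java.util.logging
// (Hadoop itself uses a different logging framework); FINE plays the
// role of DEBUG here.
class CompactLogging {
    static final Logger LOG = Logger.getLogger("BlockManager");

    static void logFillRatio(String storageId, double ratio) {
        // Guard so the message is only built when debug-level logging is on.
        if (LOG.isLoggable(Level.FINE)) {
            LOG.fine("StorageInfo TreeSet fill ratio " + storageId + " : " + ratio);
        }
    }

    public static void main(String[] args) {
        LOG.setLevel(Level.INFO);           // production default
        logFillRatio("DS-329bd988", 0.876); // suppressed: FINE < INFO
    }
}
```

With the level at INFO, the per-storage lines disappear entirely; an operator can still re-enable them for diagnosis by lowering the level.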