[jira] [Created] (HDDS-1873) Recon should store last successful run timestamp for each task
Vivek Ratnavel Subramanian created HDDS-1873:
------------------------------------------------

             Summary: Recon should store last successful run timestamp for each task
                 Key: HDDS-1873
                 URL: https://issues.apache.org/jira/browse/HDDS-1873
             Project: Hadoop Distributed Data Store
          Issue Type: Sub-task
          Components: Ozone Recon
    Affects Versions: 0.4.1
            Reporter: Vivek Ratnavel Subramanian

Recon should store the timestamp of the last Ozone Manager snapshot received, along with the timestamp of the last successful run of each task. This is important to give users a sense of how fresh the data they are looking at is. We need this per task because some tasks might fail to run, or might take much longer to run than others, and this needs to be reflected in the UI for a better and more consistent user experience.

--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
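For illustration, the per-task bookkeeping described above can be sketched as a small registry class. This is a minimal sketch with hypothetical names, not Recon's actual API; a real implementation would persist these timestamps in Recon's database rather than hold them in memory.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Hypothetical sketch of per-task timestamp tracking for Recon.
 * Records the last successful run time of each task, plus the time the
 * last OM snapshot was received, so a UI can show data freshness.
 */
public class TaskTimestampRegistry {
  private final Map<String, Long> lastSuccessfulRun = new ConcurrentHashMap<>();
  private volatile long lastOmSnapshotReceived = -1L;

  /** Called by a task runner after a task finishes without errors. */
  public void recordSuccess(String taskName, long epochMillis) {
    lastSuccessfulRun.put(taskName, epochMillis);
  }

  /** Called when a new OM snapshot lands on Recon. */
  public void recordOmSnapshot(long epochMillis) {
    lastOmSnapshotReceived = epochMillis;
  }

  /** Returns -1 if the task has never completed successfully. */
  public long getLastSuccessfulRun(String taskName) {
    return lastSuccessfulRun.getOrDefault(taskName, -1L);
  }

  public long getLastOmSnapshotReceived() {
    return lastOmSnapshotReceived;
  }

  public static void main(String[] args) {
    TaskTimestampRegistry registry = new TaskTimestampRegistry();
    registry.recordSuccess("ContainerKeyMapperTask", System.currentTimeMillis());
    System.out.println("last run: " + registry.getLastSuccessfulRun("ContainerKeyMapperTask"));
  }
}
```

Because each task records its own timestamp, a slow or failing task simply shows an older "last successful run" value instead of silently presenting stale data as current.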
[jira] [Created] (HDFS-14678) Allow triggerBlockReport to a specific namenode
Leon created HDFS-14678:
---------------------------

             Summary: Allow triggerBlockReport to a specific namenode
                 Key: HDFS-14678
                 URL: https://issues.apache.org/jira/browse/HDFS-14678
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: datanode
    Affects Versions: 2.8.2
            Reporter: Leon

In our largest prod cluster (running 2.8.2) we have >3k hosts. Every time we rolling-restart the NNs, we need to wait for block reports, which takes >2.5 hours per NN. One way to make this faster is to manually trigger a full block report from all datanodes ([HDFS-7278|https://issues.apache.org/jira/browse/HDFS-7278]). However, the current triggerBlockReport command triggers a block report on all NNs, which floods the active NN as well. A quick solution would be adding an option to specify the NN that the manually triggered block report goes to, something like:

*_hdfs dfsadmin [-triggerBlockReport [-incremental] <datanode_host:ipc_port>] [-namenode <namenode_host:ipc_port>]_*

So when restarting a standby NN or observer NN, we can trigger an aggressive block report to that specific NN so it exits safemode faster, without risking active NN performance.
[jira] [Reopened] (HDFS-14462) WebHDFS throws "Error writing request body to server" instead of DSQuotaExceededException
[ https://issues.apache.org/jira/browse/HDFS-14462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Simbarashe Dzinamarira reopened HDFS-14462:
-------------------------------------------

[HDFS-11195|https://issues.apache.org/jira/browse/HDFS-11195] partially addressed this issue by fixing a typo in HDFSWriter where a default response was being returned instead of the actual response being handled. However, if a second write operation is performed after a DSQuotaExceededException has been handled, that write fails with the generic HttpURLConnection error message "Error writing request body to server" instead of the more informative DSQuotaExceededException. The cause is that the input/response stream of the HTTP connection is not read to check for a remote exception before the generic message is thrown.

> WebHDFS throws "Error writing request body to server" instead of
> DSQuotaExceededException
> -----------------------------------------------------------------
>
>                 Key: HDFS-14462
>                 URL: https://issues.apache.org/jira/browse/HDFS-14462
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: webhdfs
>    Affects Versions: 3.2.0, 2.9.2, 3.0.3, 2.8.5, 2.7.7, 3.1.2
>            Reporter: Erik Krogen
>            Assignee: Simbarashe Dzinamarira
>            Priority: Major
>
> We noticed recently in our environment that, when writing data to HDFS via
> WebHDFS, a quota exception is returned to the client as:
> {code}
> java.io.IOException: Error writing request body to server
>         at sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3536) ~[?:1.8.0_172]
>         at sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3519) ~[?:1.8.0_172]
>         at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) ~[?:1.8.0_172]
>         at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) ~[?:1.8.0_172]
>         at java.io.FilterOutputStream.flush(FilterOutputStream.java:140) ~[?:1.8.0_172]
>         at java.io.DataOutputStream.flush(DataOutputStream.java:123) ~[?:1.8.0_172]
> {code}
> It is entirely opaque to the user that this exception was caused because they
> exceeded their quota. Yet in the DataNode logs:
> {code}
> 2019-04-24 02:13:09,639 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception
> org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota of /foo/path/here is exceeded: quota = B = X TB but diskspace consumed = B = X TB
>         at org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyStoragespaceQuota(DirectoryWithQuotaFeature.java:211)
>         at org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyQuota(DirectoryWithQuotaFeature.java:239)
> {code}
> This was on a 2.7.x cluster, but I verified that the same logic exists on
> trunk. I believe we need to fix some of the logic within the
> {{ExceptionHandler}} to add special handling for the quota exception.
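The cause noted above — the connection's error stream is never read before the generic message surfaces — suggests a fix along these lines. This is a hedged sketch with hypothetical helper names, not the actual WebHDFS ExceptionHandler code: if the HTTP error body carries a serialized RemoteException (WebHDFS returns these as JSON), prefer its exception class over the generic message.

```java
/**
 * Hypothetical helper illustrating the fix direction: inspect the HTTP
 * error body for a serialized RemoteException before falling back to the
 * generic "Error writing request body to server" message.
 */
public class RemoteExceptionSniffer {

  /**
   * Pulls the exception class name out of a WebHDFS RemoteException JSON
   * body via simple string scanning (a real fix would use a JSON parser).
   * Returns null if no remote exception is present.
   */
  public static String extractExceptionClass(String errorBody) {
    String key = "\"exception\":\"";
    int start = errorBody.indexOf(key);
    if (start < 0) {
      return null; // no serialized RemoteException in the body
    }
    start += key.length();
    int end = errorBody.indexOf('"', start);
    return end < 0 ? null : errorBody.substring(start, end);
  }

  /** Decides which message the client should see for a failed write. */
  public static String clientMessage(String errorBody, String genericMessage) {
    String remote = extractExceptionClass(errorBody);
    return remote != null ? remote : genericMessage;
  }

  public static void main(String[] args) {
    String body = "{\"RemoteException\":{\"exception\":\"DSQuotaExceededException\","
        + "\"message\":\"The DiskSpace quota is exceeded\"}}";
    System.out.println(clientMessage(body, "Error writing request body to server"));
  }
}
```

With this kind of check in place, the client would see DSQuotaExceededException instead of the opaque generic I/O error.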
[jira] [Created] (HDDS-1872) Fix entry clean up from openKeyTable during complete MPU
Bharat Viswanadham created HDDS-1872:
----------------------------------------

             Summary: Fix entry clean up from openKeyTable during complete MPU
                 Key: HDDS-1872
                 URL: https://issues.apache.org/jira/browse/HDDS-1872
             Project: Hadoop Distributed Data Store
          Issue Type: Bug
            Reporter: Bharat Viswanadham
            Assignee: Bharat Viswanadham

# Initiate MPU adds an entry to the openKeyTable and the multipartInfo table.
# On complete MPU, we add the entry to the keyTable and delete it from the multipartInfo table.

Deleting the entry from the openKeyTable is missing in complete MPU.
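The two steps and the missing cleanup can be modeled with plain maps standing in for OM's tables. The names here are illustrative, not OM's real API; the fix the report describes corresponds to the `openKeyTable.remove(...)` call in `completeMpu`.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Toy model of the multipart-upload (MPU) bookkeeping described in
 * HDDS-1872, with HashMaps standing in for OM's RocksDB tables.
 */
public class MpuTables {
  final Map<String, String> openKeyTable = new HashMap<>();
  final Map<String, String> multipartInfoTable = new HashMap<>();
  final Map<String, String> keyTable = new HashMap<>();

  /** Initiate MPU: track the key in both openKey and multipartInfo tables. */
  public void initiateMpu(String key, String uploadId) {
    openKeyTable.put(key, uploadId);
    multipartInfoTable.put(key, uploadId);
  }

  /** Complete MPU: commit the key and clean up BOTH tracking tables. */
  public void completeMpu(String key, String keyInfo) {
    keyTable.put(key, keyInfo);
    multipartInfoTable.remove(key);
    openKeyTable.remove(key); // the cleanup the bug report says was missing
  }

  public static void main(String[] args) {
    MpuTables om = new MpuTables();
    om.initiateMpu("/vol/bucket/key", "upload-1");
    om.completeMpu("/vol/bucket/key", "keyInfo");
    System.out.println("open entries left: " + om.openKeyTable.size());
  }
}
```

Without the `openKeyTable.remove` call, every completed MPU leaks a stale open-key entry that cleanup services could later mistake for an abandoned upload.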
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1212/

[Jul 28, 2019 3:11:42 AM] (ayushsaxena) HDFS-14660. [SBN Read] ObserverNameNode should throw StandbyException

-1 overall

The following subsystems voted -1:
    asflicense findbugs hadolint pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    XML : Parsing Error(s):
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml

    FindBugs : module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
        Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use the clone method. At TaskStatus.java:[lines 39-346]
        Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId. At WorkerId.java:[line 114]
        org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument. At WorkerId.java:[lines 114-115]

    FindBugs : module:hadoop-tools/hadoop-aws
        Inconsistent synchronization of org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore.ttlTimeProvider; locked 75% of time. Unsynchronized access at LocalMetadataStore.java:[line 623]

    Failed junit tests:
        hadoop.util.TestReadWriteDiskValidator
        hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup
        hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken
        hadoop.yarn.applications.distributedshell.TestDistributedShell

    cc: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1212/artifact/out/diff-compile-cc-root.txt [4.0K]
    javac: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1212/artifact/out/diff-compile-javac-root.txt [332K]
    checkstyle: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1212/artifact/out/diff-checkstyle-root.txt [17M]
    hadolint: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1212/artifact/out/diff-patch-hadolint.txt [4.0K]
    pathlen: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1212/artifact/out/pathlen.txt [12K]
    pylint: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1212/artifact/out/diff-patch-pylint.txt [216K]
    shellcheck: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1212/artifact/out/diff-patch-shellcheck.txt [20K]
    shelldocs: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1212/artifact/out/diff-patch-shelldocs.txt [44K]
    whitespace: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1212/artifact/out/whitespace-eol.txt [9.6M]
                https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1212/artifact/out/whitespace-tabs.txt [1.1M]
    xml: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1212/artifact/out/xml.txt [16K]
    findbugs: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1212/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html [8.0K]
              https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1212/artifact/out/branch-findbugs-hadoop-tools_hadoop-aws-warnings.html [8.0K]
              https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1212/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt [8.0K]
              https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1212/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt [12K]
              https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1212/artifact/out/branch-findbugs-hadoop-ozone_client.txt [8.0K]
[jira] [Created] (HDFS-14677) TestDataNodeHotSwapVolumes#testAddVolumesConcurrently fails intermittently in trunk
Chen Zhang created HDFS-14677:
---------------------------------

             Summary: TestDataNodeHotSwapVolumes#testAddVolumesConcurrently fails intermittently in trunk
                 Key: HDFS-14677
                 URL: https://issues.apache.org/jira/browse/HDFS-14677
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Chen Zhang

Stacktrace:
{code:java}
java.lang.NullPointerException
        at org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes.testAddVolumesConcurrently(TestDataNodeHotSwapVolumes.java:615)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
        at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.lang.Thread.run(Thread.java:748)
{code}
See:
[https://builds.apache.org/job/PreCommit-HDFS-Build/27328/testReport/org.apache.hadoop.hdfs.server.datanode/TestDataNodeHotSwapVolumes/testAddVolumesConcurrently/]
and
[https://builds.apache.org/job/PreCommit-HDFS-Build/27312/testReport/org.apache.hadoop.hdfs.server.datanode/TestDataNodeHotSwapVolumes/testAddVolumesConcurrently/]
Re: Any thoughts making Submarine a separate Apache project?
Thanks Vinod, the proposal to make it a TLP is definitely a great suggestion. I will draft a proposal and keep the thread posted.

Best,
Wangda

On Mon, Jul 29, 2019 at 3:46 PM Vinod Kumar Vavilapalli wrote:

> Looks like there's a meaningful push behind this.
>
> Given the desire is to fork off Apache Hadoop, you'd want to make sure
> this enthusiasm turns into building a real, independent but more
> importantly a sustainable community.
>
> Given that there were two official releases off the Apache Hadoop project,
> I doubt you'd need to go through the incubator process. Instead you can
> directly propose a new TLP at the ASF board. The last few times this
> happened was with ORC, and long before that with Hive, HBase etc. Can
> somebody who has cycles and has been on the ASF lists for a while look
> into the process here?
>
> For the Apache Hadoop community, this will be treated simply as a
> code-change and so needs a committer +1? You can be more gentle by
> formally doing a vote once a process doc is written down.
>
> Back to the sustainable community point, as part of drafting this
> proposal, you'd definitely want to make sure all of the Apache Hadoop
> PMC/Committers can exercise their will to join this new project as
> PMC/Committers respectively without any additional constraints.
>
> Thanks
> +Vinod
>
> > On Jul 25, 2019, at 1:31 PM, Wangda Tan wrote:
> >
> > Thanks everybody for sharing your thoughts. I saw positive feedback from
> > 20+ contributors!
> >
> > So I think we should move it forward; any suggestions about what we
> > should do?
> >
> > Best,
> > Wangda
> >
> > On Mon, Jul 22, 2019 at 5:36 PM neo wrote:
> >
> >> +1. This is neo from the TiDB & TiKV community.
> >> Thanks Xun for bringing this up.
> >>
> >> In TiKV, our CNCF project's open source distributed KV storage system,
> >> Hadoop Submarine's machine learning engine helps us optimize data
> >> storage, helping us solve some problems with data hotspots and data
> >> shuffles.
> >>
> >> We are also ready to improve the performance of TiDB, our open source
> >> distributed relational database, using the Hadoop Submarine machine
> >> learning engine.
> >>
> >> I think if Submarine can be independent, it will develop faster and
> >> better. Thanks to the Hadoop community for developing Submarine!
> >>
> >> Best Regards,
> >> neo
> >> www.pingcap.com / https://github.com/pingcap/tidb /
> >> https://github.com/tikv
> >>
> >> Xun Liu wrote on Mon, Jul 22, 2019 at 4:07 PM:
> >>
> >>> @adam.antal
> >>>
> >>> The Submarine development team has completed the following preparations:
> >>> 1. Established a temporary test repository on GitHub.
> >>> 2. Changed the package name of Hadoop Submarine from org.hadoop.submarine
> >>> to org.submarine.
> >>> 3. Combined the LinkedIn/TonY code into the Hadoop Submarine module.
> >>> 4. Ran all test cases on the travis-ci system hooked up to GitHub.
> >>> 5. Several Hadoop Submarine users completed system tests using the code
> >>> in this repository.
> >>>
> >>> Zhao Xin (赵欣) wrote on Mon, Jul 22, 2019 at 9:38 AM:
> >>>
> >>>> Hi,
> >>>>
> >>>> I am a teacher at Southeast University (https://www.seu.edu.cn/), in
> >>>> the electrical engineering department. Our teaching teams and students
> >>>> use Hadoop Submarine for big data analysis and automation control of
> >>>> electrical equipment.
> >>>>
> >>>> Many thanks to the Hadoop community for providing us with machine
> >>>> learning tools like Submarine.
> >>>>
> >>>> I wish Hadoop Submarine keeps getting better and better.
> >>>>
> >>>> ==
> >>>> Zhao Xin
> >>>> School of Electrical Engineering, Southeast University
> >>>> ==
> >>>> 2019-07-18
> >>>>
> >>>> *From:* Xun Liu
> >>>> *Date:* 2019-07-18 09:46
> >>>> *To:* xinzhao
> >>>> *Subject:* Fwd: Re: Any thoughts making Submarine a separate Apache
> >>>> project?
> >>>>
> >>>> ---------- Forwarded message ---------
> >>>> From: dashuiguailu...@gmail.com
> >>>> Date: Wed, Jul 17, 2019 at 3:17 PM
> >>>> Subject: Re: Re: Any thoughts making Submarine a separate Apache
> >>>> project?
> To: Szilard Nemeth, runlin zhang <runlin...@gmail.com>
> Cc: Xun Liu, common-dev <common-...@hadoop.apache.org>,
> yarn-dev, hdfs-dev <hdfs-dev@hadoop.apache.org>, mapreduce-dev
> <mapreduce-...@hadoop.apache.org>, submarine-dev
> <submarine-...@hadoop.apache.org>
>
> +1, good idea, we are very much looking forward to it.
>
> --
> dashuiguailu...@gmail.com
>
> *From:* Szilard Nemeth
> *Date:* 2019-07-17 14:55
> *To:* runlin zhang
> *Cc:* Xun Liu; Hadoop Common; yarn-dev; Hdfs-dev; mapreduce-dev;
> submarine-dev
> *Subject:* Re: Any thoughts making Submarine a separate
Re: Any thoughts making Submarine a separate Apache project?
Looks like there's a meaningful push behind this.

Given the desire is to fork off Apache Hadoop, you'd want to make sure this enthusiasm turns into building a real, independent but more importantly a sustainable community.

Given that there were two official releases off the Apache Hadoop project, I doubt you'd need to go through the incubator process. Instead you can directly propose a new TLP at the ASF board. The last few times this happened was with ORC, and long before that with Hive, HBase etc. Can somebody who has cycles and has been on the ASF lists for a while look into the process here?

For the Apache Hadoop community, this will be treated simply as a code-change and so needs a committer +1? You can be more gentle by formally doing a vote once a process doc is written down.

Back to the sustainable community point, as part of drafting this proposal, you'd definitely want to make sure all of the Apache Hadoop PMC/Committers can exercise their will to join this new project as PMC/Committers respectively without any additional constraints.

Thanks
+Vinod

> On Jul 25, 2019, at 1:31 PM, Wangda Tan wrote:
>
> Thanks everybody for sharing your thoughts. I saw positive feedback from
> 20+ contributors!
>
> So I think we should move it forward; any suggestions about what we
> should do?
>
> Best,
> Wangda
>
> On Mon, Jul 22, 2019 at 5:36 PM neo wrote:
>
>> +1. This is neo from the TiDB & TiKV community.
>> Thanks Xun for bringing this up.
>>
>> In TiKV, our CNCF project's open source distributed KV storage system,
>> Hadoop Submarine's machine learning engine helps us optimize data
>> storage, helping us solve some problems with data hotspots and data
>> shuffles.
>>
>> We are also ready to improve the performance of TiDB, our open source
>> distributed relational database, using the Hadoop Submarine machine
>> learning engine.
>>
>> I think if Submarine can be independent, it will develop faster and
>> better. Thanks to the Hadoop community for developing Submarine!
>>
>> Best Regards,
>> neo
>> www.pingcap.com / https://github.com/pingcap/tidb /
>> https://github.com/tikv
>>
>> Xun Liu wrote on Mon, Jul 22, 2019 at 4:07 PM:
>>
>>> @adam.antal
>>>
>>> The Submarine development team has completed the following preparations:
>>> 1. Established a temporary test repository on GitHub.
>>> 2. Changed the package name of Hadoop Submarine from org.hadoop.submarine
>>> to org.submarine.
>>> 3. Combined the LinkedIn/TonY code into the Hadoop Submarine module.
>>> 4. Ran all test cases on the travis-ci system hooked up to GitHub.
>>> 5. Several Hadoop Submarine users completed system tests using the code
>>> in this repository.
>>>
>>> Zhao Xin (赵欣) wrote on Mon, Jul 22, 2019 at 9:38 AM:
>>>
>>>> Hi,
>>>>
>>>> I am a teacher at Southeast University (https://www.seu.edu.cn/), in
>>>> the electrical engineering department. Our teaching teams and students
>>>> use Hadoop Submarine for big data analysis and automation control of
>>>> electrical equipment.
>>>>
>>>> Many thanks to the Hadoop community for providing us with machine
>>>> learning tools like Submarine.
>>>>
>>>> I wish Hadoop Submarine keeps getting better and better.
>>>>
>>>> ==
>>>> Zhao Xin
>>>> School of Electrical Engineering, Southeast University
>>>> ==
>>>> 2019-07-18
>>>>
>>>> *From:* Xun Liu
>>>> *Date:* 2019-07-18 09:46
>>>> *To:* xinzhao
>>>> *Subject:* Fwd: Re: Any thoughts making Submarine a separate Apache
>>>> project?
>>>>
>>>> ---------- Forwarded message ---------
>>>> From: dashuiguailu...@gmail.com
>>>> Date: Wed, Jul 17, 2019 at 3:17 PM
>>>> Subject: Re: Re: Any thoughts making Submarine a separate Apache
>>>> project?
>>>> To: Szilard Nemeth, runlin zhang <runlin...@gmail.com>
>>>> Cc: Xun Liu, common-dev <common-...@hadoop.apache.org>, yarn-dev,
>>>> hdfs-dev <hdfs-dev@hadoop.apache.org>, mapreduce-dev
>>>> <mapreduce-...@hadoop.apache.org>, submarine-dev
>>>> <submarine-...@hadoop.apache.org>
>>>>
>>>> +1, good idea, we are very much looking forward to it.
>>>>
>>>> --
>>>> dashuiguailu...@gmail.com
>>>>
>>>> *From:* Szilard Nemeth
>>>> *Date:* 2019-07-17 14:55
>>>> *To:* runlin zhang
>>>> *Cc:* Xun Liu; Hadoop Common; yarn-dev; Hdfs-dev; mapreduce-dev;
>>>> submarine-dev
>>>> *Subject:* Re: Any thoughts making Submarine a separate Apache project?
>>>>
>>>> +1, this is a very good idea. As the Hadoop repository has already
>>>> grown huge and contains many projects, I think in general it's a good
>>>> idea to separate projects in the early phase.
>>>>
>>>> On Wed, Jul 17, 2019, 08:50 runlin zhang wrote:
>>>>
>>>>> +1, that will be great!
>>>>>
>>>>>> On Jul 10, 2019, at 3:34 PM, Xun Liu wrote:
>>>>>>
>>>>>> Hi all,
>>>>>>
>>>>>> This is Xun Liu contributing to the Submarine
Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/

No changes

-1 overall

The following subsystems voted -1:
    asflicense findbugs hadolint pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    XML : Parsing Error(s):
        hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
        hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

    FindBugs : module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
        Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean). At ColumnRWHelper.java:[line 335]

    Failed junit tests:
        hadoop.hdfs.server.namenode.TestDecommissioningStatus
        hadoop.hdfs.server.datanode.TestDirectoryScanner
        hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
        hadoop.registry.secure.TestSecureLogins
        hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2
        hadoop.mapreduce.v2.app.TestRecovery

    cc: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt [4.0K]
    javac: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt [328K]
    cc: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/diff-compile-cc-root-jdk1.8.0_212.txt [4.0K]
    javac: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/diff-compile-javac-root-jdk1.8.0_212.txt [308K]
    checkstyle: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/diff-checkstyle-root.txt [16M]
    hadolint: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/diff-patch-hadolint.txt [4.0K]
    pathlen: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/pathlen.txt [12K]
    pylint: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/diff-patch-pylint.txt [24K]
    shellcheck: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/diff-patch-shellcheck.txt [72K]
    shelldocs: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/diff-patch-shelldocs.txt [8.0K]
    whitespace: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/whitespace-eol.txt [12M]
                https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/whitespace-tabs.txt [1.2M]
    xml: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/xml.txt [12K]
    findbugs: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html [8.0K]
    javadoc: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt [16K]
             https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_212.txt [1.1M]
    unit: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [240K]
          https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry.txt [12K]
          https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt [20K]
          https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/397/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt [72K]
[jira] [Created] (HDDS-1871) Remove anti-affinity rules from k8s minikube example
Elek, Marton created HDDS-1871:
----------------------------------

             Summary: Remove anti-affinity rules from k8s minikube example
                 Key: HDDS-1871
                 URL: https://issues.apache.org/jira/browse/HDDS-1871
             Project: Hadoop Distributed Data Store
          Issue Type: Bug
          Components: kubernetes
            Reporter: Elek, Marton
            Assignee: Elek, Marton

HDDS-1646 introduced real persistence for the k8s example deployment files, which means we need anti-affinity scheduling rules: even if we use a statefulset instead of a daemonset, we would like to start one datanode per real node. With minikube we have only one node, therefore the scheduling rule should be removed to enable at least 3 datanodes on the same physical node.

How to test:

{code}
mvn clean install -DskipTests -f pom.ozone.xml
cd hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/kubernetes/examples/minikube
minikube start
kubectl apply -f .
kubectl get pod
{code}

You should see 3 datanode instances.
[jira] [Created] (HDDS-1870) ConcurrentModification at PrometheusMetricsSink
Doroszlai, Attila created HDDS-1870:
---------------------------------------

             Summary: ConcurrentModification at PrometheusMetricsSink
                 Key: HDDS-1870
                 URL: https://issues.apache.org/jira/browse/HDDS-1870
             Project: Hadoop Distributed Data Store
          Issue Type: Bug
            Reporter: Doroszlai, Attila
            Assignee: Doroszlai, Attila

Encountered on the {{ozoneperf}} compose env when running low on CPU:

{code}
om_1 | java.util.ConcurrentModificationException
om_1 |   at java.base/java.util.HashMap$HashIterator.nextNode(HashMap.java:1493)
om_1 |   at java.base/java.util.HashMap$ValueIterator.next(HashMap.java:1521)
om_1 |   at org.apache.hadoop.hdds.server.PrometheusMetricsSink.writeMetrics(PrometheusMetricsSink.java:123)
om_1 |   at org.apache.hadoop.hdds.server.PrometheusServlet.doGet(PrometheusServlet.java:43)
{code}
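The failure mode in the stack trace above is easy to reproduce in isolation: HashMap's fail-fast iterator throws ConcurrentModificationException when the map is structurally modified during iteration, which matches the servlet reading the metrics map while the sink updates it. A minimal sketch (the demo class is hypothetical, not the HDDS code; switching to a weakly consistent ConcurrentHashMap is one possible fix, not necessarily the one the issue took):

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Reproduces the CME from the stack trace in miniature: modifying a map
 * while iterating it. HashMap's fail-fast iterator throws; a
 * ConcurrentHashMap's weakly consistent iterator does not.
 */
public class CmeDemo {

  /** Returns true if iterating-while-modifying the given map throws CME. */
  public static boolean throwsCme(Map<String, String> map) {
    map.put("a", "1");
    map.put("b", "2");
    try {
      for (String key : map.keySet()) {
        map.put("c", "3"); // structural modification during iteration
      }
      return false;
    } catch (ConcurrentModificationException e) {
      return true;
    }
  }

  public static void main(String[] args) {
    System.out.println("HashMap throws CME: "
        + throwsCme(new HashMap<String, String>()));
    System.out.println("ConcurrentHashMap throws CME: "
        + throwsCme(new ConcurrentHashMap<String, String>()));
  }
}
```

In the real bug the modification comes from another thread updating metrics while the Prometheus servlet iterates, so the exception appears only under load, which is consistent with it being observed when the env runs low on CPU.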