[jira] [Created] (HDDS-2520) Sonar: Avoid temporary variable scmSecurityClient
Dinesh Chitlangia created HDDS-2520:
---------------------------------------

             Summary: Sonar: Avoid temporary variable scmSecurityClient
                 Key: HDDS-2520
                 URL: https://issues.apache.org/jira/browse/HDDS-2520
             Project: Hadoop Distributed Data Store
          Issue Type: Bug
            Reporter: Dinesh Chitlangia
            Assignee: Dinesh Chitlangia


https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_APKcVY8lQ4ZsWL=AW5md_APKcVY8lQ4ZsWL

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
[jira] [Created] (HDDS-2519) Sonar: Double Brace Initialization should not be used
Dinesh Chitlangia created HDDS-2519:
---------------------------------------

             Summary: Sonar: Double Brace Initialization should not be used
                 Key: HDDS-2519
                 URL: https://issues.apache.org/jira/browse/HDDS-2519
             Project: Hadoop Distributed Data Store
          Issue Type: Bug
            Reporter: Dinesh Chitlangia
            Assignee: Dinesh Chitlangia


https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_APKcVY8lQ4ZsWN=AW5md_APKcVY8lQ4ZsWN
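For readers unfamiliar with the rule: "double brace initialization" creates an anonymous subclass with an instance initializer. The sketch below (illustrative names only, not the Ozone code the ticket touches) shows why Sonar flags it — the resulting object is not a plain HashMap, and in instance contexts the anonymous class can also retain a reference to its enclosing object.

```java
import java.util.HashMap;
import java.util.Map;

public class DoubleBraceDemo {

    // Flagged pattern: anonymous HashMap subclass with an instance initializer.
    static Map<String, String> doubleBrace() {
        return new HashMap<String, String>() {{
            put("key", "value");
        }};
    }

    // Preferred: plain construction (or Map.of(...) on Java 9+).
    static Map<String, String> plain() {
        Map<String, String> m = new HashMap<>();
        m.put("key", "value");
        return m;
    }

    public static void main(String[] args) {
        // Same contents, but doubleBrace() returns an anonymous subclass,
        // not HashMap itself.
        System.out.println(doubleBrace().equals(plain()));             // true
        System.out.println(doubleBrace().getClass() == HashMap.class); // false
        System.out.println(plain().getClass() == HashMap.class);       // true
    }
}
```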
[jira] [Resolved] (HDDS-2461) Logging by ChunkUtils is misleading
[ https://issues.apache.org/jira/browse/HDDS-2461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bharat Viswanadham resolved HDDS-2461.
--------------------------------------
    Fix Version/s: 0.5.0
       Resolution: Fixed

> Logging by ChunkUtils is misleading
> -----------------------------------
>
>                 Key: HDDS-2461
>                 URL: https://issues.apache.org/jira/browse/HDDS-2461
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: Ozone Datanode
>            Reporter: Marton Elek
>            Assignee: Marton Elek
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 0.5.0
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> During a k8s-based test I found a lot of log messages like:
> {code:java}
> 2019-11-12 14:27:13 WARN ChunkManagerImpl:209 - Duplicate write chunk
> request. Chunk overwrite without explicit request.
> ChunkInfo{chunkName='A9UrLxiEUN_testdata_chunk_4465025, offset=0, len=1024}
> {code}
> I was very surprised, as at ChunkManagerImpl:209 there was no similar line.
> It turned out that the message is logged by ChunkUtils, but using the logger
> of ChunkManagerImpl.
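The bug pattern above is worth spelling out: a utility class borrowing another class's logger makes every log line report the wrong source. A minimal sketch of the problem and the fix, using java.util.logging rather than the SLF4J loggers the real Ozone code uses, and with illustrative class names:

```java
import java.util.logging.Logger;

// Stand-in for ChunkManagerImpl: owns its own logger.
class ChunkManagerImplDemo {
    static final Logger LOG = Logger.getLogger(ChunkManagerImplDemo.class.getName());
}

// Stand-in for ChunkUtils.
class ChunkUtilsDemo {
    // Misleading: records logged through this are attributed to the other class.
    static final Logger BORROWED_LOG = ChunkManagerImplDemo.LOG;

    // Fix: each class declares its own logger so output names the real source.
    static final Logger LOG = Logger.getLogger(ChunkUtilsDemo.class.getName());
}

public class LoggerOwnershipDemo {
    public static void main(String[] args) {
        System.out.println(ChunkUtilsDemo.BORROWED_LOG.getName()); // ChunkManagerImplDemo
        System.out.println(ChunkUtilsDemo.LOG.getName());          // ChunkUtilsDemo
    }
}
```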
[jira] [Resolved] (HDDS-2513) Remove this unused "COMPONENT" private field.
[ https://issues.apache.org/jira/browse/HDDS-2513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bharat Viswanadham resolved HDDS-2513.
--------------------------------------
    Fix Version/s: 0.5.0
       Resolution: Fixed

> Remove this unused "COMPONENT" private field.
> ---------------------------------------------
>
>                 Key: HDDS-2513
>                 URL: https://issues.apache.org/jira/browse/HDDS-2513
>             Project: Hadoop Distributed Data Store
>          Issue Type: Improvement
>            Reporter: Abhishek Purohit
>            Assignee: Abhishek Purohit
>            Priority: Minor
>              Labels: pull-request-available, sonar
>             Fix For: 0.5.0
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> Remove this unused "COMPONENT" private field in class XceiverClientGrpc.
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsWG=false]
[jira] [Resolved] (HDDS-2501) Sonar: Fix issues found in the ObjectEndpoint class
[ https://issues.apache.org/jira/browse/HDDS-2501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dinesh Chitlangia resolved HDDS-2501.
-------------------------------------
    Fix Version/s: 0.5.0
       Resolution: Fixed

Thanks [~adoroszlai] for filing the issue and [~swagle] for the fix.

> Sonar: Fix issues found in the ObjectEndpoint class
> ---------------------------------------------------
>
>                 Key: HDDS-2501
>                 URL: https://issues.apache.org/jira/browse/HDDS-2501
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: S3
>            Reporter: Attila Doroszlai
>            Assignee: Siddharth Wagle
>            Priority: Major
>              Labels: pull-request-available, sonar
>             Fix For: 0.5.0
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> Ensure {{ObjectOutputStream}} is closed in {{ObjectEndpoint}}:
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-j-KcVY8lQ4Zr96=AW5md-j-KcVY8lQ4Zr96
>
> And fix other issues in the same file:
> https://sonarcloud.io/project/issues?fileUuids=AW5md-HdKcVY8lQ4ZrVc=hadoop-ozone=false
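The usual way to guarantee a stream like {{ObjectOutputStream}} is closed on every path is try-with-resources. A hedged sketch of the pattern (the method and usage here are illustrative, not the actual ObjectEndpoint code):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

public class CloseStreamDemo {

    // Serialize a value; the ObjectOutputStream is closed automatically,
    // even if writeObject throws.
    static byte[] serialize(Object value) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(value);
        } // close() runs here on success and on exception
        return bytes.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] data = serialize("hello");
        System.out.println(data.length > 0); // true
    }
}
```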
[jira] [Resolved] (HDDS-2502) Close ScmClient in RatisInsight
[ https://issues.apache.org/jira/browse/HDDS-2502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bharat Viswanadham resolved HDDS-2502.
--------------------------------------
    Fix Version/s: 0.5.0
       Resolution: Fixed

> Close ScmClient in RatisInsight
> -------------------------------
>
>                 Key: HDDS-2502
>                 URL: https://issues.apache.org/jira/browse/HDDS-2502
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>            Reporter: Attila Doroszlai
>            Assignee: Siddharth Wagle
>            Priority: Major
>              Labels: pull-request-available, sonar
>             Fix For: 0.5.0
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> {{ScmClient}} in {{RatisInsight}} should be closed after use.
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-mYKcVY8lQ4Zr_s=AW5md-mYKcVY8lQ4Zr_s
>
> Also two other minor issues reported in the same file:
> https://sonarcloud.io/project/issues?fileUuids=AW5md-HeKcVY8lQ4ZrXL=hadoop-ozone=false
Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/507/

[Nov 15, 2019 4:50:51 AM] (weichiu) HDFS-14884. Add sanity check that zone key equals feinfo key while
[Nov 15, 2019 4:53:39 AM] (aajisaka) HADOOP-15097. AbstractContractDeleteTest::testDeleteNonEmptyDirRecursive
[Nov 15, 2019 3:40:20 PM] (ekrogen) HDFS-14979 Allow Balancer to submit getBlocks calls to Observer Nodes
[Nov 15, 2019 5:21:30 PM] (ericp) YARN-7411. Inter-Queue preemption's computeFixpointAllocation need to
[Nov 15, 2019 7:00:06 PM] (ericp) YARN-8179: Preemption does not happen due to natural_termination_factor
[Nov 15, 2019 10:01:28 PM] (ericp) Revert "YARN-7411. Inter-Queue preemption's computeFixpointAllocation
[jira] [Resolved] (HDDS-2507) Remove the hard-coded exclusion of TestMiniChaosOzoneCluster
[ https://issues.apache.org/jira/browse/HDDS-2507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bharat Viswanadham resolved HDDS-2507.
--------------------------------------
    Fix Version/s: 0.5.0
       Resolution: Fixed

> Remove the hard-coded exclusion of TestMiniChaosOzoneCluster
> ------------------------------------------------------------
>
>                 Key: HDDS-2507
>                 URL: https://issues.apache.org/jira/browse/HDDS-2507
>             Project: Hadoop Distributed Data Store
>          Issue Type: Improvement
>            Reporter: Marton Elek
>            Assignee: Marton Elek
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 0.5.0
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> We excluded the execution of TestMiniChaosOzoneCluster from
> hadoop-ozone/dev-support/checks/integration.sh because it was not stable
> enough.
>
> Unfortunately, this exclusion makes it impossible to use custom exclusion
> lists (-Dsurefire.excludesFile=), as excludesFile can't be used if
> -Dtest=!... is already used.
>
> I propose removing this exclusion to make it possible to use different
> exclusions for different runs (PR check, daily, etc.)
[jira] [Resolved] (HDDS-2511) Fix Sonar issues in OzoneManagerServiceProviderImpl
[ https://issues.apache.org/jira/browse/HDDS-2511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bharat Viswanadham resolved HDDS-2511.
--------------------------------------
       Resolution: Fixed

> Fix Sonar issues in OzoneManagerServiceProviderImpl
> ---------------------------------------------------
>
>                 Key: HDDS-2511
>                 URL: https://issues.apache.org/jira/browse/HDDS-2511
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: Ozone Recon
>            Reporter: Aravindan Vijayan
>            Assignee: Aravindan Vijayan
>            Priority: Major
>              Labels: pull-request-available, sonar
>             Fix For: 0.5.0
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> Link to the list of issues:
> https://sonarcloud.io/project/issues?fileUuids=AW5md-HdKcVY8lQ4ZrUn=hadoop-ozone=false
[jira] [Resolved] (HDDS-2515) No need to call "toString()" method as formatting and string conversion is done by the Formatter
[ https://issues.apache.org/jira/browse/HDDS-2515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bharat Viswanadham resolved HDDS-2515.
--------------------------------------
    Fix Version/s: 0.5.0
       Resolution: Fixed

> No need to call "toString()" method as formatting and string conversion is
> done by the Formatter
> --------------------------------------------------------------------------
>
>                 Key: HDDS-2515
>                 URL: https://issues.apache.org/jira/browse/HDDS-2515
>             Project: Hadoop Distributed Data Store
>          Issue Type: Improvement
>            Reporter: Abhishek Purohit
>            Assignee: Abhishek Purohit
>            Priority: Major
>              Labels: pull-request-available, sonar
>             Fix For: 0.5.0
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV4=false]
>
> Class: XceiverClientGrpc
> {code:java}
> if (LOG.isDebugEnabled()) {
>   LOG.debug("Nodes in pipeline : {}", pipeline.getNodes().toString());
> }
> {code}
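The rule behind this ticket: a formatter (SLF4J's {} placeholder, or String.format's %s) already performs the string conversion, so an explicit toString() is redundant and, in the logging case, forces the conversion even when the message would be filtered out. A small self-contained sketch using String.format (illustrative data, not the pipeline object from the real code):

```java
import java.util.Arrays;
import java.util.List;

public class RedundantToStringDemo {

    // Flagged shape: explicit toString() before handing the value to a formatter.
    static String withToString(List<String> nodes) {
        return String.format("Nodes in pipeline : %s", nodes.toString());
    }

    // Preferred shape: pass the object; %s converts it for us.
    static String withoutToString(List<String> nodes) {
        return String.format("Nodes in pipeline : %s", nodes);
    }

    public static void main(String[] args) {
        List<String> nodes = Arrays.asList("dn1", "dn2");
        // Identical output either way — the toString() call buys nothing.
        System.out.println(withToString(nodes).equals(withoutToString(nodes))); // true
    }
}
```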
[jira] [Resolved] (HDDS-2500) Avoid fall-through in CloseContainerCommandHandler
[ https://issues.apache.org/jira/browse/HDDS-2500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dinesh Chitlangia resolved HDDS-2500.
-------------------------------------
    Fix Version/s: 0.5.0
       Resolution: Fixed

Thanks [~ccondit] for the fix, thanks [~adoroszlai] for reporting the issue and reviewing the patch.

> Avoid fall-through in CloseContainerCommandHandler
> --------------------------------------------------
>
>                 Key: HDDS-2500
>                 URL: https://issues.apache.org/jira/browse/HDDS-2500
>             Project: Hadoop Distributed Data Store
>          Issue Type: Improvement
>          Components: Ozone Datanode
>            Reporter: Attila Doroszlai
>            Assignee: Craig Condit
>            Priority: Minor
>              Labels: pull-request-available, sonar
>             Fix For: 0.5.0
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> Two instances of fall-through:
> * https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-7UKcVY8lQ4ZsRk=AW5md-7UKcVY8lQ4ZsRk
> * https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-7UKcVY8lQ4ZsRj=AW5md-7UKcVY8lQ4ZsRj
>
> Both seem OK, but unnecessary (the next branch is {{break}}-only). Could be
> made more explicit by moving/adding {{break}}.
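The situation the ticket describes — a case falling through into a break-only case — is behaviorally harmless but reads as a possible bug. A hedged sketch with made-up state names (not the actual CloseContainerCommandHandler states) showing that adding the explicit break changes nothing except clarity:

```java
public class FallThroughDemo {
    enum State { OPEN, CLOSING, CLOSED }

    // Flagged shape: OPEN does its work, then falls through into the
    // break-only CLOSING case.
    static String implicitFallThrough(State s) {
        StringBuilder sb = new StringBuilder();
        switch (s) {
            case OPEN:
                sb.append("open");
                // falls through — but the next case only breaks
            case CLOSING:
                break;
            case CLOSED:
                sb.append("closed");
                break;
        }
        return sb.toString();
    }

    // Preferred shape: every non-empty case ends with an explicit break.
    static String explicitBreak(State s) {
        StringBuilder sb = new StringBuilder();
        switch (s) {
            case OPEN:
                sb.append("open");
                break; // no reliance on the next case being empty
            case CLOSING:
                break;
            case CLOSED:
                sb.append("closed");
                break;
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        for (State s : State.values()) {
            // Identical behavior for every state.
            System.out.println(implicitFallThrough(s).equals(explicitBreak(s))); // true
        }
    }
}
```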
[jira] [Resolved] (HDDS-2375) Refactor BlockOutputStream to allow flexible buffering
[ https://issues.apache.org/jira/browse/HDDS-2375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaoyu Yao resolved HDDS-2375.
------------------------------
       Resolution: Fixed

Thanks [~szetszwo] for the contribution and all for the reviews. I've merged the PR to master.

> Refactor BlockOutputStream to allow flexible buffering
> ------------------------------------------------------
>
>                 Key: HDDS-2375
>                 URL: https://issues.apache.org/jira/browse/HDDS-2375
>             Project: Hadoop Distributed Data Store
>          Issue Type: Improvement
>          Components: Ozone Client
>            Reporter: Tsz-wo Sze
>            Assignee: Tsz-wo Sze
>            Priority: Major
>              Labels: pull-request-available
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> In HDDS-2331, we found that the Ozone client allocates a ByteBuffer with
> chunk size (e.g. 16MB) to store data, regardless of the actual data size.
> The ByteBuffer will create a byte[] with chunk size. When the ByteBuffer is
> wrapped into a ByteString, the byte[] remains in the ByteString.
>
> As a result, when the actual data size is small (e.g. 1MB), a lot of memory
> space (15MB) is wasted.
>
> In this JIRA, we refactor BlockOutputStream so that the buffering becomes
> more flexible. In a later JIRA (HDDS-2386), we implement a chunk buffer using
> a list of smaller buffers which are allocated only if needed.
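The arithmetic in the description can be checked directly: a chunk-sized buffer pins its full backing array no matter how little was written, and any ByteString wrapping that array keeps it alive. A back-of-the-envelope sketch using the sizes quoted in the issue (the class and method are illustrative, not the BlockOutputStream code):

```java
import java.nio.ByteBuffer;

public class BufferWasteDemo {
    static final int CHUNK_SIZE = 16 * 1024 * 1024; // 16 MB buffer allocation
    static final int DATA_SIZE  = 1 * 1024 * 1024;  // 1 MB actually written

    // How many megabytes of the backing array go unused.
    static int wastedMegabytes() {
        ByteBuffer buffer = ByteBuffer.allocate(CHUNK_SIZE);
        buffer.put(new byte[DATA_SIZE]);
        // The backing byte[] is chunk-sized regardless of how much was written.
        int wastedBytes = buffer.array().length - buffer.position();
        return wastedBytes / (1024 * 1024);
    }

    public static void main(String[] args) {
        System.out.println(wastedMegabytes()); // 15
    }
}
```

The refactor's remedy (per the issue) is a chunk buffer backed by a list of smaller buffers allocated on demand, so the footprint tracks the data actually written.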
[jira] [Resolved] (HDDS-2418) Add the list trash command server side handling.
[ https://issues.apache.org/jira/browse/HDDS-2418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dinesh Chitlangia resolved HDDS-2418.
-------------------------------------
    Fix Version/s: 0.5.0
       Resolution: Fixed

[~MatthewSharp] Thank you for the contribution. [~aengineer] Thanks for reviews and committing this to master.

> Add the list trash command server side handling.
> ------------------------------------------------
>
>                 Key: HDDS-2418
>                 URL: https://issues.apache.org/jira/browse/HDDS-2418
>             Project: Hadoop Distributed Data Store
>          Issue Type: Sub-task
>          Components: Ozone Manager
>            Reporter: Anu Engineer
>            Assignee: Matthew Sharp
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 0.5.0
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> Add the standard code for any command handling in the server side.
[jira] [Created] (HDFS-14991) Backport HDFS-14346 Better time precision in getTimeDuration to branch-2
Chen Liang created HDFS-14991:
---------------------------------

             Summary: Backport HDFS-14346 Better time precision in getTimeDuration to branch-2
                 Key: HDFS-14991
                 URL: https://issues.apache.org/jira/browse/HDFS-14991
             Project: Hadoop HDFS
          Issue Type: Improvement
          Components: hdfs
            Reporter: Chen Liang
            Assignee: Chen Liang


This is to backport HDFS-14346 to branch-2, as Standby reads in branch-2 require being able to properly specify ms time granularity for edit log tailing.
[jira] [Created] (HDDS-2518) Ensure RATIS leader info is properly updated with pipeline report.
Xiaoyu Yao created HDDS-2518:
--------------------------------

             Summary: Ensure RATIS leader info is properly updated with pipeline report.
                 Key: HDDS-2518
                 URL: https://issues.apache.org/jira/browse/HDDS-2518
             Project: Hadoop Distributed Data Store
          Issue Type: Bug
            Reporter: Xiaoyu Yao
            Assignee: Xiaoyu Yao


HDDS-2034 added async pipeline creation and report handling to SCM. The leader information is not properly populated, as manifested in the test failures from TestSCMPipelineManager#testPipelineReport. This ticket is opened to fix it.

cc: [~sammichen]
Re: [DISCUSS] Making 2.10 the last minor 2.x release
+1, thanks Jonathan for bringing this up!

On Fri, Nov 15, 2019 at 11:41 AM epa...@apache.org wrote:

> Thanks Jonathan for opening the discussion.
>
> I am not in favor of this proposal. 2.10 was very recently released, and
> moving to 2.10 will take some time for the community. It seems premature to
> make a decision at this point that there will never be a need for a 2.11
> release.
>
> -Eric
>
> On Thursday, November 14, 2019, 8:51:59 PM CST, Jonathan Hung <
> jyhung2...@gmail.com> wrote:
>
> Hi folks,
>
> Given the release of 2.10.0, and the fact that it's intended to be a bridge
> release to Hadoop 3.x [1], I'm proposing we make 2.10.x the last minor
> release line in branch-2. Currently, the main issue is that there's many
> fixes going into branch-2 (the theoretical 2.11.0) that's not going into
> branch-2.10 (which will become 2.10.1), so the fixes in branch-2 will
> likely never see the light of day unless they are backported to
> branch-2.10.
>
> To do this, I propose we:
>
> - Delete branch-2.10
> - Rename branch-2 to branch-2.10
> - Set version in the new branch-2.10 to 2.10.1-SNAPSHOT
>
> This way we get all the current branch-2 fixes into the 2.10.x release
> line. Then the commit chain will look like: trunk -> branch-3.2 ->
> branch-3.1 -> branch-2.10 -> branch-2.9 -> branch-2.8
>
> Thoughts?
>
> Jonathan Hung
>
> [1] https://www.mail-archive.com/yarn-dev@hadoop.apache.org/msg29479.html
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
Re: [DISCUSS] Making 2.10 the last minor 2.x release
Thanks Jonathan for opening the discussion.

I am not in favor of this proposal. 2.10 was very recently released, and moving to 2.10 will take some time for the community. It seems premature to make a decision at this point that there will never be a need for a 2.11 release.

-Eric

On Thursday, November 14, 2019, 8:51:59 PM CST, Jonathan Hung wrote:

Hi folks,

Given the release of 2.10.0, and the fact that it's intended to be a bridge release to Hadoop 3.x [1], I'm proposing we make 2.10.x the last minor release line in branch-2. Currently, the main issue is that there's many fixes going into branch-2 (the theoretical 2.11.0) that's not going into branch-2.10 (which will become 2.10.1), so the fixes in branch-2 will likely never see the light of day unless they are backported to branch-2.10.

To do this, I propose we:

- Delete branch-2.10
- Rename branch-2 to branch-2.10
- Set version in the new branch-2.10 to 2.10.1-SNAPSHOT

This way we get all the current branch-2 fixes into the 2.10.x release line. Then the commit chain will look like: trunk -> branch-3.2 -> branch-3.1 -> branch-2.10 -> branch-2.9 -> branch-2.8

Thoughts?

Jonathan Hung

[1] https://www.mail-archive.com/yarn-dev@hadoop.apache.org/msg29479.html
[jira] [Created] (HDDS-2517) Immediately return this expression instead of assigning it to the temporary variable
Abhishek Purohit created HDDS-2517:
--------------------------------------

             Summary: Immediately return this expression instead of assigning it to the temporary variable
                 Key: HDDS-2517
                 URL: https://issues.apache.org/jira/browse/HDDS-2517
             Project: Hadoop Distributed Data Store
          Issue Type: Improvement
            Reporter: Abhishek Purohit
            Assignee: Abhishek Purohit


Related to: [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV1=false]

Immediately return this expression instead of assigning it to the temporary variable.
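This ticket (and HDDS-2520 above) apply the same Sonar rule: a local variable assigned and then immediately returned adds nothing. A minimal sketch with hypothetical method names, not the actual Ozone code:

```java
public class ImmediateReturnDemo {

    // Flagged shape: the temporary exists only to be returned.
    static int sumBefore(int a, int b) {
        int result = a + b;
        return result;
    }

    // Preferred shape: return the expression directly.
    static int sumAfter(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        // Behaviorally identical; the second version is just less noisy.
        System.out.println(sumBefore(2, 3) == sumAfter(2, 3)); // true
    }
}
```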
[jira] [Created] (HDDS-2516) Code cleanup in EventQueue
Attila Doroszlai created HDDS-2516:
--------------------------------------

             Summary: Code cleanup in EventQueue
                 Key: HDDS-2516
                 URL: https://issues.apache.org/jira/browse/HDDS-2516
             Project: Hadoop Distributed Data Store
          Issue Type: Improvement
            Reporter: Attila Doroszlai
            Assignee: Attila Doroszlai


https://sonarcloud.io/project/issues?fileUuids=AW5md-HgKcVY8lQ4ZrfB=hadoop-ozone=false
[jira] [Created] (HDDS-2515) No need to call "toString()" method as formatting and string conversion is done by the Formatter
Abhishek Purohit created HDDS-2515:
--------------------------------------

             Summary: No need to call "toString()" method as formatting and string conversion is done by the Formatter
                 Key: HDDS-2515
                 URL: https://issues.apache.org/jira/browse/HDDS-2515
             Project: Hadoop Distributed Data Store
          Issue Type: Improvement
            Reporter: Abhishek Purohit
            Assignee: Abhishek Purohit


[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV4=false]

Class: XceiverClientGrpc
{code:java}
if (LOG.isDebugEnabled()) {
  LOG.debug("Nodes in pipeline : {}", pipeline.getNodes().toString());
}
{code}
[jira] [Created] (HDDS-2514) Remove this unused method parameter "encodedToken"
Abhishek Purohit created HDDS-2514:
--------------------------------------

             Summary: Remove this unused method parameter "encodedToken"
                 Key: HDDS-2514
                 URL: https://issues.apache.org/jira/browse/HDDS-2514
             Project: Hadoop Distributed Data Store
          Issue Type: Improvement
            Reporter: Abhishek Purohit
            Assignee: Abhishek Purohit


Remove this unused method parameter "encodedToken".

Method: connectToDatanode
Class: XceiverClientGrpc
[jira] [Created] (HDDS-2513) Remove this unused "COMPONENT" private field.
Abhishek Purohit created HDDS-2513:
--------------------------------------

             Summary: Remove this unused "COMPONENT" private field.
                 Key: HDDS-2513
                 URL: https://issues.apache.org/jira/browse/HDDS-2513
             Project: Hadoop Distributed Data Store
          Issue Type: Improvement
            Reporter: Abhishek Purohit
            Assignee: Abhishek Purohit


Remove this unused "COMPONENT" private field in class XceiverClientGrpc.

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsWG=false]
[jira] [Created] (HDDS-2512) Sonar TraceAllMethod NPE Could be Thrown
Matthew Sharp created HDDS-2512:
-----------------------------------

             Summary: Sonar TraceAllMethod NPE Could be Thrown
                 Key: HDDS-2512
                 URL: https://issues.apache.org/jira/browse/HDDS-2512
             Project: Hadoop Distributed Data Store
          Issue Type: Bug
            Reporter: Matthew Sharp
            Assignee: Matthew Sharp


Sonar cleanup:
[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-2WKcVY8lQ4ZsNQ=AW5md-2WKcVY8lQ4ZsNQ]
[jira] [Resolved] (HDDS-2483) Avoid fall-through in HddsUtils#getBlockID
[ https://issues.apache.org/jira/browse/HDDS-2483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dinesh Chitlangia resolved HDDS-2483.
-------------------------------------
    Fix Version/s: 0.5.0
       Resolution: Fixed

[~adoroszlai] Thanks for filing the issue and the reviews. [~ccondit] Thanks for the contribution.

> Avoid fall-through in HddsUtils#getBlockID
> ------------------------------------------
>
>                 Key: HDDS-2483
>                 URL: https://issues.apache.org/jira/browse/HDDS-2483
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>            Reporter: Attila Doroszlai
>            Assignee: Craig Condit
>            Priority: Minor
>              Labels: pull-request-available, sonar
>             Fix For: 0.5.0
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> {{switch}} in {{HddsUtils#getBlockID}} has potential fall-through. It should
> be handled explicitly (eg. throw exception or {{return null}}).
> * https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-4qKcVY8lQ4ZsPe=AW5md-4qKcVY8lQ4ZsPe
> * https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-4qKcVY8lQ4ZsPf=AW5md-4qKcVY8lQ4ZsPf
> * https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-4qKcVY8lQ4ZsPg=AW5md-4qKcVY8lQ4ZsPg
> * https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-4qKcVY8lQ4ZsPh=AW5md-4qKcVY8lQ4ZsPh
> * https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-4qKcVY8lQ4ZsPi=AW5md-4qKcVY8lQ4ZsPi
> * https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-4qKcVY8lQ4ZsPj=AW5md-4qKcVY8lQ4ZsPj
[jira] [Resolved] (HDDS-1084) Ozone Recon Service v1
[ https://issues.apache.org/jira/browse/HDDS-1084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aravindan Vijayan resolved HDDS-1084.
-------------------------------------
       Resolution: Fixed

Moved out all open JIRAs. They will be tracked under HDDS-1996. Resolving this JIRA.

> Ozone Recon Service v1
> ----------------------
>
>                 Key: HDDS-1084
>                 URL: https://issues.apache.org/jira/browse/HDDS-1084
>             Project: Hadoop Distributed Data Store
>          Issue Type: New Feature
>          Components: Ozone Recon
>    Affects Versions: 0.4.0
>            Reporter: Siddharth Wagle
>            Assignee: Siddharth Wagle
>            Priority: Major
>             Fix For: 0.5.0
>
>         Attachments: Ozone_Recon_Design_V1_Draft.pdf
>
>
> Recon Server at a high level will maintain a global view of Ozone that is not
> available from SCM or OM: things like how many volumes exist; how many
> buckets exist per volume; which volume has the most buckets; which buckets
> have not been accessed for a year; which are the corrupt blocks; which blocks
> on datanodes are not used; and answers to similar queries.
[jira] [Resolved] (HDFS-14980) diskbalancer query command always tries to contact to port 9867
[ https://issues.apache.org/jira/browse/HDFS-14980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Siddharth Wagle resolved HDFS-14980.
------------------------------------
       Resolution: Not A Problem

This is an issue specific to an HDFS deployment using Cloudera Manager. The client-side configuration in /etc/hadoop/conf/ (hdfs-site.xml) excludes all daemon configs, so DiskBalancerCli cannot resolve {{dfs.datanode.ipc.address}}. If you add this property to the configuration file, the query command works as expected.

> diskbalancer query command always tries to contact port 9867
> ------------------------------------------------------------
>
>                 Key: HDFS-14980
>                 URL: https://issues.apache.org/jira/browse/HDFS-14980
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: diskbalancer
>            Reporter: Nilotpal Nandi
>            Assignee: Siddharth Wagle
>            Priority: Major
>
> The diskbalancer query command always tries to connect to port 9867 even
> when the datanode IPC port is different. In this setup, the datanode IPC
> port is set to 20001.
>
> The diskbalancer report command works fine and connects to IPC port 20001:
>
> {noformat}
> hdfs diskbalancer -report -node 172.27.131.193
> 19/11/12 08:58:55 INFO command.Command: Processing report command
> 19/11/12 08:58:57 INFO balancer.KeyManager: Block token params received from
> NN: update interval=10hrs, 0sec, token lifetime=10hrs, 0sec
> 19/11/12 08:58:57 INFO block.BlockTokenSecretManager: Setting block keys
> 19/11/12 08:58:57 INFO balancer.KeyManager: Update block keys every 2hrs,
> 30mins, 0sec
> 19/11/12 08:58:58 INFO command.Command: Reporting volume information for
> DataNode(s). These DataNode(s) are parsed from '172.27.131.193'.
> Processing report command
> Reporting volume information for DataNode(s). These DataNode(s) are parsed
> from '172.27.131.193'.
> [172.27.131.193:20001] - : 3 volumes with node data density 0.05.
> [DISK: volume-/dataroot/ycloud/dfs/NEW_DISK1/] - 0.15 used:
> 39343871181/259692498944, 0.85 free: 220348627763/259692498944, isFailed:
> False, isReadOnly: False, isSkip: False, isTransient: False.
> [DISK: volume-/dataroot/ycloud/dfs/NEW_DISK2/] - 0.15 used:
> 39371179986/259692498944, 0.85 free: 220321318958/259692498944, isFailed:
> False, isReadOnly: False, isSkip: False, isTransient: False.
> [DISK: volume-/dataroot/ycloud/dfs/dn/] - 0.19 used:
> 49934903670/259692498944, 0.81 free: 209757595274/259692498944, isFailed:
> False, isReadOnly: False, isSkip: False, isTransient: False.
> {noformat}
>
> But the diskbalancer query command fails and tries to connect to port 9867
> (the default port):
>
> {noformat}
> hdfs diskbalancer -query 172.27.131.193
> 19/11/12 06:37:15 INFO command.Command: Executing "query plan" command.
> 19/11/12 06:37:16 INFO ipc.Client: Retrying connect to server:
> /172.27.131.193:9867. Already tried 0 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
> 19/11/12 06:37:17 INFO ipc.Client: Retrying connect to server:
> /172.27.131.193:9867. Already tried 1 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000
> MILLISECONDS)
> ..
> ..
> ..
> 19/11/12 06:37:25 ERROR tools.DiskBalancerCLI: Exception thrown while running
> DiskBalancerCLI.
> {noformat}
>
> Expectation: the diskbalancer query command should work without explicitly
> mentioning the datanode IPC port address.
Re: [DISCUSS] Making 2.10 the last minor 2.x release
I'm in support of this. The current scheme is confusing, and as you mentioned, is making the backport strategy less clear. It reminds me of the branch-2.8 vs. branch-2 (destined for 2.9) days when various fixes would make it into one or the other.

One other action item would be to do a quick verification that the new branch-2.10 (current branch-2) has any fixes which were put into the current branch-2.10. It should be a superset of the changes that went into the two branches.

Thanks for the proposal, Jonathan!

Erik

On Thu, Nov 14, 2019 at 11:26 PM Jonathan Hung wrote:

> Some other additional items we would need:
>
> - Mark all fix-versions in YARN/HDFS/MAPREDUCE/HADOOP from 2.11.0 to
>   2.10.1
> - Remove 2.11.0 as a version in these projects
>
> Jonathan Hung
>
> On Thu, Nov 14, 2019 at 6:51 PM Jonathan Hung wrote:
>
> > Hi folks,
> >
> > Given the release of 2.10.0, and the fact that it's intended to be a
> > bridge release to Hadoop 3.x [1], I'm proposing we make 2.10.x the last
> > minor release line in branch-2. Currently, the main issue is that there's
> > many fixes going into branch-2 (the theoretical 2.11.0) that's not going
> > into branch-2.10 (which will become 2.10.1), so the fixes in branch-2 will
> > likely never see the light of day unless they are backported to
> > branch-2.10.
> >
> > To do this, I propose we:
> >
> > - Delete branch-2.10
> > - Rename branch-2 to branch-2.10
> > - Set version in the new branch-2.10 to 2.10.1-SNAPSHOT
> >
> > This way we get all the current branch-2 fixes into the 2.10.x release
> > line. Then the commit chain will look like: trunk -> branch-3.2 ->
> > branch-3.1 -> branch-2.10 -> branch-2.9 -> branch-2.8
> >
> > Thoughts?
> >
> > Jonathan Hung
> >
> > [1] https://www.mail-archive.com/yarn-dev@hadoop.apache.org/msg29479.html
[jira] [Resolved] (HDDS-2472) Use try-with-resources while creating FlushOptions in RDBStore.
[ https://issues.apache.org/jira/browse/HDDS-2472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bharat Viswanadham resolved HDDS-2472.
--------------------------------------
       Resolution: Fixed

> Use try-with-resources while creating FlushOptions in RDBStore.
> ---------------------------------------------------------------
>
>                 Key: HDDS-2472
>                 URL: https://issues.apache.org/jira/browse/HDDS-2472
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: Ozone Manager
>    Affects Versions: 0.5.0
>            Reporter: Aravindan Vijayan
>            Assignee: Aravindan Vijayan
>            Priority: Major
>              Labels: pull-request-available, sonar
>             Fix For: 0.5.0
>
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> Link to the Sonar issue flag:
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-zwKcVY8lQ4ZsJ4=AW5md-zwKcVY8lQ4ZsJ4.
[jira] [Created] (HDDS-2511) Sonar : Fix Sonar issues in OzoneManagerServiceProviderImpl
Aravindan Vijayan created HDDS-2511: --- Summary: Sonar : Fix Sonar issues in OzoneManagerServiceProviderImpl Key: HDDS-2511 URL: https://issues.apache.org/jira/browse/HDDS-2511 Project: Hadoop Distributed Data Store Issue Type: Bug Components: Ozone Recon Reporter: Aravindan Vijayan Assignee: Aravindan Vijayan Fix For: 0.5.0 Link to the list of issues : https://sonarcloud.io/project/issues?fileUuids=AW5md-HdKcVY8lQ4ZrUn=hadoop-ozone=false
[jira] [Created] (HDDS-2510) Sonar : Use isEmpty() to check whether the collection is empty or not in Ozone Manager module
Aravindan Vijayan created HDDS-2510: --- Summary: Sonar : Use isEmpty() to check whether the collection is empty or not in Ozone Manager module Key: HDDS-2510 URL: https://issues.apache.org/jira/browse/HDDS-2510 Project: Hadoop Distributed Data Store Issue Type: Bug Components: Ozone Manager Reporter: Aravindan Vijayan Assignee: Aravindan Vijayan
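[Editor's note] The Sonar rule behind HDDS-2510 prefers isEmpty() over comparing size() to zero: it states the intent directly, and for some collection types size() is not O(1). A hedged sketch of the before/after shape (the method and variable names are illustrative, not taken from the Ozone Manager code):

```java
import java.util.Collection;
import java.util.List;

// Illustrative only: shows the shape of the fix, not the actual OM code.
class IsEmptySketch {
    // Before (flagged by Sonar): if (keys.size() == 0) { ... }
    // After:                     if (keys.isEmpty())   { ... }
    static boolean hasPendingKeys(Collection<String> keys) {
        return !keys.isEmpty();   // clearer intent than keys.size() != 0
    }

    public static void main(String[] args) {
        if (hasPendingKeys(List.of())) throw new AssertionError("empty");
        if (!hasPendingKeys(List.of("key1"))) throw new AssertionError("non-empty");
        System.out.println("ok");
    }
}
```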
[jira] [Created] (HDDS-2509) Code cleanup in replication package
Attila Doroszlai created HDDS-2509: -- Summary: Code cleanup in replication package Key: HDDS-2509 URL: https://issues.apache.org/jira/browse/HDDS-2509 Project: Hadoop Distributed Data Store Issue Type: Improvement Reporter: Attila Doroszlai Assignee: Attila Doroszlai Fix a couple of [issues reported|https://sonarcloud.io/project/issues?directories=hadoop-hdds%2Fcontainer-service%2Fsrc%2Fmain%2Fjava%2Forg%2Fapache%2Fhadoop%2Fozone%2Fcontainer%2Freplication%2Chadoop-hdds%2Fcontainer-service%2Fsrc%2Ftest%2Fjava%2Forg%2Fapache%2Fhadoop%2Fozone%2Fcontainer%2Freplication=hadoop-ozone=false] in the {{org.apache.hadoop.ozone.container.replication}} package.
[jira] [Created] (HDDS-2508) Fix TestDeadNodeHandler
Attila Doroszlai created HDDS-2508: -- Summary: Fix TestDeadNodeHandler Key: HDDS-2508 URL: https://issues.apache.org/jira/browse/HDDS-2508 Project: Hadoop Distributed Data Store Issue Type: Bug Components: test Reporter: Attila Doroszlai {code} [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 63.647 s <<< FAILURE! - in org.apache.hadoop.hdds.scm.node.TestDeadNodeHandler [ERROR] testOnMessage(org.apache.hadoop.hdds.scm.node.TestDeadNodeHandler) Time elapsed: 63.562 s <<< ERROR! java.io.IOException: Could not allocate container. Cannot get any matching pipeline for Type:RATIS, Factor:THREE, State:PipelineState.OPEN at org.apache.hadoop.hdds.scm.container.ContainerStateManager.allocateContainer(ContainerStateManager.java:261) at org.apache.hadoop.hdds.scm.container.SCMContainerManager.allocateContainer(SCMContainerManager.java:255) at org.apache.hadoop.hdds.scm.TestUtils.allocateContainer(TestUtils.java:488) at org.apache.hadoop.hdds.scm.node.TestDeadNodeHandler.testOnMessage(TestDeadNodeHandler.java:154) {code}
[jira] [Resolved] (HDDS-2495) Sonar - "notify" may not wake up the appropriate thread
[ https://issues.apache.org/jira/browse/HDDS-2495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marton Elek resolved HDDS-2495. --- Fix Version/s: 0.5.0 Resolution: Fixed > Sonar - "notify" may not wake up the appropriate thread > --- > > Key: HDDS-2495 > URL: https://issues.apache.org/jira/browse/HDDS-2495 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Matthew Sharp >Assignee: Matthew Sharp >Priority: Minor > Labels: pull-request-available, sonar > Fix For: 0.5.0 > > Time Spent: 20m > Remaining Estimate: 0h > > Addresses same issue within ReplicationManager: > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-sVKcVY8lQ4ZsDi=AW5md-sVKcVY8lQ4ZsDi] > [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-sVKcVY8lQ4ZsDh=AW5md-sVKcVY8lQ4ZsDh]
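[Editor's note] The rule behind HDDS-2495: notify() wakes a single, arbitrarily chosen waiter, which can leave the "right" thread asleep when several threads wait on the same monitor for different conditions; notifyAll() wakes every waiter, and each one rechecks its own condition in a loop. A self-contained sketch of the pattern (the class and its fields are illustrative, not ReplicationManager's actual code):

```java
// Illustrative sketch of preferring notifyAll() over notify().
class NotifyAllSketch {
    private final Object lock = new Object();
    private boolean running = false;

    void start() {
        synchronized (lock) {
            running = true;
            // notify() could wake a thread waiting for some *other* condition
            // on this lock; notifyAll() wakes every waiter, and each rechecks
            // its own condition, so no signal can be "lost".
            lock.notifyAll();
        }
    }

    void awaitStart() throws InterruptedException {
        synchronized (lock) {
            while (!running) {   // wait in a loop: handles spurious wakeups too
                lock.wait();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        NotifyAllSketch s = new NotifyAllSketch();
        Thread waiter = new Thread(() -> {
            try { s.awaitStart(); } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        waiter.start();
        s.start();            // correct whether or not the waiter blocked yet
        waiter.join(2000);
        if (waiter.isAlive()) throw new AssertionError("waiter never woke up");
        System.out.println("waiter woken");
    }
}
```

Note the while loop around wait(): it is what makes waking "too many" threads with notifyAll() harmless, since each extra waiter just goes back to sleep.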
[jira] [Created] (HDDS-2507) Remove the hard-coded exclusion of TestMiniChaosOzoneCluster
Marton Elek created HDDS-2507: - Summary: Remove the hard-coded exclusion of TestMiniChaosOzoneCluster Key: HDDS-2507 URL: https://issues.apache.org/jira/browse/HDDS-2507 Project: Hadoop Distributed Data Store Issue Type: Improvement Reporter: Marton Elek We excluded TestMiniChaosOzoneCluster from hadoop-ozone/dev-support/checks/integration.sh because it was not stable enough. Unfortunately, this exclusion makes it impossible to use custom exclusion lists (-Dsurefire.excludesFile=), as excludesFile can't be used if -Dtest=!... is already in use. I propose removing this exclusion so that different exclusion lists can be used for different runs (PR check, daily, etc.).
Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/506/ [Nov 14, 2019 10:44:30 PM] (jhung) YARN-7739. DefaultAMSProcessor should properly check customized resource [Nov 14, 2019 10:56:23 PM] (jhung) YARN-7541. Node updates don't update the maximum cluster capability for [Nov 14, 2019 11:47:24 PM] (jhung) YARN-8202. DefaultAMSProcessor should properly check units of requested -1 overall The following subsystems voted -1: asflicense findbugs hadolint pathlen unit xml The following subsystems voted -1 but were configured to be filtered/ignored: cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace The following subsystems are considered long running: (runtime bigger than 1h 0m 0s) unit Specific tests: XML : Parsing Error(s): hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml FindBugs : module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean) At ColumnRWHelper.java:then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean) At ColumnRWHelper.java:[line 335] Failed junit tests : hadoop.util.TestReadWriteDiskValidator hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints hadoop.hdfs.TestDecommission hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints hadoop.yarn.api.TestPBImplRecords hadoop.registry.secure.TestSecureLogins hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 cc: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/506/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt [4.0K] javac: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/506/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt [328K] cc: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/506/artifact/out/diff-compile-cc-root-jdk1.8.0_222.txt [4.0K] javac: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/506/artifact/out/diff-compile-javac-root-jdk1.8.0_222.txt [308K] checkstyle: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/506/artifact/out/diff-checkstyle-root.txt [16M] hadolint: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/506/artifact/out/diff-patch-hadolint.txt [4.0K] pathlen: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/506/artifact/out/pathlen.txt [12K] pylint: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/506/artifact/out/diff-patch-pylint.txt [24K] shellcheck: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/506/artifact/out/diff-patch-shellcheck.txt [72K] shelldocs: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/506/artifact/out/diff-patch-shelldocs.txt [8.0K] whitespace: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/506/artifact/out/whitespace-eol.txt [12M] https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/506/artifact/out/whitespace-tabs.txt [1.3M] xml: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/506/artifact/out/xml.txt [12K] findbugs: 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/506/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html [8.0K] javadoc: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/506/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt [16K] https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/506/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_222.txt [1.1M] unit: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/506/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [160K]
[jira] [Created] (HDDS-2506) Remove keyAllocationInfo and replication info from the auditLog
Marton Elek created HDDS-2506: - Summary: Remove keyAllocationInfo and replication info from the auditLog Key: HDDS-2506 URL: https://issues.apache.org/jira/browse/HDDS-2506 Project: Hadoop Distributed Data Store Issue Type: Bug Components: Ozone Manager Reporter: Marton Elek During the review of HDDS-2470 I found that the full keyLocationInfo is added to the audit log for s3 operations: {code:java} 2019-11-15 12:34:18,538 | INFO | OMAudit | user=hadoop | ip=192.168.16.2 | op=ALLOCATE_KEY {volume=s3b607288814a5da737a92fb067500396e, bucket=bucket1, key=key1, dataSize=3813, replicationType=RATIS, replicationFactor=ONE, keyLocationInfo=[]} | ret=SUCCESS | 2019-11-15 12:34:20,576 | INFO | OMAudit | user=hadoop | ip=192.168.16.2 | op=ALLOCATE_KEY {volume=s3b607288814a5da737a92fb067500396e, bucket=bucket1, key=key1, dataSize=3813, replicationType=RATIS, replicationFactor=ONE, keyLocationInfo=[]} | ret=SUCCESS | 2019-11-15 12:34:20,626 | INFO | OMAudit | user=hadoop | ip=192.168.16.2 | op=ALLOCATE_BLOCK {volume=s3b607288814a5da737a92fb067500396e, bucket=bucket1, key=key1, dataSize=3813, replicationType=RATIS, replicationFactor=THREE, keyLocationInfo=[], clientID=103141950132977668} | ret=SUCCESS | 2019-11-15 12:34:51,705 | INFO | OMAudit | user=hadoop | ip=192.168.16.2 | op=COMMIT_MULTIPART_UPLOAD_PARTKEY {volume=s3b607288814a5da737a92fb067500396e, bucket=bucket1, key=key1, dataSize=3813, replicationType=RATIS, replicationFactor=ONE, keyLocationInfo=[blockID { containerBlockID { containerID: 1localID: 103141950135009280 } blockCommitSequenceId: 2}offset: 0length: 3813createVersion: 0pipeline { members {uuid: "eefe54e8-5723-458e-9204-207c6b97c9b3"ipAddress: "192.168.16.3" hostName: "ozones3_datanode_1.ozones3_default"ports { name: "RATIS" value: 9858}ports { name: "STANDALONE" value: 9859} networkName: "eefe54e8-5723-458e-9204-207c6b97c9b3"networkLocation: "/default-rack" } members {uuid: "ebf127d7-90a9-4f06-8fe5-a0c9c9adb743" ipAddress: "192.168.16.7"hostName: 
"ozones3_datanode_2.ozones3_default" ports { name: "RATIS" value: 9858}ports { name: "STANDALONE" value: 9859}networkName: "ebf127d7-90a9-4f06-8fe5-a0c9c9adb743"networkLocation: "/default-rack" } members {uuid: "9979c326-4982-4a4c-b34e-e70c1d825f5f"ipAddress: "192.168.16.6"hostName: "ozones3_datanode_3.ozones3_default"ports { name: "RATIS" value: 9858}ports { name: "STANDALONE" value: 9859}networkName: "9979c326-4982-4a4c-b34e-e70c1d825f5f" networkLocation: "/default-rack" } state: PIPELINE_OPEN type: RATIS factor: THREE id {id: "69ba305b-fe89-4f5c-97cd-b894d5ee8f2b" } leaderID: ""}], partNumber=1, partName=/s3b607288814a5da737a92fb067500396e/bucket1/key1103141950132977668} | ret=SUCCESS | 2019-11-15 12:42:10,883 | INFO | OMAudit | user=hadoop | ip=192.168.16.2 | op=COMPLETE_MULTIPART_UPLOAD {volume=s3b607288814a5da737a92fb067500396e, bucket=bucket1, key=key1, dataSize=0, replicationType=RATIS, replicationFactor=ONE, keyLocationInfo=[], multipartList=[partNumber: 1partName: "/s3b607288814a5da737a92fb067500396e/bucket1/key1103141950132977668"]} | ret=SUCCESS | {code} Including the full keyLocation info in the audit log may cause some problems: * It makes the the audit log slower * It makes harder to parse the audit log I think it's better to separate the debug log (which can be provided easily with ozone insight tool) from the audit log. Therefore I suggest to remove the keyLocationInfo, replicationType, replicationFactor from the aduit log. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
[jira] [Created] (HDDS-2505) Fix logic related to SCM address calculation in HddsUtils
Attila Doroszlai created HDDS-2505: -- Summary: Fix logic related to SCM address calculation in HddsUtils Key: HDDS-2505 URL: https://issues.apache.org/jira/browse/HDDS-2505 Project: Hadoop Distributed Data Store Issue Type: Bug Components: SCM Reporter: Attila Doroszlai Assignee: Attila Doroszlai {{HddsUtils}} has 3 methods to calculate the SCM address for various client types. All have an unreachable {{if}} branch, because: # {{iterator().next()}} throws an exception for an empty list # {{getSCMAddresses}} never returns an empty list anyway; it throws an exception instead * https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-4qKcVY8lQ4ZsPX=AW5md-4qKcVY8lQ4ZsPX * https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-4qKcVY8lQ4ZsPY=AW5md-4qKcVY8lQ4ZsPY * https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-4qKcVY8lQ4ZsPW=AW5md-4qKcVY8lQ4ZsPW Ideally, code duplication among these methods should be reduced, too. * https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-4qKcVY8lQ4ZsPU=AW5md-4qKcVY8lQ4ZsPU * https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-4qKcVY8lQ4ZsPT=AW5md-4qKcVY8lQ4ZsPT * https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-4qKcVY8lQ4ZsPV=AW5md-4qKcVY8lQ4ZsPV Complete list of issues in the same file: https://sonarcloud.io/project/issues?fileUuids=AW5md-HhKcVY8lQ4Zrjn=hadoop-ozone=false
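[Editor's note] The unreachable branch described above can be reproduced in isolation: calling iterator().next() before the emptiness check means the empty-input case throws NoSuchElementException first, so a later isEmpty() guard is dead code. A hypothetical, simplified shape (not the actual HddsUtils source):

```java
import java.net.InetSocketAddress;
import java.util.Collection;
import java.util.List;
import java.util.NoSuchElementException;

// Simplified shape of the flagged code: the isEmpty() check is dead, because
// next() on an empty collection throws before the guard is ever reached.
class UnreachableBranchSketch {
    static InetSocketAddress firstScmAddress(Collection<InetSocketAddress> addrs) {
        InetSocketAddress first = addrs.iterator().next(); // throws if empty
        if (addrs.isEmpty()) {                             // unreachable branch
            throw new IllegalArgumentException("no SCM addresses configured");
        }
        return first;
    }

    public static void main(String[] args) {
        try {
            firstScmAddress(List.of());
            throw new AssertionError("expected NoSuchElementException");
        } catch (NoSuchElementException expected) {
            // the IllegalArgumentException branch never ran: it is dead code
        }
        System.out.println("empty input throws NoSuchElementException");
    }
}
```

The fix is to move the emptiness check (or the exception it is meant to raise) ahead of the next() call.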
[jira] [Created] (HDDS-2504) Handle InterruptedException properly
Attila Doroszlai created HDDS-2504: -- Summary: Handle InterruptedException properly Key: HDDS-2504 URL: https://issues.apache.org/jira/browse/HDDS-2504 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Attila Doroszlai bq. Either re-interrupt or rethrow the {{InterruptedException}} in several files (39 issues) https://sonarcloud.io/project/issues?id=hadoop-ozone=false=squid%3AS2142=OPEN=BUG Feel free to create sub-tasks if needed.
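[Editor's note] Sonar rule squid:S2142 targets catch blocks that swallow InterruptedException, which silently clears the thread's interrupted status. The conventional fix is to either rethrow the exception or re-interrupt the current thread. A minimal sketch of the re-interrupt variant (the helper name is illustrative):

```java
// Illustrative fix for squid:S2142 (InterruptedException handling).
class InterruptSketch {
    // Bad:  catch (InterruptedException e) { /* ignored */ }
    //       -- the interrupted status is lost; shutdown loops never notice.
    // Good: restore the flag so higher-level code can still observe it.
    static void sleepQuietly(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();   // re-assert interrupted status
        }
    }

    public static void main(String[] args) throws Exception {
        Thread t = new Thread(() -> sleepQuietly(10_000));
        t.start();
        t.interrupt();          // sleep throws immediately if already interrupted
        t.join(2000);           // the thread returns promptly instead of sleeping 10s
        if (t.isAlive()) throw new AssertionError("thread did not stop");
        System.out.println("interrupt handled and flag restored");
    }
}
```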
[jira] [Resolved] (HDDS-2482) Enable github actions for pull requests
[ https://issues.apache.org/jira/browse/HDDS-2482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marton Elek resolved HDDS-2482. --- Fix Version/s: 0.5.0 Resolution: Fixed > Enable github actions for pull requests > --- > > Key: HDDS-2482 > URL: https://issues.apache.org/jira/browse/HDDS-2482 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Marton Elek >Assignee: Marton Elek >Priority: Major > Labels: pull-request-available > Fix For: 0.5.0 > > Time Spent: 10m > Remaining Estimate: 0h > > HDDS-2400 introduced a github actions workflow for each "push" event. It > turned out that pushing to a forked repository doesn't trigger this event > even if it's part of a PR. > > We need to enable the execution for pull_request events: > References: > > [https://github.community/t5/GitHub-Actions/Run-a-GitHub-action-on-pull-request-for-PR-opened-from-a-forked/m-p/31147#M690] > [https://help.github.com/en/actions/automating-your-workflow-with-github-actions/events-that-trigger-workflows#pull-request-events-for-forked-repositories] > {noformat} > Note: By default, a workflow only runs when a pull_request's activity type is > opened, synchronize, or reopened. To trigger workflows for more activity > types, use the types keyword.{noformat}
[jira] [Created] (HDFS-14990) HDFS: No symbolic icon to represent decommissioning state of datanode in Name node WEB UI
Souryakanta Dwivedy created HDFS-14990: -- Summary: HDFS: No symbolic icon to represent decommissioning state of datanode in Name node WEB UI Key: HDFS-14990 URL: https://issues.apache.org/jira/browse/HDFS-14990 Project: Hadoop HDFS Issue Type: Improvement Components: hdfs, ui Affects Versions: 3.2.1 Reporter: Souryakanta Dwivedy Attachments: image-2019-11-15-17-31-23-213.png There is no symbolic icon to represent the decommissioning state of a datanode in the NameNode web UI. Expected output: Like the other datanode states (In Service, Down, Decommissioned, etc.), an icon should also be added for the decommissioning state. !image-2019-11-15-17-31-23-213.png!
[jira] [Created] (HDDS-2503) Close FlushOptions in RDBStore
Attila Doroszlai created HDDS-2503: -- Summary: Close FlushOptions in RDBStore Key: HDDS-2503 URL: https://issues.apache.org/jira/browse/HDDS-2503 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Attila Doroszlai {{FlushOptions}} should be closed after use. * https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-zwKcVY8lQ4ZsJ4=AW5md-zwKcVY8lQ4ZsJ4 * https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-zwKcVY8lQ4ZsJ5=AW5md-zwKcVY8lQ4ZsJ5 Sonar also reported 15 further issues in the same file: https://sonarcloud.io/project/issues?fileUuids=AW5md-HgKcVY8lQ4Zrga=hadoop-ozone=false
[jira] [Created] (HDDS-2502) Close ScmClient in RatisInsight
Attila Doroszlai created HDDS-2502: -- Summary: Close ScmClient in RatisInsight Key: HDDS-2502 URL: https://issues.apache.org/jira/browse/HDDS-2502 Project: Hadoop Distributed Data Store Issue Type: Bug Reporter: Attila Doroszlai {{ScmClient}} in {{RatisInsight}} should be closed after use. https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-mYKcVY8lQ4Zr_s=AW5md-mYKcVY8lQ4Zr_s Also two other minor issues reported in the same file: https://sonarcloud.io/project/issues?fileUuids=AW5md-HeKcVY8lQ4ZrXL=hadoop-ozone=false
[jira] [Resolved] (HDDS-2492) Fix test clean up issue in TestSCMPipelineManager
[ https://issues.apache.org/jira/browse/HDDS-2492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Li Cheng resolved HDDS-2492. Fix Version/s: 0.4.1 Resolution: Fixed > Fix test clean up issue in TestSCMPipelineManager > - > > Key: HDDS-2492 > URL: https://issues.apache.org/jira/browse/HDDS-2492 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Sammi Chen >Assignee: Li Cheng >Priority: Major > Labels: pull-request-available > Fix For: 0.4.1 > > Time Spent: 10m > Remaining Estimate: 0h > > This was opened based on [~sammichen]'s investigation on HDDS-2034. > > {quote}Failure is caused by newly introduced function > TestSCMPipelineManager#testPipelineOpenOnlyWhenLeaderReported which doesn't > close pipelineManager at the end. It's better to fix it in a new JIRA. > {quote}
[jira] [Created] (HDDS-2501) Ensure stream is closed in ObjectEndpoint
Attila Doroszlai created HDDS-2501: -- Summary: Ensure stream is closed in ObjectEndpoint Key: HDDS-2501 URL: https://issues.apache.org/jira/browse/HDDS-2501 Project: Hadoop Distributed Data Store Issue Type: Bug Components: S3 Reporter: Attila Doroszlai Ensure {{ObjectOutputStream}} is closed in {{ObjectEndpoint}}: https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-j-KcVY8lQ4Zr96=AW5md-j-KcVY8lQ4Zr96 And fix other issues in the same file: https://sonarcloud.io/project/issues?fileUuids=AW5md-HdKcVY8lQ4ZrVc=hadoop-ozone=false
[jira] [Created] (HDDS-2500) Avoid fall-through in CloseContainerCommandHandler
Attila Doroszlai created HDDS-2500: -- Summary: Avoid fall-through in CloseContainerCommandHandler Key: HDDS-2500 URL: https://issues.apache.org/jira/browse/HDDS-2500 Project: Hadoop Distributed Data Store Issue Type: Improvement Components: Ozone Datanode Reporter: Attila Doroszlai Two instances of fall-through: * https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-7UKcVY8lQ4ZsRk=AW5md-7UKcVY8lQ4ZsRk * https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-7UKcVY8lQ4ZsRj=AW5md-7UKcVY8lQ4ZsRj Both seem OK, but unnecessary (the next branch is {{break}}-only). Could be made more explicit by moving/adding {{break}}.
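[Editor's note] The fall-through shape described above, and the explicit alternative, can be sketched as follows (the enum and return values are hypothetical; the real code is CloseContainerCommandHandler's switch):

```java
// Illustrative sketch of removing an accidental-looking switch fall-through.
class FallThroughSketch {
    enum State { OPEN, CLOSING, CLOSED }

    // The flagged code did real work in one case and then fell through into a
    // case containing only "break" -- behaviorally fine, but Sonar cannot
    // distinguish intentional fall-through from a forgotten break.
    static String handle(State s) {
        switch (s) {
            case OPEN:
                // Before: no break here, so execution fell into CLOSING's
                // bare break. After: every case ends with an explicit exit.
                return "submit close command";
            case CLOSING:
                return "wait";
            default:
                return "ignore";
        }
    }

    public static void main(String[] args) {
        if (!handle(State.OPEN).equals("submit close command"))
            throw new AssertionError("OPEN");
        if (!handle(State.CLOSING).equals("wait"))
            throw new AssertionError("CLOSING");
        System.out.println("ok");
    }
}
```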