[jira] [Updated] (HDFS-15926) hadoop-annotations is duplicated in hadoop-hdfs
[ https://issues.apache.org/jira/browse/HDFS-15926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Viraj Jasani updated HDFS-15926: Priority: Minor (was: Major) > hadoop-annotations is duplicated in hadoop-hdfs > --- > > Key: HDFS-15926 > URL: https://issues.apache.org/jira/browse/HDFS-15926 > Project: Hadoop HDFS > Issue Type: Task >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Minor > Labels: pull-request-available > Time Spent: 1h 10m > Remaining Estimate: 0h > > hadoop-annotations is duplicated dependency in hadoop-hdfs as it is also > declared in parent hadoop-project-dist pom. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-15926) hadoop-annotations is duplicated in hadoop-hdfs
[ https://issues.apache.org/jira/browse/HDFS-15926?focusedWorklogId=573316=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-573316 ] ASF GitHub Bot logged work on HDFS-15926: - Author: ASF GitHub Bot Created on: 29/Mar/21 05:31 Start Date: 29/Mar/21 05:31 Worklog Time Spent: 10m Work Description: virajjasani commented on pull request #2823: URL: https://github.com/apache/hadoop/pull/2823#issuecomment-809078398 Sure, Thanks @ayushtkn . I just scanned `hadoop-common` as well as other poms with parent defined as `hadoop-project-dist` and no one other than `hadoop-hdfs` has this dependency defined as `provided` scope, hence it is duplicate only for `hadoop-hdfs` (with same `provided` scope as defined in parent pom). -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 573316) Time Spent: 1h 10m (was: 1h) > hadoop-annotations is duplicated in hadoop-hdfs > --- > > Key: HDFS-15926 > URL: https://issues.apache.org/jira/browse/HDFS-15926 > Project: Hadoop HDFS > Issue Type: Task >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 1h 10m > Remaining Estimate: 0h > > hadoop-annotations is duplicated dependency in hadoop-hdfs as it is also > declared in parent hadoop-project-dist pom. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
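To make the redundancy concrete, the situation described in HDFS-15926 looks roughly like this in Maven terms. This is a sketch using the coordinates named in the issue; the surrounding pom content and ordering are illustrative, not copied from the real hadoop-project-dist or hadoop-hdfs poms:

```xml
<!-- Parent: hadoop-project-dist/pom.xml (sketch) -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-annotations</artifactId>
  <scope>provided</scope>
</dependency>

<!-- Child: hadoop-hdfs/pom.xml (sketch) redeclares the same dependency with
     the same provided scope. Because a child pom inherits the parent's
     <dependencies> section, this entry is redundant and can simply be
     deleted, which is the whole change proposed here. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-annotations</artifactId>
  <scope>provided</scope>
</dependency>
```

Goals such as `mvn dependency:tree` (and the maven-dependency-plugin's `analyze-duplicate` goal) can help surface this kind of duplicate declaration across a multi-module build.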
[jira] [Work logged] (HDFS-15900) RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode
[ https://issues.apache.org/jira/browse/HDFS-15900?focusedWorklogId=573309=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-573309 ] ASF GitHub Bot logged work on HDFS-15900: - Author: ASF GitHub Bot Created on: 29/Mar/21 03:43 Start Date: 29/Mar/21 03:43 Worklog Time Spent: 10m Work Description: hdaikoku commented on pull request #2787: URL: https://github.com/apache/hadoop/pull/2787#issuecomment-809044463 Thank you all for the review! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 573309) Time Spent: 4h (was: 3h 50m) > RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode > --- > > Key: HDFS-15900 > URL: https://issues.apache.org/jira/browse/HDFS-15900 > Project: Hadoop HDFS > Issue Type: Bug > Components: rbf >Affects Versions: 3.3.0 >Reporter: Harunobu Daikoku >Assignee: Harunobu Daikoku >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1, 3.4.0, 3.1.5, 3.2.3 > > Attachments: image.png > > Time Spent: 4h > Remaining Estimate: 0h > > We observed that when a NameNode becomes UNAVAILABLE, the corresponding > blockpool id in MembershipStoreImpl#activeNamespaces on dfsrouter > unintentionally sets to empty, its initial value. > !image.png|height=250! > As a result of this, concat operations through dfsrouter fail with the > following error as it cannot resolve the block id in the recognized active > namespaces. > {noformat} > Caused by: > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RemoteException): > Cannot locate a nameservice for block pool BP-... 
> {noformat} > A possible fix is to ignore UNAVAILABLE NameNode registrations, and set > proper namespace information obtained from available NameNode registrations > when constructing the cache of active namespaces. > > [https://github.com/apache/hadoop/blob/rel/release-3.3.0/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/impl/MembershipStoreImpl.java#L207-L221]
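The proposed fix can be sketched as a minimal, self-contained model. The class and method names below are hypothetical, not the actual MembershipStoreImpl code: the idea is simply to skip UNAVAILABLE registrations (whose block pool id may still hold its empty initial value) when rebuilding the cache of active namespaces.

```java
// Hypothetical, simplified model of the HDFS-15900 fix: ignore UNAVAILABLE
// NameNode registrations when constructing the active-namespaces cache.
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ActiveNamespaces {

    /** Hypothetical stand-in for a NameNode registration record. */
    static final class Registration {
        final String nameserviceId;
        final String blockPoolId;
        final String state; // e.g. "ACTIVE", "STANDBY", "UNAVAILABLE"

        Registration(String nameserviceId, String blockPoolId, String state) {
            this.nameserviceId = nameserviceId;
            this.blockPoolId = blockPoolId;
            this.state = state;
        }
    }

    /** Maps nameservice id -> block pool id, skipping unusable registrations. */
    static Map<String, String> build(List<Registration> registrations) {
        Map<String, String> active = new LinkedHashMap<>();
        for (Registration r : registrations) {
            if ("UNAVAILABLE".equals(r.state)) {
                continue; // the core of the fix: never trust an UNAVAILABLE entry
            }
            if (r.blockPoolId == null || r.blockPoolId.isEmpty()) {
                continue; // defensive: also skip uninitialized block pool ids
            }
            active.putIfAbsent(r.nameserviceId, r.blockPoolId);
        }
        return active;
    }

    public static void main(String[] args) {
        List<Registration> regs = Arrays.asList(
            new Registration("ns1", "", "UNAVAILABLE"),      // would poison the cache
            new Registration("ns1", "BP-1234-ns1", "ACTIVE"));
        System.out.println(build(regs)); // {ns1=BP-1234-ns1}
    }
}
```

With the UNAVAILABLE entry filtered out, the cache keeps the populated block pool id, so a subsequent concat through the router can resolve the block pool again.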
[jira] [Updated] (HDFS-15900) RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode
[ https://issues.apache.org/jira/browse/HDFS-15900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Takanobu Asanuma updated HDFS-15900: Fix Version/s: 3.2.3 3.1.5 3.4.0 3.3.1 Resolution: Fixed Status: Resolved (was: Patch Available) > RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode > --- > > Key: HDFS-15900 > URL: https://issues.apache.org/jira/browse/HDFS-15900 > Project: Hadoop HDFS > Issue Type: Bug > Components: rbf >Affects Versions: 3.3.0 >Reporter: Harunobu Daikoku >Assignee: Harunobu Daikoku >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1, 3.4.0, 3.1.5, 3.2.3 > > Attachments: image.png > > Time Spent: 3h 50m > Remaining Estimate: 0h > > We observed that when a NameNode becomes UNAVAILABLE, the corresponding > blockpool id in MembershipStoreImpl#activeNamespaces on dfsrouter > unintentionally sets to empty, its initial value. > !image.png|height=250! > As a result of this, concat operations through dfsrouter fail with the > following error as it cannot resolve the block id in the recognized active > namespaces. > {noformat} > Caused by: > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RemoteException): > Cannot locate a nameservice for block pool BP-... > {noformat} > A possible fix is to ignore UNAVAILABLE NameNode registrations, and set > proper namespace information obtained from available NameNode registrations > when constructing the cache of active namespaces. > > [https://github.com/apache/hadoop/blob/rel/release-3.3.0/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/impl/MembershipStoreImpl.java#L207-L221] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-15930) Fix some @param errors in DirectoryScanner.
[ https://issues.apache.org/jira/browse/HDFS-15930?focusedWorklogId=573300=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-573300 ] ASF GitHub Bot logged work on HDFS-15930: - Author: ASF GitHub Bot Created on: 29/Mar/21 02:47 Start Date: 29/Mar/21 02:47 Worklog Time Spent: 10m Work Description: qizhu-lucas commented on pull request #2829: URL: https://github.com/apache/hadoop/pull/2829#issuecomment-809028204 @ferhui @Hexiaoqiao @ayushtkn There are some @param errors fixed in this pr. Could you help review this? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 573300) Time Spent: 20m (was: 10m) > Fix some @param errors in DirectoryScanner. > --- > > Key: HDFS-15930 > URL: https://issues.apache.org/jira/browse/HDFS-15930 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Qi Zhu >Assignee: Qi Zhu >Priority: Minor > Labels: pull-request-available > Time Spent: 20m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
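For readers unfamiliar with this class of fix: a Javadoc `@param` error is a tag whose name does not match any actual parameter of the method, which `javadoc -Xdoclint` (and Hadoop's checkstyle run) flags. A hypothetical before/after, not the actual DirectoryScanner code:

```java
// Hypothetical illustration of the kind of @param mismatch HDFS-15930 fixes.
public class ParamDocSketch {

    // Broken form (what such fixes remove):
    //   @param retries maximum number of attempts
    // The tag names "retries", but the parameter is "maxRetries", so the
    // javadoc tooling reports an unknown-parameter error.

    /**
     * Retries an operation a bounded number of times.
     *
     * @param maxRetries maximum number of attempts before giving up
     * @return the number of attempts that will actually be made
     */
    static int retry(int maxRetries) {
        return Math.max(1, maxRetries);
    }

    public static void main(String[] args) {
        System.out.println(retry(0)); // 1
    }
}
```

The behavior of the code is unchanged by such patches; only the documentation tags are corrected so the build's doclint/checkstyle gates stay clean.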
[jira] [Updated] (HDFS-15930) Fix some @param errors in DirectoryScanner.
[ https://issues.apache.org/jira/browse/HDFS-15930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HDFS-15930: -- Labels: pull-request-available (was: ) > Fix some @param errors in DirectoryScanner. > --- > > Key: HDFS-15930 > URL: https://issues.apache.org/jira/browse/HDFS-15930 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Qi Zhu >Assignee: Qi Zhu >Priority: Minor > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h >
[jira] [Updated] (HDFS-15930) Fix some @param errors in DirectoryScanner.
[ https://issues.apache.org/jira/browse/HDFS-15930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Qi Zhu updated HDFS-15930: -- Status: Patch Available (was: Open) > Fix some @param errors in DirectoryScanner. > --- > > Key: HDFS-15930 > URL: https://issues.apache.org/jira/browse/HDFS-15930 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Qi Zhu >Assignee: Qi Zhu >Priority: Minor > Labels: pull-request-available > Time Spent: 20m > Remaining Estimate: 0h >
[jira] [Work logged] (HDFS-15930) Fix some @param errors in DirectoryScanner.
[ https://issues.apache.org/jira/browse/HDFS-15930?focusedWorklogId=573299=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-573299 ] ASF GitHub Bot logged work on HDFS-15930: - Author: ASF GitHub Bot Created on: 29/Mar/21 02:46 Start Date: 29/Mar/21 02:46 Worklog Time Spent: 10m Work Description: qizhu-lucas opened a new pull request #2829: URL: https://github.com/apache/hadoop/pull/2829 ## NOTICE Please create an issue in ASF JIRA before opening a pull request, and you need to set the title of the pull request which starts with the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.) For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 573299) Remaining Estimate: 0h Time Spent: 10m > Fix some @param errors in DirectoryScanner. > --- > > Key: HDFS-15930 > URL: https://issues.apache.org/jira/browse/HDFS-15930 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Qi Zhu >Assignee: Qi Zhu >Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-15900) RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode
[ https://issues.apache.org/jira/browse/HDFS-15900?focusedWorklogId=573298=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-573298 ] ASF GitHub Bot logged work on HDFS-15900: - Author: ASF GitHub Bot Created on: 29/Mar/21 02:45 Start Date: 29/Mar/21 02:45 Worklog Time Spent: 10m Work Description: tasanuma commented on pull request #2787: URL: https://github.com/apache/hadoop/pull/2787#issuecomment-809027425 Merged to trunk. Thanks for your contribution and discussion, @hdaikoku and @aajisaka. Thanks for your review, @goiri. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 573298) Time Spent: 3h 50m (was: 3h 40m) > RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode > --- > > Key: HDFS-15900 > URL: https://issues.apache.org/jira/browse/HDFS-15900 > Project: Hadoop HDFS > Issue Type: Bug > Components: rbf >Affects Versions: 3.3.0 >Reporter: Harunobu Daikoku >Assignee: Harunobu Daikoku >Priority: Major > Labels: pull-request-available > Attachments: image.png > > Time Spent: 3h 50m > Remaining Estimate: 0h > > We observed that when a NameNode becomes UNAVAILABLE, the corresponding > blockpool id in MembershipStoreImpl#activeNamespaces on dfsrouter > unintentionally sets to empty, its initial value. > !image.png|height=250! > As a result of this, concat operations through dfsrouter fail with the > following error as it cannot resolve the block id in the recognized active > namespaces. > {noformat} > Caused by: > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RemoteException): > Cannot locate a nameservice for block pool BP-... 
> {noformat} > A possible fix is to ignore UNAVAILABLE NameNode registrations, and set > proper namespace information obtained from available NameNode registrations > when constructing the cache of active namespaces. > > [https://github.com/apache/hadoop/blob/rel/release-3.3.0/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/impl/MembershipStoreImpl.java#L207-L221]
[jira] [Created] (HDFS-15930) Fix some @param errors in DirectoryScanner.
Qi Zhu created HDFS-15930: - Summary: Fix some @param errors in DirectoryScanner. Key: HDFS-15930 URL: https://issues.apache.org/jira/browse/HDFS-15930 Project: Hadoop HDFS Issue Type: Bug Reporter: Qi Zhu Assignee: Qi Zhu
[jira] [Work logged] (HDFS-15900) RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode
[ https://issues.apache.org/jira/browse/HDFS-15900?focusedWorklogId=573297=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-573297 ] ASF GitHub Bot logged work on HDFS-15900: - Author: ASF GitHub Bot Created on: 29/Mar/21 02:43 Start Date: 29/Mar/21 02:43 Worklog Time Spent: 10m Work Description: tasanuma merged pull request #2787: URL: https://github.com/apache/hadoop/pull/2787 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 573297) Time Spent: 3h 40m (was: 3.5h) > RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode > --- > > Key: HDFS-15900 > URL: https://issues.apache.org/jira/browse/HDFS-15900 > Project: Hadoop HDFS > Issue Type: Bug > Components: rbf >Affects Versions: 3.3.0 >Reporter: Harunobu Daikoku >Assignee: Harunobu Daikoku >Priority: Major > Labels: pull-request-available > Attachments: image.png > > Time Spent: 3h 40m > Remaining Estimate: 0h > > We observed that when a NameNode becomes UNAVAILABLE, the corresponding > blockpool id in MembershipStoreImpl#activeNamespaces on dfsrouter > unintentionally sets to empty, its initial value. > !image.png|height=250! > As a result of this, concat operations through dfsrouter fail with the > following error as it cannot resolve the block id in the recognized active > namespaces. > {noformat} > Caused by: > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RemoteException): > Cannot locate a nameservice for block pool BP-... > {noformat} > A possible fix is to ignore UNAVAILABLE NameNode registrations, and set > proper namespace information obtained from available NameNode registrations > when constructing the cache of active namespaces. 
> > [https://github.com/apache/hadoop/blob/rel/release-3.3.0/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/impl/MembershipStoreImpl.java#L207-L221]
[jira] [Work logged] (HDFS-15925) The lack of packet-level mirrorError state synchronization in BlockReceiver$PacketResponder can cause the HDFS client to hang
[ https://issues.apache.org/jira/browse/HDFS-15925?focusedWorklogId=573285=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-573285 ] ASF GitHub Bot logged work on HDFS-15925: - Author: ASF GitHub Bot Created on: 29/Mar/21 01:48 Start Date: 29/Mar/21 01:48 Worklog Time Spent: 10m Work Description: ferhui commented on pull request #2821: URL: https://github.com/apache/hadoop/pull/2821#issuecomment-809011000 @functioner Thanks for report and fix. Could you please add a test case? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 573285) Time Spent: 0.5h (was: 20m) > The lack of packet-level mirrorError state synchronization in > BlockReceiver$PacketResponder can cause the HDFS client to hang > - > > Key: HDFS-15925 > URL: https://issues.apache.org/jira/browse/HDFS-15925 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 3.2.2 >Reporter: Haoze Wu >Priority: Critical > Labels: pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > > When the datanode is receiving data block packets from a HDFS client and > forwarding these packets to a mirror (another datanode) simultaneously, a > single IOException in the datanode’s forwarding path can cause the client to > get stuck for 1 min, without any logging. After 1 min, the client’s log shows > a warning of EOFException and `Slow waitForAckedSeqno took 60106ms > (threshold=3ms)`. > Normally the datanode will inform the client of this error state > immediately, and then the client will resend the packets immediately. The > whole process is very fast. 
> After careful analyses, we find the above symptom is due to the lack of packet-level mirrorError state synchronization in BlockReceiver$PacketResponder: under certain concurrency conditions, the BlockReceiver$PacketResponder will hang for 1 min and then exit, without sending the error state to the client.
>
> *Root Cause Analysis*
> {code:java}
> // hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
> class BlockReceiver implements Closeable {
>   // ...
>   private void handleMirrorOutError(IOException ioe) throws IOException {
>     // ...
>     if (Thread.interrupted()) {
>       throw ioe;
>     } else { // encounter an error while writing to mirror
>       // continue to run even if can not write to mirror
>       // notify client of the error
>       // and wait for the client to shut down the pipeline
>       mirrorError = true;                                       // line 461
>     }
>   }
>
>   private int receivePacket() throws IOException {
>     // read the next packet
>     packetReceiver.receiveNextPacket(in);                       // line 528
>     // ...
>     boolean lastPacketInBlock = header.isLastPacketInBlock();   // line 551
>     // First write the packet to the mirror:
>     if (mirrorOut != null && !mirrorError) {
>       try {
>         // ...
>         packetReceiver.mirrorPacketTo(mirrorOut);               // line 588
>         // ...
>       } catch (IOException e) {
>         handleMirrorOutError(e);                                // line 604
>       }
>     }
>     // ...
>     return lastPacketInBlock ? -1 : len;                        // line 849
>   }
>
>   void receiveBlock(...) throws IOException {
>     // ...
>     try {
>       if (isClient && !isTransfer) {
>         responder = new Daemon(datanode.threadGroup,
>             new PacketResponder(replyOut, mirrIn, downstreams));
>         responder.start();                                      // line 968
>       }
>       while (receivePacket() >= 0) { /* Receive until the last packet */ } // line 971
>       // wait for all outstanding packet responses. And then
>       // indicate responder to gracefully shutdown.
>       // Mark that responder has been closed for future processing
>       if (responder != null) {
>         ((PacketResponder) responder.getRunnable()).close();    // line 977
>         responderClosed = true;
>       }
>       // ...
>     } catch (IOException ioe) {                                 // line 1003
>       // ...
>     } finally {
>       // ...
>       if (!responderClosed) { // Data transfer was not complete.
>         if (responder != null) {
>           // ...
> {code}
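The missing synchronization can be illustrated with a minimal two-thread model. The names here are hypothetical and this is not the actual patch: the point is that the receiver thread must publish the mirror-error state through a properly synchronized flag, and the responder must consult it per packet, so the client is notified immediately instead of after the 60-second timeout.

```java
// Hypothetical sketch of packet-level error-state sharing between the
// receiver and responder threads. An AtomicBoolean guarantees that the
// responder observes the flag set by the receiver without a data race.
import java.util.concurrent.atomic.AtomicBoolean;

public class MirrorErrorSketch {
    private final AtomicBoolean mirrorError = new AtomicBoolean(false);

    /** Receiver thread: called from the catch-block around the mirror write. */
    void onMirrorFailure() {
        mirrorError.set(true);
    }

    /** Responder thread: checked before acking each packet, so the error
     *  surfaces to the client right away rather than on a timeout. */
    boolean mustNotifyClient() {
        return mirrorError.get();
    }

    public static void main(String[] args) throws InterruptedException {
        MirrorErrorSketch s = new MirrorErrorSketch();
        Thread receiver = new Thread(s::onMirrorFailure);
        receiver.start();
        receiver.join(); // the write happens-before join() returns
        System.out.println(s.mustNotifyClient()); // true
    }
}
```

In the real code the flag already exists (mirrorError); the report argues the gap is in when the responder path observes it, which is why a per-packet check matters.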
[jira] [Resolved] (HDFS-8506) List of 33 Unstable tests on branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-8506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka resolved HDFS-8506. - Resolution: Won't Fix branch-2.7 is EoL. Closing as won't fix. > List of 33 Unstable tests on branch-2.7 > --- > > Key: HDFS-8506 > URL: https://issues.apache.org/jira/browse/HDFS-8506 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Affects Versions: 2.7.1 > Environment: Ubuntu / x86_64 > Hadoop branch-2.7 (source code of Monday May 26th) >Reporter: Tony Reix >Priority: Major > Attachments: Unstable-branch-2.7-May26th, > UnstableDetails-branch-2.7-May26th > > > On my Ubuntu / x86_64 machine, configured for Hadoop since months, I've run > Hadoop tests of branch branch-2.7 (source code of Monday May 26th) during > days. It produced 14 runs in the EXACT same environment. And it shows that > several tests sometimes fail, randomly. > 12 runs gave the exact same number of tests done and tests skipped: > - 10977 tests > - 254 skipped > 1 run gave only 10972 tests. Another gave only 9760 tests. > I've used the 12 runs with 10977 tests for building the attached result file, > which shows that 33 tests sometimes fail.
> T: Tests, F: Failures, E: Errors, S: Skipped
> NN/n: number of times the issue appeared out of the 12 runs
> m-M: minimum number of failures up to Maximum number of failures
> Example:
> || Test || T || F || E || S || NN/n ||
> | cli.TestHDFSCLI | 1 | 0-1 | 0 | 0 | 11/12 |
> | hdfs.TestAppendSnapshotTruncate | 1 | 0 | 0-1 | 0 | 1/12 |
> ...
[jira] [Work logged] (HDFS-15926) hadoop-annotations is duplicated in hadoop-hdfs
[ https://issues.apache.org/jira/browse/HDFS-15926?focusedWorklogId=573255=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-573255 ] ASF GitHub Bot logged work on HDFS-15926: - Author: ASF GitHub Bot Created on: 28/Mar/21 20:34 Start Date: 28/Mar/21 20:34 Worklog Time Spent: 10m Work Description: ayushtkn commented on pull request #2823: URL: https://github.com/apache/hadoop/pull/2823#issuecomment-808955170 Thanx @virajjasani , Same case in `hadoop-common` as well? Can you check if there is similar case anywhere else as well, we can remove the redundancy in one go. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 573255) Time Spent: 1h (was: 50m) > hadoop-annotations is duplicated in hadoop-hdfs > --- > > Key: HDFS-15926 > URL: https://issues.apache.org/jira/browse/HDFS-15926 > Project: Hadoop HDFS > Issue Type: Task >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 1h > Remaining Estimate: 0h > > hadoop-annotations is duplicated dependency in hadoop-hdfs as it is also > declared in parent hadoop-project-dist pom. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-15929) Replace RAND_pseudo_bytes in util.cc
[ https://issues.apache.org/jira/browse/HDFS-15929?focusedWorklogId=573180=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-573180 ] ASF GitHub Bot logged work on HDFS-15929: - Author: ASF GitHub Bot Created on: 28/Mar/21 13:33 Start Date: 28/Mar/21 13:33 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2826: URL: https://github.com/apache/hadoop/pull/2826#issuecomment-808897944 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 35s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 37s | | trunk passed | | +1 :green_heart: | compile | 2m 39s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 2m 45s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | mvnsite | 0m 29s | | trunk passed | | +1 :green_heart: | shadedclient | 53m 15s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 15s | | the patch passed | | +1 :green_heart: | compile | 2m 34s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | -1 :x: | cc | 2m 34s | [/results-compile-cc-hadoop-hdfs-project_hadoop-hdfs-native-client-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2826/3/artifact/out/results-compile-cc-hadoop-hdfs-project_hadoop-hdfs-native-client-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) | hadoop-hdfs-project_hadoop-hdfs-native-client-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 7 new + 47 unchanged - 13 fixed = 54 total (was 60) | | +1 :green_heart: | golang | 2m 34s | | the patch passed | | +1 :green_heart: | javac | 2m 34s | | the patch passed | | +1 :green_heart: | compile | 2m 34s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | -1 :x: | cc | 2m 34s | [/results-compile-cc-hadoop-hdfs-project_hadoop-hdfs-native-client-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2826/3/artifact/out/results-compile-cc-hadoop-hdfs-project_hadoop-hdfs-native-client-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt) | hadoop-hdfs-project_hadoop-hdfs-native-client-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 6 new + 48 unchanged - 12 fixed = 54 total (was 60) | | +1 :green_heart: | golang | 2m 34s | | the patch passed | | +1 :green_heart: | javac | 2m 34s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | mvnsite | 0m 20s | | the patch passed | | +1 :green_heart: | shadedclient | 13m 50s | | patch has no errors when building and testing our client artifacts. 
| _ Other Tests _ | | +1 :green_heart: | unit | 41m 18s | | hadoop-hdfs-native-client in the patch passed. | | +1 :green_heart: | asflicense | 0m 34s | | The patch does not generate ASF License warnings. | | | | 117m 33s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2826/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2826 | | Optional Tests | dupname asflicense compile cc mvnsite javac unit codespell golang | | uname | Linux 320e2a3702eb 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 26ab61c5c03a3551bc0659a6b4d257d0cc4f1514 | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions |
[jira] [Work logged] (HDFS-15922) Use memcpy for copying non-null terminated string in jni_helper.c
[ https://issues.apache.org/jira/browse/HDFS-15922?focusedWorklogId=573178=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-573178 ] ASF GitHub Bot logged work on HDFS-15922: - Author: ASF GitHub Bot Created on: 28/Mar/21 13:20 Start Date: 28/Mar/21 13:20 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2827: URL: https://github.com/apache/hadoop/pull/2827#issuecomment-808896291 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 32s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 25s | | trunk passed | | +1 :green_heart: | compile | 2m 47s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 2m 47s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | mvnsite | 0m 29s | | trunk passed | | +1 :green_heart: | shadedclient | 53m 29s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 16s | | the patch passed | | +1 :green_heart: | compile | 2m 34s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | cc | 2m 34s | | hadoop-hdfs-project_hadoop-hdfs-native-client-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 0 new + 52 unchanged - 8 fixed = 52 total (was 60) | | +1 :green_heart: | golang | 2m 34s | | the patch passed | | +1 :green_heart: | javac | 2m 34s | | the patch passed | | +1 :green_heart: | compile | 2m 35s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | cc | 2m 35s | | hadoop-hdfs-project_hadoop-hdfs-native-client-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 0 new + 52 unchanged - 8 fixed = 52 total (was 60) | | +1 :green_heart: | golang | 2m 35s | | the patch passed | | +1 :green_heart: | javac | 2m 35s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | mvnsite | 0m 17s | | the patch passed | | +1 :green_heart: | shadedclient | 13m 37s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 41m 44s | | hadoop-hdfs-native-client in the patch passed. | | +1 :green_heart: | asflicense | 0m 34s | | The patch does not generate ASF License warnings. 
| | | | 117m 53s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2827/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2827 | | Optional Tests | dupname asflicense compile cc mvnsite javac unit codespell golang | | uname | Linux f525e1fcc389 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 350ea2f620f47e2652d52e1a72f9b477fb358e4c | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2827/1/testReport/ | | Max. process+thread count | 717 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2827/1/console | | versions | git=2.25.1 maven=3.6.3 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was
[jira] [Commented] (HDFS-15863) RBF: Validation message to be corrected in FairnessPolicyController
[ https://issues.apache.org/jira/browse/HDFS-15863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17310180#comment-17310180 ] Hadoop QA commented on HDFS-15863: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Logfile || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 23m 14s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} {color} | {color:green} 0m 0s{color} | {color:green}test4tests{color} | {color:green} The patch appears to include 1 new or modified test files. 
{color} | || || || || {color:brown} trunk Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 3s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s{color} | {color:green}{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 56s{color} | {color:green}{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s{color} | {color:green}{color} | {color:green} trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 18m 37s{color} | {color:blue}{color} | {color:blue} Both FindBugs and SpotBugs are enabled, using SpotBugs. 
{color} | | {color:green}+1{color} | {color:green} spotbugs {color} | {color:green} 1m 14s{color} | {color:green}{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 32s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s{color} | {color:green}{color} | {color:green} the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 33s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s{color} | {color:green}{color} | {color:green} the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 27s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s{color} | {color:green}{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 58s{color} | {color:green}{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s{color} | {color:green}{color} | {color:green} the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green}{color} | {color:green} the patch passed
[jira] [Work logged] (HDFS-15922) Use memcpy for copying non-null terminated string in jni_helper.c
[ https://issues.apache.org/jira/browse/HDFS-15922?focusedWorklogId=573165&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-573165 ] ASF GitHub Bot logged work on HDFS-15922: - Author: ASF GitHub Bot Created on: 28/Mar/21 11:21 Start Date: 28/Mar/21 11:21 Worklog Time Spent: 10m Work Description: GauthamBanasandra opened a new pull request #2827: URL: https://github.com/apache/hadoop/pull/2827 * strncpy triggers a compiler warning if the destination string isn't null terminated. Since we append a custom character at the end ourselves, the warning reported for strncpy is valid, but not relevant. * Thus, we use memcpy to avoid this warning. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 573165) Time Spent: 1h (was: 50m) > Use memcpy for copying non-null terminated string in jni_helper.c > - > > Key: HDFS-15922 > URL: https://issues.apache.org/jira/browse/HDFS-15922 > Project: Hadoop HDFS > Issue Type: Bug > Components: libhdfs++ >Affects Versions: 3.4.0 >Reporter: Gautham Banasandra >Assignee: Gautham Banasandra >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 1h > Remaining Estimate: 0h > > We currently get a warning while compiling HDFS native client - > {code} > [WARNING] inlined from 'wildcard_expandPath' at > /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2792/src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c:427:21, > [WARNING] /usr/include/x86_64-linux-gnu/bits/string_fortified.h:106:10: > warning: '__builtin_strncpy' output truncated before terminating nul copying > as many bytes from a string as its length [-Wstringop-truncation] > [WARNING] > 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2792/src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c:402:43: > note: length computed here > {code} > The scenario here is such that the copied string is deliberately not null > terminated, since we want to insert a PATH_SEPARATOR ourselves. The warning > reported for strncpy is valid, but not applicable in this scenario. Thus, we > need to use memcpy, which doesn't care whether the string is null terminated. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
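The pattern the PR describes — copying a string without its terminator and then appending a path separator by hand — can be sketched in C. This is a minimal illustration of the technique, not the actual jni_helper.c code; the function name and shape are assumptions.

```c
#include <stdlib.h>
#include <string.h>

/* Join "dir" and "file" with a separator. The first copy is deliberately
 * not NUL-terminated at that point, which is exactly the case where
 * strncpy draws -Wstringop-truncation from GCC; memcpy states the intent
 * plainly and produces no warning. */
char *join_path(const char *dir, char sep, const char *file) {
    size_t dlen = strlen(dir), flen = strlen(file);
    char *out = malloc(dlen + 1 + flen + 1);
    if (out == NULL)
        return NULL;
    memcpy(out, dir, dlen);                 /* no NUL copied: intentional */
    out[dlen] = sep;                        /* insert the separator ourselves */
    memcpy(out + dlen + 1, file, flen + 1); /* copies file's trailing NUL */
    return out;
}
```

With `strncpy(out, dir, dlen)` in place of the first `memcpy`, fortified builds warn that the output is "truncated before terminating nul", even though the truncation is intended here.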
[jira] [Work logged] (HDFS-15929) Replace RAND_pseudo_bytes in util.cc
[ https://issues.apache.org/jira/browse/HDFS-15929?focusedWorklogId=573163=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-573163 ] ASF GitHub Bot logged work on HDFS-15929: - Author: ASF GitHub Bot Created on: 28/Mar/21 11:15 Start Date: 28/Mar/21 11:15 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2826: URL: https://github.com/apache/hadoop/pull/2826#issuecomment-808882071 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 32s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 32s | | trunk passed | | +1 :green_heart: | compile | 2m 43s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 2m 44s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | mvnsite | 0m 25s | | trunk passed | | +1 :green_heart: | shadedclient | 53m 20s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 16s | | the patch passed | | +1 :green_heart: | compile | 2m 36s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | -1 :x: | cc | 2m 36s | [/results-compile-cc-hadoop-hdfs-project_hadoop-hdfs-native-client-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2826/2/artifact/out/results-compile-cc-hadoop-hdfs-project_hadoop-hdfs-native-client-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) | hadoop-hdfs-project_hadoop-hdfs-native-client-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 3 new + 51 unchanged - 9 fixed = 54 total (was 60) | | +1 :green_heart: | golang | 2m 36s | | the patch passed | | +1 :green_heart: | javac | 2m 36s | | the patch passed | | +1 :green_heart: | compile | 2m 33s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | cc | 2m 33s | | hadoop-hdfs-project_hadoop-hdfs-native-client-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 0 new + 54 unchanged - 6 fixed = 54 total (was 60) | | +1 :green_heart: | golang | 2m 33s | | the patch passed | | +1 :green_heart: | javac | 2m 33s | | the patch passed | | -1 :x: | blanks | 0m 0s | [/blanks-tabs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2826/2/artifact/out/blanks-tabs.txt) | The patch 2 line(s) with tabs. | | +1 :green_heart: | mvnsite | 0m 16s | | the patch passed | | +1 :green_heart: | shadedclient | 13m 17s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 46m 33s | | hadoop-hdfs-native-client in the patch passed. | | +1 :green_heart: | asflicense | 0m 33s | | The patch does not generate ASF License warnings. 
| | | | 122m 17s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2826/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2826 | | Optional Tests | dupname asflicense compile cc mvnsite javac unit codespell golang | | uname | Linux f03230b8c966 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / ad532f6f47643f7270c089dd31a9b03b265db987 | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2826/2/testReport/ | | Max.
[jira] [Work logged] (HDFS-15929) Replace RAND_pseudo_bytes in util.cc
[ https://issues.apache.org/jira/browse/HDFS-15929?focusedWorklogId=573162=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-573162 ] ASF GitHub Bot logged work on HDFS-15929: - Author: ASF GitHub Bot Created on: 28/Mar/21 11:11 Start Date: 28/Mar/21 11:11 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2826: URL: https://github.com/apache/hadoop/pull/2826#issuecomment-808881629 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 33s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 30s | | trunk passed | | +1 :green_heart: | compile | 2m 41s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 2m 42s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | mvnsite | 0m 31s | | trunk passed | | +1 :green_heart: | shadedclient | 53m 7s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 15s | | the patch passed | | +1 :green_heart: | compile | 2m 30s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | -1 :x: | cc | 2m 30s | [/results-compile-cc-hadoop-hdfs-project_hadoop-hdfs-native-client-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2826/1/artifact/out/results-compile-cc-hadoop-hdfs-project_hadoop-hdfs-native-client-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) | hadoop-hdfs-project_hadoop-hdfs-native-client-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 3 new + 51 unchanged - 9 fixed = 54 total (was 60) | | +1 :green_heart: | golang | 2m 30s | | the patch passed | | +1 :green_heart: | javac | 2m 30s | | the patch passed | | +1 :green_heart: | compile | 2m 33s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | cc | 2m 33s | | hadoop-hdfs-project_hadoop-hdfs-native-client-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 0 new + 54 unchanged - 6 fixed = 54 total (was 60) | | +1 :green_heart: | golang | 2m 33s | | the patch passed | | +1 :green_heart: | javac | 2m 33s | | the patch passed | | -1 :x: | blanks | 0m 0s | [/blanks-tabs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2826/1/artifact/out/blanks-tabs.txt) | The patch 2 line(s) with tabs. | | +1 :green_heart: | mvnsite | 0m 20s | | the patch passed | | +1 :green_heart: | shadedclient | 13m 46s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 47m 19s | | hadoop-hdfs-native-client in the patch passed. | | +1 :green_heart: | asflicense | 0m 34s | | The patch does not generate ASF License warnings. 
| | | | 123m 11s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2826/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2826 | | Optional Tests | dupname asflicense compile cc mvnsite javac unit codespell golang | | uname | Linux 2f0bd0e78023 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 34533fdaf0132c638cc2ed6938cf86bd0aa34a6b | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2826/1/testReport/ | | Max.
[jira] [Updated] (HDFS-15929) Replace RAND_pseudo_bytes in util.cc
[ https://issues.apache.org/jira/browse/HDFS-15929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gautham Banasandra updated HDFS-15929: -- Description: RAND_pseudo_bytes was deprecated in OpenSSL 1.1.1. We get the following warning during compilation that it's deprecated - {code} /mnt/c/Users/Gautham/projects/apache/wsl/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/util.cc: In function ‘std::string hdfs::GetRandomClientName()’: /mnt/c/Users/Gautham/projects/apache/wsl/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/util.cc:78:31: warning: ‘int RAND_pseudo_bytes(unsigned char*, int)’ is deprecated [-Wdeprecated-declarations] 78 | RAND_pseudo_bytes(&buf[0], 8); | ^ In file included from /usr/include/openssl/e_os2.h:13, from /usr/include/openssl/ossl_typ.h:19, from /usr/include/openssl/rand.h:14, from /mnt/c/Users/Gautham/projects/apache/wsl/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/util.h:29, from /mnt/c/Users/Gautham/projects/apache/wsl/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/util.cc:19: /usr/include/openssl/rand.h:44:1: note: declared here 44 | DEPRECATEDIN_1_1_0(int RAND_pseudo_bytes(unsigned char *buf, int num)) | ^~ /mnt/c/Users/Gautham/projects/apache/wsl/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/util.cc:78:31: warning: ‘int RAND_pseudo_bytes(unsigned char*, int)’ is deprecated [-Wdeprecated-declarations] 78 | RAND_pseudo_bytes(&buf[0], 8); | ^ {code} was: RAND_pseudo_bytes was deprecated in OpenSSL 1.1.1. 
We get the following warning during compilation that it's deprecated - {code} [WARNING] /home/jenkins/jenkins-agent/workspace/hadoop-multibranch_PR-2792/src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/util.cc: warning: 'int RAND_pseudo_bytes(unsigned char*, int)' is deprecated [-Wdeprecated-declarations] [WARNING] from /home/jenkins/jenkins-agent/workspace/hadoop-multibranch_PR-2792/src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/util.cc [WARNING] /usr/include/openssl/rand.h:44:1: note: declared here {code} > Replace RAND_pseudo_bytes in util.cc > > > Key: HDFS-15929 > URL: https://issues.apache.org/jira/browse/HDFS-15929 > Project: Hadoop HDFS > Issue Type: Bug > Components: libhdfs++ >Affects Versions: 3.4.0 >Reporter: Gautham Banasandra >Assignee: Gautham Banasandra >Priority: Critical > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > RAND_pseudo_bytes was deprecated in OpenSSL 1.1.1. 
We get the following > warning during compilation that it's deprecated - > {code} > /mnt/c/Users/Gautham/projects/apache/wsl/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/util.cc: > In function ‘std::string hdfs::GetRandomClientName()’: > /mnt/c/Users/Gautham/projects/apache/wsl/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/util.cc:78:31: > warning: ‘int RAND_pseudo_bytes(unsigned char*, int)’ is deprecated > [-Wdeprecated-declarations] >78 | RAND_pseudo_bytes(&buf[0], 8); > | ^ > In file included from /usr/include/openssl/e_os2.h:13, > from /usr/include/openssl/ossl_typ.h:19, > from /usr/include/openssl/rand.h:14, > from > /mnt/c/Users/Gautham/projects/apache/wsl/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/util.h:29, > from > /mnt/c/Users/Gautham/projects/apache/wsl/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/util.cc:19: > /usr/include/openssl/rand.h:44:1: note: declared here >44 | DEPRECATEDIN_1_1_0(int RAND_pseudo_bytes(unsigned char *buf, int num)) > | ^~ > /mnt/c/Users/Gautham/projects/apache/wsl/hadoop/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/util.cc:78:31: > warning: ‘int RAND_pseudo_bytes(unsigned char*, int)’ is deprecated > [-Wdeprecated-declarations] >78 | RAND_pseudo_bytes(&buf[0], 8); > | ^ > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For
[jira] [Commented] (HDFS-15764) Notify Namenode missing or new block on disk as soon as possible
[ https://issues.apache.org/jira/browse/HDFS-15764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17310161#comment-17310161 ] Ayush Saxena commented on HDFS-15764: - Committed to trunk. Thanx Everyone!!! > Notify Namenode missing or new block on disk as soon as possible > > > Key: HDFS-15764 > URL: https://issues.apache.org/jira/browse/HDFS-15764 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Yang Yun >Assignee: Yang Yun >Priority: Minor > Attachments: HDFS-15764.001.patch, HDFS-15764.002.patch, > HDFS-15764.003.patch, HDFS-15764.004.patch, HDFS-15764.005.patch, > HDFS-15764.006.patch, HDFS-15764.007.patch > > > When a block file is deleted on disk or copied back to the disk, the > DirectoryScanner can find the change, but the namenode knows about the change > only at the next full report. And in a big cluster the period of the full > report is set to a long time interval. > Call notifyNamenodeDeletedBlock if block files are deleted and call > notifyNamenodeReceivedBlock if the block file is found again. So the > incremental block report can send the change to the namenode in the next heartbeat. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
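The mechanism described above — the scanner queuing per-block notifications so the next heartbeat's incremental block report carries them, rather than waiting for the full report — can be sketched roughly as follows. This is an illustrative C sketch only; the real implementation is Java inside the DataNode, and all names below are assumptions.

```c
#include <stddef.h>

/* When the scanner finds a block file missing (deleted on disk) or present
 * again (copied back), it enqueues an event; the next heartbeat drains the
 * queue into an incremental block report instead of waiting for the
 * periodic full report. */
enum block_event_kind { BLOCK_DELETED, BLOCK_RECEIVED };

struct block_event {
    long block_id;
    enum block_event_kind kind;
};

#define MAX_PENDING 1024
static struct block_event pending[MAX_PENDING];
static size_t npending = 0;

/* Called by the scanner as soon as a change is detected. */
void notify_block_change(long block_id, enum block_event_kind kind) {
    if (npending < MAX_PENDING)
        pending[npending++] = (struct block_event){block_id, kind};
}

/* Called at heartbeat time: copy up to "max" queued events into the
 * report and clear the queue (a real queue would retain any overflow). */
size_t drain_incremental_report(struct block_event *out, size_t max) {
    size_t n = npending < max ? npending : max;
    for (size_t i = 0; i < n; i++)
        out[i] = pending[i];
    npending = 0;
    return n;
}
```

The point of the change is latency: the full report interval can be long on a big cluster, while the heartbeat-driven drain above surfaces the change within one heartbeat.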
[jira] [Updated] (HDFS-15764) Notify Namenode missing or new block on disk as soon as possible
[ https://issues.apache.org/jira/browse/HDFS-15764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena updated HDFS-15764: Fix Version/s: 3.4.0 Hadoop Flags: Reviewed Resolution: Fixed Status: Resolved (was: Patch Available) > Notify Namenode missing or new block on disk as soon as possible > > > Key: HDFS-15764 > URL: https://issues.apache.org/jira/browse/HDFS-15764 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Yang Yun >Assignee: Yang Yun >Priority: Minor > Fix For: 3.4.0 > > Attachments: HDFS-15764.001.patch, HDFS-15764.002.patch, > HDFS-15764.003.patch, HDFS-15764.004.patch, HDFS-15764.005.patch, > HDFS-15764.006.patch, HDFS-15764.007.patch > > > When a block file is deleted on disk or copied back to the disk, the > DirectoryScanner can find the change, but the namenode knows about the change > only at the next full report. And in a big cluster the period of the full > report is set to a long time interval. > Call notifyNamenodeDeletedBlock if block files are deleted and call > notifyNamenodeReceivedBlock if the block file is found again. So the > incremental block report can send the change to the namenode in the next heartbeat. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15764) Notify Namenode missing or new block on disk as soon as possible
[ https://issues.apache.org/jira/browse/HDFS-15764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17310159#comment-17310159 ] Ayush Saxena commented on HDFS-15764: - v007 LGTM +1 > Notify Namenode missing or new block on disk as soon as possible > > > Key: HDFS-15764 > URL: https://issues.apache.org/jira/browse/HDFS-15764 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Yang Yun >Assignee: Yang Yun >Priority: Minor > Attachments: HDFS-15764.001.patch, HDFS-15764.002.patch, > HDFS-15764.003.patch, HDFS-15764.004.patch, HDFS-15764.005.patch, > HDFS-15764.006.patch, HDFS-15764.007.patch > > > When a block file is deleted on disk or copied back to the disk, the > DirectoryScanner can find the change, but the namenode knows about the change > only at the next full report. And in a big cluster the period of the full > report is set to a long time interval. > Call notifyNamenodeDeletedBlock if block files are deleted and call > notifyNamenodeReceivedBlock if the block file is found again. So the > incremental block report can send the change to the namenode in the next heartbeat. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-15863) RBF: Validation message to be corrected in FairnessPolicyController
[ https://issues.apache.org/jira/browse/HDFS-15863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17310156#comment-17310156 ] Surendra Singh Lilhore edited comment on HDFS-15863 at 3/28/21, 10:18 AM: -- +1 for v5. Triggered build : https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/559/ was (Author: surendrasingh): +1 for v5. > RBF: Validation message to be corrected in FairnessPolicyController > --- > > Key: HDFS-15863 > URL: https://issues.apache.org/jira/browse/HDFS-15863 > Project: Hadoop HDFS > Issue Type: Bug > Components: rbf >Affects Versions: 3.4.0 >Reporter: Renukaprasad C >Assignee: Renukaprasad C >Priority: Minor > Attachments: HDFS-15863.001.patch, HDFS-15863.002.patch, > HDFS-15863.003.patch, HDFS-15863.004.patch, HDFS-15863.005.patch > > > org.apache.hadoop.hdfs.server.federation.fairness.StaticRouterRpcFairnessPolicyController#validateCount > When dfs.federation.router.handler.count is less than the total dedicated > handlers for all NS, the error message shows 0 and negative values; instead > it can show the actual configured values. > Current message is : "Available handlers -5 lower than min 0 for nsId nn1" > This can be changed to: "Configured handlers > ${DFS_ROUTER_HANDLER_COUNT_KEY}=10 lower than min 15 for nsId nn1", where 10 > is the handler count and 15 is the sum of the dedicated handler counts. > Related to: HDFS-14090 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15863) RBF: Validation message to be corrected in FairnessPolicyController
[ https://issues.apache.org/jira/browse/HDFS-15863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17310156#comment-17310156 ] Surendra Singh Lilhore commented on HDFS-15863: --- +1 for v5. > RBF: Validation message to be corrected in FairnessPolicyController > --- > > Key: HDFS-15863 > URL: https://issues.apache.org/jira/browse/HDFS-15863 > Project: Hadoop HDFS > Issue Type: Bug > Components: rbf >Affects Versions: 3.4.0 >Reporter: Renukaprasad C >Assignee: Renukaprasad C >Priority: Minor > Attachments: HDFS-15863.001.patch, HDFS-15863.002.patch, > HDFS-15863.003.patch, HDFS-15863.004.patch, HDFS-15863.005.patch > > > org.apache.hadoop.hdfs.server.federation.fairness.StaticRouterRpcFairnessPolicyController#validateCount > When dfs.federation.router.handler.count is less than the total dedicated > handlers for all NS, the error message shows 0 and negative values; instead > it can show the actual configured values. > Current message is : "Available handlers -5 lower than min 0 for nsId nn1" > This can be changed to: "Configured handlers > ${DFS_ROUTER_HANDLER_COUNT_KEY}=10 lower than min 15 for nsId nn1", where 10 > is the handler count and 15 is the sum of the dedicated handler counts. > Related to: HDFS-14090 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
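The arithmetic behind the corrected message can be sketched in C (the function and message shape below are illustrative, not the actual Java StaticRouterRpcFairnessPolicyController code): with dfs.federation.router.handler.count=10 and dedicated handlers summing to 15, the message should report those two numbers rather than the negative remainder.

```c
#include <stdio.h>

/* Returns 1 if the configured handler pool covers the sum of dedicated
 * per-nameservice handlers; otherwise returns 0 and writes a message
 * that reports the configured value and the required minimum, instead
 * of the old "Available handlers -5 lower than min 0" style. */
int validate_handler_count(int configured, const int *dedicated, int n,
                           char *msg, size_t msglen) {
    int required = 0;
    for (int i = 0; i < n; i++)
        required += dedicated[i];
    if (configured < required) {
        snprintf(msg, msglen,
                 "Configured handlers dfs.federation.router.handler.count=%d "
                 "lower than min %d", configured, required);
        return 0;
    }
    return 1;
}
```

With `configured = 10` and dedicated handlers `{5, 10}`, the old formulation would surface the remainder (10 - 15 = -5); the message above surfaces 10 and 15 directly.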
[jira] [Work logged] (HDFS-15929) Replace RAND_pseudo_bytes in util.cc
[ https://issues.apache.org/jira/browse/HDFS-15929?focusedWorklogId=573152&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-573152 ] ASF GitHub Bot logged work on HDFS-15929: - Author: ASF GitHub Bot Created on: 28/Mar/21 09:06 Start Date: 28/Mar/21 09:06 Worklog Time Spent: 10m Work Description: GauthamBanasandra opened a new pull request #2826: URL: https://github.com/apache/hadoop/pull/2826 * RAND_pseudo_bytes is deprecated in OpenSSL 1.1.1 and must be replaced by RAND_bytes. * Refactored usages of GetRandomClientName, which now returns the client name wrapped in a shared_ptr to enable a null check. We use a null check as a means to get functionality similar to monads in functional programming. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 573152) Remaining Estimate: 0h Time Spent: 10m > Replace RAND_pseudo_bytes in util.cc > > > Key: HDFS-15929 > URL: https://issues.apache.org/jira/browse/HDFS-15929 > Project: Hadoop HDFS > Issue Type: Bug > Components: libhdfs++ >Affects Versions: 3.4.0 >Reporter: Gautham Banasandra >Assignee: Gautham Banasandra >Priority: Critical > Time Spent: 10m > Remaining Estimate: 0h > > RAND_pseudo_bytes was deprecated in OpenSSL 1.1.1. 
We get the following > warning during compilation that it's deprecated - > {code} > [WARNING] > /home/jenkins/jenkins-agent/workspace/hadoop-multibranch_PR-2792/src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/util.cc: > warning: 'int RAND_pseudo_bytes(unsigned char*, int)' is deprecated > [-Wdeprecated-declarations] > [WARNING] from > /home/jenkins/jenkins-agent/workspace/hadoop-multibranch_PR-2792/src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/util.cc > [WARNING] /usr/include/openssl/rand.h:44:1: note: declared here > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15929) Replace RAND_pseudo_bytes in util.cc
[ https://issues.apache.org/jira/browse/HDFS-15929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HDFS-15929: -- Labels: pull-request-available (was: ) > Replace RAND_pseudo_bytes in util.cc > > > Key: HDFS-15929 > URL: https://issues.apache.org/jira/browse/HDFS-15929 > Project: Hadoop HDFS > Issue Type: Bug > Components: libhdfs++ >Affects Versions: 3.4.0 >Reporter: Gautham Banasandra >Assignee: Gautham Banasandra >Priority: Critical > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > RAND_pseudo_bytes was deprecated in OpenSSL 1.1.1. We get the following > warning during compilation that it's deprecated - > {code} > [WARNING] > /home/jenkins/jenkins-agent/workspace/hadoop-multibranch_PR-2792/src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/util.cc: > warning: 'int RAND_pseudo_bytes(unsigned char*, int)' is deprecated > [-Wdeprecated-declarations] > [WARNING] from > /home/jenkins/jenkins-agent/workspace/hadoop-multibranch_PR-2792/src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/util.cc > [WARNING] /usr/include/openssl/rand.h:44:1: note: declared here > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-15929) Replace RAND_pseudo_bytes in util.cc
Gautham Banasandra created HDFS-15929: - Summary: Replace RAND_pseudo_bytes in util.cc Key: HDFS-15929 URL: https://issues.apache.org/jira/browse/HDFS-15929 Project: Hadoop HDFS Issue Type: Bug Components: libhdfs++ Affects Versions: 3.4.0 Reporter: Gautham Banasandra Assignee: Gautham Banasandra RAND_pseudo_bytes was deprecated in OpenSSL 1.1.1. We get the following warning during compilation that it's deprecated - {code} [WARNING] /home/jenkins/jenkins-agent/workspace/hadoop-multibranch_PR-2792/src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/util.cc: warning: 'int RAND_pseudo_bytes(unsigned char*, int)' is deprecated [-Wdeprecated-declarations] [WARNING] from /home/jenkins/jenkins-agent/workspace/hadoop-multibranch_PR-2792/src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/util.cc [WARNING] /usr/include/openssl/rand.h:44:1: note: declared here {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-15926) hadoop-annotations is duplicated in hadoop-hdfs
[ https://issues.apache.org/jira/browse/HDFS-15926?focusedWorklogId=573142&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-573142 ] ASF GitHub Bot logged work on HDFS-15926: - Author: ASF GitHub Bot Created on: 28/Mar/21 08:35 Start Date: 28/Mar/21 08:35 Worklog Time Spent: 10m Work Description: virajjasani commented on pull request #2823: URL: https://github.com/apache/hadoop/pull/2823#issuecomment-808865841 It's coming through the parent pom `hadoop-project-dist`, defined [here](https://github.com/apache/hadoop/blob/e5de76a686c9870413890c8256c9fc04a72abe1b/hadoop-hdfs-project/hadoop-hdfs/pom.xml#L20-L25). The dependency is declared [here](https://github.com/apache/hadoop/blob/trunk/hadoop-project-dist/pom.xml#L54-L58), and since it comes in from the parent pom, it's a duplicate in the `hadoop-hdfs` pom. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 573142) Time Spent: 50m (was: 40m) > hadoop-annotations is duplicated in hadoop-hdfs > --- > > Key: HDFS-15926 > URL: https://issues.apache.org/jira/browse/HDFS-15926 > Project: Hadoop HDFS > Issue Type: Task >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 50m > Remaining Estimate: 0h > > hadoop-annotations is duplicated dependency in hadoop-hdfs as it is also > declared in parent hadoop-project-dist pom. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
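For context, the duplication being discussed looks roughly like the fragment below. This is an illustrative sketch, not a verbatim copy of either pom (the exact surrounding sections are elided); the point is that the same `provided`-scope declaration appears in both the parent and the child, and Maven child modules inherit the parent's `<dependencies>`, so the child's copy is redundant:

```xml
<!-- In the parent, hadoop-project-dist/pom.xml (illustrative fragment): -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-annotations</artifactId>
  <scope>provided</scope>
</dependency>

<!-- Repeating the same declaration in hadoop-hdfs/pom.xml is redundant:
     the entry is already inherited from the parent and can be removed. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-annotations</artifactId>
  <scope>provided</scope>
</dependency>
```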
[jira] [Work logged] (HDFS-15926) hadoop-annotations is duplicated in hadoop-hdfs
[ https://issues.apache.org/jira/browse/HDFS-15926?focusedWorklogId=573137&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-573137 ] ASF GitHub Bot logged work on HDFS-15926: - Author: ASF GitHub Bot Created on: 28/Mar/21 07:45 Start Date: 28/Mar/21 07:45 Worklog Time Spent: 10m Work Description: ayushtkn commented on pull request #2823: URL: https://github.com/apache/hadoop/pull/2823#issuecomment-808860891 What do you mean by duplicate? Is it there in the POM twice (which I couldn't find), or is it also coming in transitively, maybe through hadoop-common? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 573137) Time Spent: 40m (was: 0.5h) > hadoop-annotations is duplicated in hadoop-hdfs > --- > > Key: HDFS-15926 > URL: https://issues.apache.org/jira/browse/HDFS-15926 > Project: Hadoop HDFS > Issue Type: Task >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 40m > Remaining Estimate: 0h > > hadoop-annotations is duplicated dependency in hadoop-hdfs as it is also > declared in parent hadoop-project-dist pom. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org