[GitHub] [hadoop-ozone] mukul1987 commented on issue #298: HDDS-2389. add toStateMachineLogEntryString provider in Ozone's ContainerStateMachine.
mukul1987 commented on issue #298: HDDS-2389. add toStateMachineLogEntryString provider in Ozone's ContainerStateMachine.
URL: https://github.com/apache/hadoop-ozone/pull/298#issuecomment-562825086

Thanks for the review @lokeshj1703. I have addressed the review comments.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
With regards, Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
[GitHub] [hadoop-ozone] mukul1987 commented on a change in pull request #298: HDDS-2389. add toStateMachineLogEntryString provider in Ozone's ContainerStateMachine.
mukul1987 commented on a change in pull request #298: HDDS-2389. add toStateMachineLogEntryString provider in Ozone's ContainerStateMachine.
URL: https://github.com/apache/hadoop-ozone/pull/298#discussion_r355106759

## File path: hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java

## @@ -862,4 +864,48 @@
 public void notifyLeaderChanged(RaftGroupMemberId groupMemberId,
     RaftPeerId raftPeerId) {
   ratisServer.handleLeaderChangedNotification(groupMemberId, raftPeerId);
 }
+
+@Override
+public String toStateMachineLogEntryString(StateMachineLogEntryProto proto) {
+  try {
+    ContainerCommandRequestProto requestProto =
+        getContainerCommandRequestProto(proto.getLogData());
+    return getWriteChunkInfo(requestProto);
+  } catch (Throwable t) {
+    return "";
+  }
+}
+
+private String getWriteChunkInfo(ContainerCommandRequestProto requestProto) {
+  StringBuilder builder = new StringBuilder();
+  Preconditions.checkArgument(requestProto.getCmdType() == Type.WriteChunk);
+
+  long contId = requestProto.getContainerID();
+  WriteChunkRequestProto wc = requestProto.getWriteChunk();
+
+  builder.append("cmd=");
+  builder.append(requestProto.getCmdType().toString());
+
+  builder.append(", container id=");
+  builder.append(contId);
+
+  builder.append(", blockid=");
+  builder.append(contId);

Review comment: Changed it to read the container ID from the blockID, in case it is different. :)
[GitHub] [hadoop-ozone] mukul1987 commented on a change in pull request #298: HDDS-2389. add toStateMachineLogEntryString provider in Ozone's ContainerStateMachine.
mukul1987 commented on a change in pull request #298: HDDS-2389. add toStateMachineLogEntryString provider in Ozone's ContainerStateMachine.
URL: https://github.com/apache/hadoop-ozone/pull/298#discussion_r355106763

## File path: hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java

## @@ -862,4 +864,48 @@
 public void notifyLeaderChanged(RaftGroupMemberId groupMemberId,
     RaftPeerId raftPeerId) {
   ratisServer.handleLeaderChangedNotification(groupMemberId, raftPeerId);
 }
+
+@Override
+public String toStateMachineLogEntryString(StateMachineLogEntryProto proto) {
+  try {
+    ContainerCommandRequestProto requestProto =
+        getContainerCommandRequestProto(proto.getLogData());
+    return getWriteChunkInfo(requestProto);
+  } catch (Throwable t) {
+    return "";
+  }
+}
+
+private String getWriteChunkInfo(ContainerCommandRequestProto requestProto) {
+  StringBuilder builder = new StringBuilder();
+  Preconditions.checkArgument(requestProto.getCmdType() == Type.WriteChunk);
+
+  long contId = requestProto.getContainerID();
+  WriteChunkRequestProto wc = requestProto.getWriteChunk();
+
+  builder.append("cmd=");
+  builder.append(requestProto.getCmdType().toString());
+
+  builder.append(", container id=");
+  builder.append(contId);
+
+  builder.append(", blockid=");
+  builder.append(contId);
+  builder.append(":localid=");
+  builder.append(wc.getBlockID().getLocalID());
+
+  builder.append(", chunk=");
+  builder.append(wc.getChunkData().getChunkName());
+  builder.append(":offset=");
+  builder.append(wc.getChunkData().getOffset());
+  builder.append(":length=");
+  builder.append(wc.getChunkData().getLen());
+
+  Container cont = containerController.getContainer(contId);
+  if (cont != null) {
+    builder.append(", container path=");
+    builder.append(cont.getContainerData().getContainerPath());
+  }
+  return builder.toString();

Review comment: Done.
[GitHub] [hadoop-ozone] mukul1987 commented on a change in pull request #298: HDDS-2389. add toStateMachineLogEntryString provider in Ozone's ContainerStateMachine.
mukul1987 commented on a change in pull request #298: HDDS-2389. add toStateMachineLogEntryString provider in Ozone's ContainerStateMachine.
URL: https://github.com/apache/hadoop-ozone/pull/298#discussion_r355106742

## File path: hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java

## @@ -862,4 +864,48 @@
+private String getWriteChunkInfo(ContainerCommandRequestProto requestProto) {
+  StringBuilder builder = new StringBuilder();
+  Preconditions.checkArgument(requestProto.getCmdType() == Type.WriteChunk);
+
+  long contId = requestProto.getContainerID();
+  WriteChunkRequestProto wc = requestProto.getWriteChunk();
+
+  builder.append("cmd=");
+  builder.append(requestProto.getCmdType().toString());
+
+  builder.append(", container id=");
+  builder.append(contId);
+
+  builder.append(", blockid=");
+  builder.append(contId);

Review comment: done
[GitHub] [hadoop-ozone] mukul1987 commented on a change in pull request #298: HDDS-2389. add toStateMachineLogEntryString provider in Ozone's ContainerStateMachine.
mukul1987 commented on a change in pull request #298: HDDS-2389. add toStateMachineLogEntryString provider in Ozone's ContainerStateMachine.
URL: https://github.com/apache/hadoop-ozone/pull/298#discussion_r355106250

## File path: hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java

## @@ -862,4 +864,48 @@
+@Override
+public String toStateMachineLogEntryString(StateMachineLogEntryProto proto) {
+  try {
+    ContainerCommandRequestProto requestProto =
+        getContainerCommandRequestProto(proto.getLogData());
+    return getWriteChunkInfo(requestProto);
+  } catch (Throwable t) {
+    return "";
+  }
+}
+
+private String getWriteChunkInfo(ContainerCommandRequestProto requestProto) {
+  StringBuilder builder = new StringBuilder();
+  Preconditions.checkArgument(requestProto.getCmdType() == Type.WriteChunk);

Review comment: Done, added a switch case with only one handling for writeChunk.
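The switch-based guard described in the review reply above could look roughly like the sketch below. This is illustrative only, not the actual patch: the Ozone proto types are stubbed with a plain enum, and only the dispatch shape (a switch with a single WriteChunk arm instead of a Preconditions check) mirrors the discussion.

```java
// Self-contained sketch of dispatching on the command type with a switch,
// so non-WriteChunk entries render as a plain name instead of tripping a
// precondition. Type names mirror the quoted diff but are stand-ins.
public class LogEntryStringDemo {

  enum Type { WriteChunk, PutBlock, ReadChunk }

  static String toLogEntryString(Type cmdType, long containerId) {
    switch (cmdType) {
      case WriteChunk:
        // Only WriteChunk gets the detailed rendering.
        return "cmd=" + cmdType + ", container id=" + containerId;
      default:
        return cmdType.toString();
    }
  }

  public static void main(String[] args) {
    System.out.println(toLogEntryString(Type.WriteChunk, 42L));
    System.out.println(toLogEntryString(Type.PutBlock, 42L));
  }
}
```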
[jira] [Commented] (HDDS-2504) Handle InterruptedException properly
[ https://issues.apache.org/jira/browse/HDDS-2504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16990375#comment-16990375 ]

YiSheng Lien commented on HDDS-2504:

Thanks [~dineshchitlangia] for the comment.
Would we log the exception with *LOG.info()* or *LOG.error()*, as [~xyao] mentioned in HDDS-2555?

> Handle InterruptedException properly
> ------------------------------------
>
> Key: HDDS-2504
> URL: https://issues.apache.org/jira/browse/HDDS-2504
> Project: Hadoop Distributed Data Store
> Issue Type: Bug
> Reporter: Attila Doroszlai
> Assignee: Dinesh Chitlangia
> Priority: Major
> Labels: newbie, sonar
>
> {quote}Either re-interrupt or rethrow the {{InterruptedException}}
> {quote}
> in several files (42 issues)
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=false=squid%3AS2142=OPEN=BUG]

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-2682) OM File create request does not check for existing directory with the same name
[ https://issues.apache.org/jira/browse/HDDS-2682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16990333#comment-16990333 ]

Supratim Deka commented on HDDS-2682:

As it stands today, and if we do nothing about it, yes. However, if a bucket (or volume) must provide FS access as well as object access on the same set of data, then the object store service needs to comply with the constraints imposed by the FS service. There won't be any choice there, right? Or am I missing something?
Not getting into how we will deal with the consequences - only trying to understand whether "unified" (object+file) access is even possible without honouring the constraints inside a FS namespace.

> OM File create request does not check for existing directory with the same name
> -------------------------------------------------------------------------------
>
> Key: HDDS-2682
> URL: https://issues.apache.org/jira/browse/HDDS-2682
> Project: Hadoop Distributed Data Store
> Issue Type: Bug
> Components: Ozone Manager
> Reporter: Supratim Deka
> Assignee: Supratim Deka
> Priority: Major
>
> Assume the following sequence of operations/requests:
> Req 1. create file -> /d1/d2/d3/d4/k1 (d3 implicitly is a sub-directory inside /d1/d2)
> Req 2. create file -> /d1/d2/d3 (d3 as a file inside /d1/d2)
> When processing request 2, OMFileCreateRequest needs to check if 'd1/d2/d3' is the name of an existing file or an existing directory. In either case the request has to fail.
> Currently for request 2, OM will check explicitly if there is a key '/d1/d2/d3' in the key table.
> Also for non-recursive create requests, OM will check if parent directory /d1/d2 already exists. For this, the OM iterates the key table to check if 'd1/d2' occurs as a prefix of any key in the key table - checkKeysUnderPath()
> What is missing in current behaviour?
> For OM File create, the table iterator must also determine if '/d1/d2/d3' exists as a prefix for any key in the key table - not just '/d1/d2'.
> This fix is required for the correctness of the OzoneFS namespace. There is a potential performance impact - which is outside the scope of this jira and will be addressed separately.
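The missing check described in the issue above can be sketched in a few lines. This is not OM code: a sorted `TreeMap` stands in for the RocksDB key table, and the method name is invented. The point is that a create of `/d1/d2/d3` must fail both when that exact key exists and when any key starts with `/d1/d2/d3/` (d3 is then an implicit directory), and a sorted table answers the prefix question with a single seek.

```java
import java.util.TreeMap;

// Illustrative sketch of checking a path against a sorted key table:
// conflict if the exact key exists, or if some key has "<path>/" as a
// prefix (the path is an implicit directory).
public class PrefixCheckDemo {

  static boolean conflicts(TreeMap<String, String> keyTable, String path) {
    if (keyTable.containsKey(path)) {
      return true;                         // exact file with this name exists
    }
    String dirPrefix = path + "/";
    String next = keyTable.ceilingKey(dirPrefix);  // first key >= "<path>/"
    return next != null && next.startsWith(dirPrefix); // implicit directory
  }

  public static void main(String[] args) {
    TreeMap<String, String> table = new TreeMap<>();
    table.put("/d1/d2/d3/d4/k1", "data");             // Req 1 from the issue
    System.out.println(conflicts(table, "/d1/d2/d3")); // d3 is a directory
    System.out.println(conflicts(table, "/d1/d2/x"));  // no conflict
  }
}
```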
[jira] [Commented] (HDDS-2665) Implement new Ozone Filesystem scheme ofs://
[ https://issues.apache.org/jira/browse/HDDS-2665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16990301#comment-16990301 ]

Siyao Meng commented on HDDS-2665:

Hi [~ayushtkn], thanks for the comment. Yes, that is indeed one big motivation behind this. The difference is that the federation is at the volume level rather than the cluster level. We worry that users would just conveniently stick to one big bucket with the existing Ozone Client FS (i.e. just set {{fs.defaultFS}} to a fixed bucket and call it a day), which can limit scalability.
For now it will only have volumes (except {{ofs://tmp}}) at root. Maybe we will allow custom mount points in the future, since {{ofs://tmp}} is already one.

> Implement new Ozone Filesystem scheme ofs://
> --------------------------------------------
>
> Key: HDDS-2665
> URL: https://issues.apache.org/jira/browse/HDDS-2665
> Project: Hadoop Distributed Data Store
> Issue Type: New Feature
> Reporter: Siyao Meng
> Assignee: Siyao Meng
> Priority: Major
> Attachments: Design ofs v1.pdf
>
> Implement a new scheme for Ozone Filesystem where all volumes (and buckets) can be accessed from a single root.
[jira] [Assigned] (HDDS-2581) Use Java Configs for OM HA
[ https://issues.apache.org/jira/browse/HDDS-2581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Teoh reassigned HDDS-2581:
Assignee: Chris Teoh

> Use Java Configs for OM HA
> --------------------------
>
> Key: HDDS-2581
> URL: https://issues.apache.org/jira/browse/HDDS-2581
> Project: Hadoop Distributed Data Store
> Issue Type: Bug
> Reporter: Bharat Viswanadham
> Assignee: Chris Teoh
> Priority: Major
> Labels: newbie
>
> This Jira is created based on the comments from [~aengineer] during the HDDS-2536 review.
> Can we please use the Java Configs instead of this old-style config to add a config?
> This Jira is to make all OM HA config use the new style (Java config based approach).
[jira] [Created] (HDDS-2687) Sonar: try-with-resources fix in OzoneManager and ReconUtils
Aravindan Vijayan created HDDS-2687:

Summary: Sonar: try-with-resources fix in OzoneManager and ReconUtils
Key: HDDS-2687
URL: https://issues.apache.org/jira/browse/HDDS-2687
Project: Hadoop Distributed Data Store
Issue Type: Bug
Components: Ozone Recon
Reporter: Aravindan Vijayan
Assignee: Aravindan Vijayan
Fix For: 0.5.0

https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-jQKcVY8lQ4Zr9R=AW5md-jQKcVY8lQ4Zr9R
https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-VMKcVY8lQ4Zrsk=AW5md-VMKcVY8lQ4Zrsk
[GitHub] [hadoop-ozone] hanishakoneru commented on issue #304: HDDS-1993. Merge OzoneManagerRequestHandler and OzoneManagerHARequest…
hanishakoneru commented on issue #304: HDDS-1993. Merge OzoneManagerRequestHandler and OzoneManagerHARequest…
URL: https://github.com/apache/hadoop-ozone/pull/304#issuecomment-562782003

Thanks @bharatviswa504 for working on this. LGTM. +1.
What do you think of having a similar structure for ReadRequests as we have for WriteRequests?
[jira] [Commented] (HDDS-2682) OM File create request does not check for existing directory with the same name
[ https://issues.apache.org/jira/browse/HDDS-2682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16990213#comment-16990213 ]

Anu Engineer commented on HDDS-2682:

But I can still create this via putKey?

> OM File create request does not check for existing directory with the same name
> Key: HDDS-2682
> URL: https://issues.apache.org/jira/browse/HDDS-2682
[jira] [Commented] (HDDS-2601) Fix Broken Link In Website
[ https://issues.apache.org/jira/browse/HDDS-2601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16990212#comment-16990212 ]

Anu Engineer commented on HDDS-2601:

Yes, going directly to Ozone makes sense. We don't need to link to Hadoop at all. Thanks

> Fix Broken Link In Website
> --------------------------
>
> Key: HDDS-2601
> URL: https://issues.apache.org/jira/browse/HDDS-2601
> Project: Hadoop Distributed Data Store
> Issue Type: Bug
> Reporter: Ayush Saxena
> Assignee: Sandeep Nemuri
> Priority: Minor
>
> In the FAQ page: https://hadoop.apache.org/ozone/faq/
> The last line points to how to contribute. That link seems broken.
> As of now it is: https://wiki.apache.org/hadoop/HowToContribute
> It should be: https://cwiki.apache.org/confluence/display/hadoop/How+To+Contribute
[GitHub] [hadoop-ozone] hanishakoneru merged pull request #319: HDDS-1991. Remove RatisClient in OM HA.
hanishakoneru merged pull request #319: HDDS-1991. Remove RatisClient in OM HA.
URL: https://github.com/apache/hadoop-ozone/pull/319
[GitHub] [hadoop-ozone] hanishakoneru commented on issue #319: HDDS-1991. Remove RatisClient in OM HA.
hanishakoneru commented on issue #319: HDDS-1991. Remove RatisClient in OM HA.
URL: https://github.com/apache/hadoop-ozone/pull/319#issuecomment-562769538

Thanks for taking care of this @bharatviswa504. LGTM. +1.
CI run is green. I will merge this PR.
[GitHub] [hadoop-ozone] swagle commented on issue #313: HDDS-2242. Avoid unnecessary rpc needed to discover the pipeline leader.
swagle commented on issue #313: HDDS-2242. Avoid unnecessary rpc needed to discover the pipeline leader.
URL: https://github.com/apache/hadoop-ozone/pull/313#issuecomment-562755091

Acceptance test error is unrelated:
[ERROR] Plugin org.jacoco:jacoco-maven-plugin:0.8.3 or one of its dependencies could not be resolved
[GitHub] [hadoop-ozone] adoroszlai commented on issue #322: HDDS-2684. Refactor common test utilities to hadoop-hdds/common
adoroszlai commented on issue #322: HDDS-2684. Refactor common test utilities to hadoop-hdds/common
URL: https://github.com/apache/hadoop-ozone/pull/322#issuecomment-562728561

Thanks @xiaoyuyao for reviewing this. I have updated the change with your suggestions.
[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #322: HDDS-2684. Refactor common test utilities to hadoop-hdds/common
adoroszlai commented on a change in pull request #322: HDDS-2684. Refactor common test utilities to hadoop-hdds/common
URL: https://github.com/apache/hadoop-ozone/pull/322#discussion_r355014301

## File path: hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestSCMNodeManager.java

## @@ -1113,7 +1115,7 @@ public void testScmRegisterNodeWith4LayerNetworkTopology()
 try (SCMNodeManager nodeManager = createNodeManager(conf)) {
   DatanodeDetails[] nodes = new DatanodeDetails[nodeCount];
   for (int i = 0; i < nodeCount; i++) {
-    DatanodeDetails node = TestUtils.createDatanodeDetails(
+    DatanodeDetails node = MockDatanodeDetails.createDatanodeDetails(

Review comment: The static import was previously only for another method, but added this one as well.
[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #322: HDDS-2684. Refactor common test utilities to hadoop-hdds/common
adoroszlai commented on a change in pull request #322: HDDS-2684. Refactor common test utilities to hadoop-hdds/common
URL: https://github.com/apache/hadoop-ozone/pull/322#discussion_r355014019

## File path: hadoop-hdds/pom.xml

## @@ -186,6 +193,13 @@
         <version>${junit.jupiter.version}</version>
         <scope>test</scope>
       </dependency>
+      <dependency>
+        <groupId>org.mockito</groupId>
+        <artifactId>mockito-core</artifactId>
+        <version>2.2.0</version>

Review comment: I took this one from `container-service`. It turns out we have quite a few different version numbers. So I defined a property for the latest 2.x in root `pom.xml` in 0ebb158fcc793651aa19b3a43bc906fcaa43ae59. I left Mockito 1.x usage and versions as is, because it's not API-compatible with 2.x.
[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #322: HDDS-2684. Refactor common test utilities to hadoop-hdds/common
adoroszlai commented on a change in pull request #322: HDDS-2684. Refactor common test utilities to hadoop-hdds/common
URL: https://github.com/apache/hadoop-ozone/pull/322#discussion_r355013256

## File path: hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/conf/SimpleConfiguration.java

## @@ -22,7 +22,7 @@
 /**
  * Example configuration to test the configuration injection.
  */
-@ConfigGroup(prefix = "ozone.scm.client")
+@ConfigGroup(prefix = "fake.scm.client")

Review comment: Good idea, thanks.
[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #322: HDDS-2684. Refactor common test utilities to hadoop-hdds/common
adoroszlai commented on a change in pull request #322: HDDS-2684. Refactor common test utilities to hadoop-hdds/common
URL: https://github.com/apache/hadoop-ozone/pull/322#discussion_r355013206

## File path: hadoop-hdds/container-service/pom.xml

## @@ -33,6 +33,11 @@
     <dependency>
       <groupId>org.apache.hadoop</groupId>
       <artifactId>hadoop-hdds-common</artifactId>
     </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-hdds-common</artifactId>
+      <type>test-jar</type>

Review comment: Yes, AFAIK that makes the test code from `hadoop-hdds-common` available to the other project.
[GitHub] [hadoop-ozone] dineshchitlangia commented on a change in pull request #314: HDDS-2555. Handle InterruptedException in XceiverClientGrpc
dineshchitlangia commented on a change in pull request #314: HDDS-2555. Handle InterruptedException in XceiverClientGrpc
URL: https://github.com/apache/hadoop-ozone/pull/314#discussion_r355010203

## File path: hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java

## @@ -231,8 +231,11 @@ public ContainerCommandResponseProto sendCommand(
 try {
   return sendCommandWithTraceIDAndRetry(request, null).
       getResponse().get();
-} catch (ExecutionException | InterruptedException e) {
+} catch (ExecutionException e) {
   throw new IOException("Failed to execute command " + request, e);
+} catch (InterruptedException e) {
+  Thread.currentThread().interrupt();

Review comment: Would you prefer something like: `LOG.error("Execution was interrupted", e)`?
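The pattern in the diff above - splitting the multi-catch so that InterruptedException restores the thread's interrupt flag before the failure is surfaced - can be shown in a self-contained form. Class, method, and message names below are generic stand-ins, not the actual XceiverClientGrpc code.

```java
import java.io.IOException;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

// Minimal illustration of handling InterruptedException properly when
// blocking on a future: re-set the interrupt flag so callers up the stack
// can still observe it, then translate to the method's declared exception.
public class InterruptDemo {

  static String send(CompletableFuture<String> reply) throws IOException {
    try {
      return reply.get();
    } catch (ExecutionException e) {
      throw new IOException("Failed to execute command", e);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();  // restore the interrupt status
      throw new IOException("Interrupted while waiting for reply", e);
    }
  }

  public static void main(String[] args) throws IOException {
    System.out.println(send(CompletableFuture.completedFuture("ok")));
  }
}
```

Whether to also log at error level (the question in the review comment) is orthogonal: the essential part flagged by Sonar rule S2142 is that the interrupt status is not silently swallowed.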
[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #322: HDDS-2684. Refactor common test utilities to hadoop-hdds/common
xiaoyuyao commented on a change in pull request #322: HDDS-2684. Refactor common test utilities to hadoop-hdds/common
URL: https://github.com/apache/hadoop-ozone/pull/322#discussion_r354992270

## File path: hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestSCMNodeManager.java

## @@ -1113,7 +1115,7 @@ public void testScmRegisterNodeWith4LayerNetworkTopology()
 try (SCMNodeManager nodeManager = createNodeManager(conf)) {
   DatanodeDetails[] nodes = new DatanodeDetails[nodeCount];
   for (int i = 0; i < nodeCount; i++) {
-    DatanodeDetails node = TestUtils.createDatanodeDetails(
+    DatanodeDetails node = MockDatanodeDetails.createDatanodeDetails(

Review comment: NIT: the MockDatanodeDetails qualifier can be removed once the static import is added.
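The reviewer's NIT is about Java static imports: once a method is statically imported, the class qualifier at the call site becomes redundant. A tiny generic example (using `Math.max` rather than the Ozone test utilities):

```java
import static java.lang.Math.max;

// With the static import above, the call site drops the class qualifier -
// the same cleanup the review suggests for MockDatanodeDetails.
public class StaticImportDemo {

  static int larger(int a, int b) {
    return max(a, b);  // instead of Math.max(a, b)
  }

  public static void main(String[] args) {
    System.out.println(larger(3, 7));
  }
}
```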
[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #322: HDDS-2684. Refactor common test utilities to hadoop-hdds/common
xiaoyuyao commented on a change in pull request #322: HDDS-2684. Refactor common test utilities to hadoop-hdds/common
URL: https://github.com/apache/hadoop-ozone/pull/322#discussion_r354991408

## File path: hadoop-hdds/pom.xml

## @@ -186,6 +193,13 @@
         <version>${junit.jupiter.version}</version>
         <scope>test</scope>
       </dependency>
+      <dependency>
+        <groupId>org.mockito</groupId>
+        <artifactId>mockito-core</artifactId>
+        <version>2.2.0</version>

Review comment: NIT: can we replace the hard-coded package version?
[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #322: HDDS-2684. Refactor common test utilities to hadoop-hdds/common
xiaoyuyao commented on a change in pull request #322: HDDS-2684. Refactor common test utilities to hadoop-hdds/common
URL: https://github.com/apache/hadoop-ozone/pull/322#discussion_r354991142

## File path: hadoop-hdds/container-service/pom.xml

## @@ -33,6 +33,11 @@
     <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-hdds-common</artifactId>
    </dependency>
+   <dependency>
+     <groupId>org.apache.hadoop</groupId>
+     <artifactId>hadoop-hdds-common</artifactId>
+     <type>test-jar</type>

Review comment: Do we need hadoop-hdds-common as a test-jar type for container-service?
[jira] [Resolved] (HDDS-2386) Implement incremental ChunkBuffer
[ https://issues.apache.org/jira/browse/HDDS-2386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz-wo Sze resolved HDDS-2386. -- Fix Version/s: 0.5.0 Resolution: Fixed Thanks [~xyao] for reviewing and merging the pull request. Resolving this. > Implement incremental ChunkBuffer > - > > Key: HDDS-2386 > URL: https://issues.apache.org/jira/browse/HDDS-2386 > Project: Hadoop Distributed Data Store > Issue Type: Improvement > Components: Ozone Client >Reporter: Tsz-wo Sze >Assignee: Tsz-wo Sze >Priority: Major > Labels: pull-request-available > Fix For: 0.5.0 > > Attachments: o2386_20191030.patch, o2386_20191031b.patch > > Time Spent: 20m > Remaining Estimate: 0h > > HDDS-2375 introduces a ChunkBuffer for flexible buffering. In this JIRA, we > implement ChunkBuffer with an incremental buffering so that the memory spaces > are allocated incrementally. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
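The incremental buffering idea behind HDDS-2386 can be sketched in a few lines: instead of allocating the full chunk size up front, fixed-size buffers are allocated lazily as data arrives. This is an illustration of the technique only, with hypothetical names, not the actual ChunkBuffer code.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Sketch of incremental buffering: memory is allocated in fixed-size
// increments only when data actually arrives, instead of reserving the
// whole chunk up front. Names are illustrative, not the Ozone API.
class IncrementalChunkBuffer {
  private final int increment;                       // size of each slice
  private final List<ByteBuffer> buffers = new ArrayList<>();

  IncrementalChunkBuffer(int increment) {
    this.increment = increment;
  }

  void put(byte[] data) {
    int offset = 0;
    while (offset < data.length) {
      ByteBuffer current = buffers.isEmpty()
          ? null : buffers.get(buffers.size() - 1);
      if (current == null || !current.hasRemaining()) {
        current = ByteBuffer.allocate(increment);    // allocate lazily
        buffers.add(current);
      }
      int n = Math.min(current.remaining(), data.length - offset);
      current.put(data, offset, n);
      offset += n;
    }
  }

  // Total memory reserved so far; grows in steps of `increment`.
  int allocatedBytes() {
    return buffers.size() * increment;
  }
}
```

Writing 10 bytes into a buffer with a 4-byte increment allocates three slices (12 bytes), not the full chunk capacity.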
[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #322: HDDS-2684. Refactor common test utilities to hadoop-hdds/common
xiaoyuyao commented on a change in pull request #322: HDDS-2684. Refactor common test utilities to hadoop-hdds/common URL: https://github.com/apache/hadoop-ozone/pull/322#discussion_r354986404 ## File path: hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/conf/SimpleConfiguration.java ## @@ -22,7 +22,7 @@ /** * Example configuration to test the configuration injection. */ -@ConfigGroup(prefix = "ozone.scm.client") +@ConfigGroup(prefix = "fake.scm.client") Review comment: can we name the prefix like test.scm.client?
[GitHub] [hadoop-ozone] supratimdeka commented on a change in pull request #317: HDDS-2668. Sonar : fix issues reported in BlockManagerImpl
supratimdeka commented on a change in pull request #317: HDDS-2668. Sonar : fix issues reported in BlockManagerImpl URL: https://github.com/apache/hadoop-ozone/pull/317#discussion_r354957352 ## File path: hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/BlockManagerImpl.java ## @@ -162,8 +164,8 @@ public BlockData getBlock(Container container, BlockID blockID) } byte[] kData = db.getStore().get(Longs.toByteArray(blockID.getLocalID())); if (kData == null) { -throw new StorageContainerException("Unable to find the block." + -blockID, NO_SUCH_BLOCK); +throw new StorageContainerException(NO_SUCH_BLOCK_ERR_MSG + blockID, +NO_SUCH_BLOCK); } Review comment: Done
[GitHub] [hadoop-ozone] anuengineer commented on a change in pull request #323: HDDS-2686. Use protobuf 3 instead of protobuf 2
anuengineer commented on a change in pull request #323: HDDS-2686. Use protobuf 3 instead of protobuf 2 URL: https://github.com/apache/hadoop-ozone/pull/323#discussion_r354949057 ## File path: hadoop-hdds/common/src/main/proto/SCMSecurityProtocol.proto ## @@ -21,6 +21,7 @@ * Please see http://wiki.apache.org/hadoop/Compatibility * for what changes are allowed for a *unstable* .proto interface. */ +syntax = "proto2"; Review comment: Compiling only. So no, we cannot move to proto3 in this patch. That would require some serious testing and work. But it is an amazing goal to have. I will probably try to change the DN protocol first, since it is on gRPC and independent of Hadoop RPC libraries. Eventually we will get there, but baby steps.
[GitHub] [hadoop-ozone] mukul1987 commented on a change in pull request #323: HDDS-2686. Use protobuf 3 instead of protobuf 2
mukul1987 commented on a change in pull request #323: HDDS-2686. Use protobuf 3 instead of protobuf 2 URL: https://github.com/apache/hadoop-ozone/pull/323#discussion_r354945410 ## File path: pom.xml ## @@ -149,7 +149,7 @@ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xs -2.5.0 +3.10.0 ${env.HADOOP_PROTOC_PATH} Review comment: The protoc path will not really be required after this change, right? Can it be removed?
[GitHub] [hadoop-ozone] mukul1987 commented on a change in pull request #323: HDDS-2686. Use protobuf 3 instead of protobuf 2
mukul1987 commented on a change in pull request #323: HDDS-2686. Use protobuf 3 instead of protobuf 2 URL: https://github.com/apache/hadoop-ozone/pull/323#discussion_r354946504 ## File path: hadoop-hdds/pom.xml ## @@ -229,6 +229,16 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd"> picocli 3.9.6 + + io.grpc + grpc-stub Review comment: What is the requirement for this grpc dependency? We already have a grpc dependency for the datanode client proto files.
[GitHub] [hadoop-ozone] mukul1987 commented on a change in pull request #323: HDDS-2686. Use protobuf 3 instead of protobuf 2
mukul1987 commented on a change in pull request #323: HDDS-2686. Use protobuf 3 instead of protobuf 2 URL: https://github.com/apache/hadoop-ozone/pull/323#discussion_r354947213 ## File path: pom.xml ## @@ -149,7 +149,7 @@ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xs -2.5.0 +3.10.0 Review comment: Also, can this be the same as the "3.5.0"?
[GitHub] [hadoop-ozone] mukul1987 commented on a change in pull request #323: HDDS-2686. Use protobuf 3 instead of protobuf 2
mukul1987 commented on a change in pull request #323: HDDS-2686. Use protobuf 3 instead of protobuf 2 URL: https://github.com/apache/hadoop-ozone/pull/323#discussion_r354947053 ## File path: pom.xml ## @@ -149,7 +149,7 @@ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xs -2.5.0 +3.10.0 Review comment: Can we rename this to something like hadoop-protobuf-compile version?
[GitHub] [hadoop-ozone] mukul1987 commented on a change in pull request #323: HDDS-2686. Use protobuf 3 instead of protobuf 2
mukul1987 commented on a change in pull request #323: HDDS-2686. Use protobuf 3 instead of protobuf 2 URL: https://github.com/apache/hadoop-ozone/pull/323#discussion_r354945926 ## File path: hadoop-hdds/common/src/main/proto/SCMSecurityProtocol.proto ## @@ -21,6 +21,7 @@ * Please see http://wiki.apache.org/hadoop/Compatibility * for what changes are allowed for a *unstable* .proto interface. */ +syntax = "proto2"; Review comment: I didn't understand it fully: if we are compiling the files using proto3, can we also move the format to proto3 as well?
[jira] [Assigned] (HDDS-2686) Use protobuf 3 instead of protobuf 2
[ https://issues.apache.org/jira/browse/HDDS-2686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marton Elek reassigned HDDS-2686: - Assignee: Marton Elek > Use protobuf 3 instead of protobuf 2 > > > Key: HDDS-2686 > URL: https://issues.apache.org/jira/browse/HDDS-2686 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Marton Elek >Assignee: Marton Elek >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > Protobuf2 is 4.5 years old, Hadoop trunk already upgraded to use 3.x protobuf. > > Would be great to use recent protobuf version which can also provide > performance benefit and using new features.
[jira] [Updated] (HDDS-2686) Use protobuf 3 instead of protobuf 2
[ https://issues.apache.org/jira/browse/HDDS-2686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HDDS-2686: - Labels: pull-request-available (was: ) > Use protobuf 3 instead of protobuf 2 > > > Key: HDDS-2686 > URL: https://issues.apache.org/jira/browse/HDDS-2686 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Marton Elek >Priority: Major > Labels: pull-request-available > > Protobuf2 is 4.5 years old, Hadoop trunk already upgraded to use 3.x protobuf. > > Would be great to use recent protobuf version which can also provide > performance benefit and using new features.
[GitHub] [hadoop-ozone] elek opened a new pull request #323: HDDS-2686. Use protobuf 3 instead of protobuf 2
elek opened a new pull request #323: HDDS-2686. Use protobuf 3 instead of protobuf 2 URL: https://github.com/apache/hadoop-ozone/pull/323 ## What changes were proposed in this pull request? Protobuf2 is 4.5 years old, Hadoop trunk already upgraded to use 3.x protobuf. Would be great to use a recent protobuf version, which can also provide a performance benefit and new features. ## What is the link to the Apache JIRA https://issues.apache.org/jira/browse/HDDS-2686 ## How was this patch tested? Ran a few acceptance tests locally (secure/unsecure) and they passed. Cluster worked well. Waiting for a full acceptance run.
[jira] [Created] (HDDS-2686) Use protobuf 3 instead of protobuf 2
Marton Elek created HDDS-2686: - Summary: Use protobuf 3 instead of protobuf 2 Key: HDDS-2686 URL: https://issues.apache.org/jira/browse/HDDS-2686 Project: Hadoop Distributed Data Store Issue Type: Improvement Reporter: Marton Elek Protobuf2 is 4.5 years old, Hadoop trunk already upgraded to use 3.x protobuf. Would be great to use recent protobuf version which can also provide performance benefit and using new features.
[jira] [Updated] (HDDS-2684) Refactor common test utilities to hadoop-hdds/common
[ https://issues.apache.org/jira/browse/HDDS-2684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Attila Doroszlai updated HDDS-2684: --- Status: Patch Available (was: In Progress) > Refactor common test utilities to hadoop-hdds/common > > > Key: HDDS-2684 > URL: https://issues.apache.org/jira/browse/HDDS-2684 > Project: Hadoop Distributed Data Store > Issue Type: Improvement > Components: test >Reporter: Attila Doroszlai >Assignee: Attila Doroszlai >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > Expose test code from {{hadoop-hdds/common}} to other modules. Move some > "common" test utilities. Example: random {{DatanodeDetails}} creation.
[jira] [Commented] (HDDS-2665) Implement new Ozone Filesystem scheme ofs://
[ https://issues.apache.org/jira/browse/HDDS-2665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16989830#comment-16989830 ] Ayush Saxena commented on HDDS-2665: Thanx [~smeng] for sharing the initiative and the design. Is the idea somewhat like making ofs:// a federation layer? Reading the design doc gave me that feeling: as ofs:// is a federation layer with mount points, the restrictions too felt similar; apart from the admin, nobody can create at the root (since that would be a mount entry in federation). If so, you may take references from viewfs or RBF for the logic. > Implement new Ozone Filesystem scheme ofs:// > > > Key: HDDS-2665 > URL: https://issues.apache.org/jira/browse/HDDS-2665 > Project: Hadoop Distributed Data Store > Issue Type: New Feature >Reporter: Siyao Meng >Assignee: Siyao Meng >Priority: Major > Attachments: Design ofs v1.pdf > > > Implement a new scheme for Ozone Filesystem where all volumes (and buckets) > can be access from a single root.
[jira] [Comment Edited] (HDDS-2665) Implement new Ozone Filesystem scheme ofs://
[ https://issues.apache.org/jira/browse/HDDS-2665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16989830#comment-16989830 ] Ayush Saxena edited comment on HDDS-2665 at 12/6/19 2:41 PM: - Thanx [~smeng] for the initiative and the design. Is the idea somewhat like making ofs:// as a federation layer? Reading the design doc, it gave me a feeling like that as ofs:// is federation layer with mount points, the restrictions too felt similar, apart from admin, no body can create at root,(Since that would be a mount entry in federation). if so, you may take refrences out from viewfs or rbf for the logics. was (Author: ayushtkn): Thanx [~smeng] for sharing the initiative and the design. Is the idea somewhat like making ofs:// as a federation layer? Reading the design doc, it gave me a feeling like that as ofs:// is federation layer with mount points, the restrictions too felt similar, apart from admin, no body can create at root,(Since that would be a mount entry in federation). if so, you may take refrences out from viewfs or rbf for the logics. > Implement new Ozone Filesystem scheme ofs:// > > > Key: HDDS-2665 > URL: https://issues.apache.org/jira/browse/HDDS-2665 > Project: Hadoop Distributed Data Store > Issue Type: New Feature >Reporter: Siyao Meng >Assignee: Siyao Meng >Priority: Major > Attachments: Design ofs v1.pdf > > > Implement a new scheme for Ozone Filesystem where all volumes (and buckets) > can be access from a single root.
[jira] [Resolved] (HDDS-2681) Add leak detection memory flags to MiniOzoneChaosCluster
[ https://issues.apache.org/jira/browse/HDDS-2681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marton Elek resolved HDDS-2681. --- Fix Version/s: 0.5.0 Resolution: Fixed > Add leak detection memory flags to MiniOzoneChaosCluster > > > Key: HDDS-2681 > URL: https://issues.apache.org/jira/browse/HDDS-2681 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: chaos, test >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Labels: pull-request-available > Fix For: 0.5.0 > > Time Spent: 20m > Remaining Estimate: 0h > > This jira proposes to add some of netty and native memory tracking flags. > -Dio.netty.leakDetection.level=advanced and -XX:NativeMemoryTracking=detail > to help debug some of the native allocations
[GitHub] [hadoop-ozone] elek closed pull request #318: HDDS-2681. Add leak detection memory flags to MiniOzoneChaosCluster.
elek closed pull request #318: HDDS-2681. Add leak detection memory flags to MiniOzoneChaosCluster. URL: https://github.com/apache/hadoop-ozone/pull/318
[jira] [Resolved] (HDDS-2662) Update gRPC and datanode protobuf version in Ozone
[ https://issues.apache.org/jira/browse/HDDS-2662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marton Elek resolved HDDS-2662. --- Fix Version/s: 0.5.0 Resolution: Fixed > Update gRPC and datanode protobuf version in Ozone > -- > > Key: HDDS-2662 > URL: https://issues.apache.org/jira/browse/HDDS-2662 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Labels: pull-request-available > Fix For: 0.5.0 > > Time Spent: 20m > Remaining Estimate: 0h > > This jira is in continuation of RATIS-752. With Ozone updated to latest ratis > snapshot, the protobuf and grpc compiler version can be updated as well.
[GitHub] [hadoop-ozone] adoroszlai opened a new pull request #322: HDDS-2684. Refactor common test utilities to hadoop-hdds/common
adoroszlai opened a new pull request #322: HDDS-2684. Refactor common test utilities to hadoop-hdds/common URL: https://github.com/apache/hadoop-ozone/pull/322 ## What changes were proposed in this pull request? Expose test code from `hadoop-hdds/common` to other modules. Move some "common" test utilities (random `DatanodeDetails` and `Pipeline` creation). The goal is to be able to write unit tests for `client` module without code duplication. This was partly extracted from #271: * Avoid using real config properties (eg. `ozone.scm.client.bind.host`) for testing config file generation. The generated config file is picked up by other tests and causes failures. * Rename Log4J2 config file used by audit logger test to avoid creating an untracked `audit.log` (similar to [HDDS-2063](https://issues.apache.org/jira/browse/HDDS-2063)). This would happen if some test starts components which use audit logger, and it picks up `log4j2.properties` by default. https://issues.apache.org/jira/browse/HDDS-2684 ## How was this patch tested? https://github.com/adoroszlai/hadoop-ozone/runs/336769633
[GitHub] [hadoop-ozone] ayushtkn commented on issue #321: HDDS-2685. Fix Rename API in BasicOzoneFileSystem
ayushtkn commented on issue #321: HDDS-2685. Fix Rename API in BasicOzoneFileSystem URL: https://github.com/apache/hadoop-ozone/pull/321#issuecomment-562588917 Thanx @adoroszlai Have Fixed CheckStyle and Updated the description.
[jira] [Updated] (HDDS-2685) Fix Rename API in BasicOzoneFileSystem
[ https://issues.apache.org/jira/browse/HDDS-2685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena updated HDDS-2685: --- Description: In the Rename API : 1. This doesn't work if one of the path contains URI and other doesn't. {code:java} if (src.equals(dst)) { return true; } {code} 2. This check is suppose to be done only for directories, but is done for Files too, it can be moved after getting the FileStatus and checking the type. {code:java} // Cannot rename a directory to its own subdirectory Path dstParent = dst.getParent(); while (dstParent != null && !src.equals(dstParent)) { dstParent = dstParent.getParent(); } Preconditions.checkArgument(dstParent == null, "Cannot rename a directory to its own subdirectory"); {code} 3. This too doesn't work (similar to 1.) {code:java} if (srcStatus.isDirectory()) { if (dst.toString().startsWith(src.toString() + OZONE_URI_DELIMITER)) { LOG.trace("Cannot rename a directory to a subdirectory of self"); return false; } {code} 4. Rename is even success if the URI provided is of different FileSystem. In general HDFS/Other FS shall throw IllegalArgumentException if the path doesn't belong to the same FS. was: In the Rename API : 1. This doesn't work if one of the path contains URI and other doesn't. {code:java} if (src.equals(dst)) { return true; } {code} 2. This check is suppose to be done only for directories, but is done for Files too, it can be moved after getting the FileStatus and checking the type. {code:java} // Some comments here public String getFoo() { return foo; } {code} 3. This too doesn't work (similar to 1.) {code:java} if (srcStatus.isDirectory()) { if (dst.toString().startsWith(src.toString() + OZONE_URI_DELIMITER)) { LOG.trace("Cannot rename a directory to a subdirectory of self"); return false; } {code} 4. Rename is even success if the URI provided is of different FileSystem. In general HDFS/Other FS shall throw IllegalArgumentException if the path doesn't belong to the same FS. 
> Fix Rename API in BasicOzoneFileSystem > -- > > Key: HDDS-2685 > URL: https://issues.apache.org/jira/browse/HDDS-2685 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > In the Rename API : > 1. This doesn't work if one of the path contains URI and other doesn't. > {code:java} > if (src.equals(dst)) { > return true; > } > {code} > 2. This check is suppose to be done only for directories, but is done for > Files too, it can be moved after getting the FileStatus and checking the > type. > {code:java} > // Cannot rename a directory to its own subdirectory > Path dstParent = dst.getParent(); > while (dstParent != null && !src.equals(dstParent)) { > dstParent = dstParent.getParent(); > } > Preconditions.checkArgument(dstParent == null, > "Cannot rename a directory to its own subdirectory"); > {code} > 3. This too doesn't work (similar to 1.) > {code:java} > if (srcStatus.isDirectory()) { > if (dst.toString().startsWith(src.toString() + OZONE_URI_DELIMITER)) { > LOG.trace("Cannot rename a directory to a subdirectory of self"); > return false; > } > {code} > 4. Rename is even success if the URI provided is of different FileSystem. > In general HDFS/Other FS shall throw IllegalArgumentException if the path > doesn't belong to the same FS.
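Point 1 above (a URI-style path and a plain path comparing unequal even when they name the same file) can be addressed by qualifying both paths against the filesystem's own URI before comparing. A minimal sketch with `java.net.URI`, using hypothetical names rather than the actual BasicOzoneFileSystem code:

```java
import java.net.URI;

// Sketch of the URI-mismatch fix: resolve both src and dst against the
// filesystem's URI so "o3fs://bucket.vol/a" and "/a" compare equal.
// Class and method names here are illustrative, not the Ozone API.
class RenamePathCheck {
  private final URI fsUri;

  RenamePathCheck(URI fsUri) {
    this.fsUri = fsUri;
  }

  // Resolve a possibly scheme-less path against the filesystem URI.
  String qualify(String path) {
    return fsUri.resolve(path).toString();
  }

  // The src.equals(dst) shortcut should compare qualified forms.
  boolean sameAfterQualifying(String src, String dst) {
    return qualify(src).equals(qualify(dst));
  }
}
```

With `fsUri = o3fs://bucket.vol/`, the pair (`o3fs://bucket.vol/a`, `/a`) now compares equal, while genuinely different paths still differ.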
[jira] [Updated] (HDDS-2685) Fix Rename API in BasicOzoneFileSystem
[ https://issues.apache.org/jira/browse/HDDS-2685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena updated HDDS-2685: --- Status: Patch Available (was: Open) > Fix Rename API in BasicOzoneFileSystem > -- > > Key: HDDS-2685 > URL: https://issues.apache.org/jira/browse/HDDS-2685 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > In the Rename API : > 1. This doesn't work if one of the path contains URI and other doesn't. > {code:java} > if (src.equals(dst)) { > return true; > } > {code} > 2. This check is suppose to be done only for directories, but is done for > Files too, it can be moved after getting the FileStatus and checking the > type. > {code:java} > // Some comments here > public String getFoo() > { > return foo; > } > {code} > 3. This too doesn't work (similar to 1.) > {code:java} > if (srcStatus.isDirectory()) { > if (dst.toString().startsWith(src.toString() + OZONE_URI_DELIMITER)) { > LOG.trace("Cannot rename a directory to a subdirectory of self"); > return false; > } > {code} > 4. Rename is even success if the URI provided is of different FileSystem. > In general HDFS/Other FS shall throw IllegalArgumentException if the path > doesn't belong to the same FS.
[GitHub] [hadoop-ozone] ayushtkn opened a new pull request #321: HDDS-2685. Fix Rename API in BasicOzoneFileSystem
ayushtkn opened a new pull request #321: HDDS-2685. Fix Rename API in BasicOzoneFileSystem URL: https://github.com/apache/hadoop-ozone/pull/321 ## What changes were proposed in this pull request? In the Rename API : 1. This doesn't work if one of the paths contains a URI and the other doesn't. if (src.equals(dst)) { return true; } 2. This check is supposed to be done only for directories, but is done for files too; it can be moved after getting the FileStatus and checking the type. // Some comments here public String getFoo() { return foo; } 3. This too doesn't work (similar to 1.) if (srcStatus.isDirectory()) { if (dst.toString().startsWith(src.toString() + OZONE_URI_DELIMITER)) { LOG.trace("Cannot rename a directory to a subdirectory of self"); return false; } 4. Rename even succeeds if the URI provided is of a different FileSystem. In general HDFS/other FSes throw IllegalArgumentException if the path doesn't belong to the same FS. ## What is the link to the Apache JIRA https://issues.apache.org/jira/browse/HDDS-2685 ## How was this patch tested? Added UT.
[jira] [Updated] (HDDS-2685) Fix Rename API in BasicOzoneFileSystem
[ https://issues.apache.org/jira/browse/HDDS-2685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HDDS-2685: - Labels: pull-request-available (was: ) > Fix Rename API in BasicOzoneFileSystem > -- > > Key: HDDS-2685 > URL: https://issues.apache.org/jira/browse/HDDS-2685 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Labels: pull-request-available > > In the Rename API : > 1. This doesn't work if one of the path contains URI and other doesn't. > {code:java} > if (src.equals(dst)) { > return true; > } > {code} > 2. This check is suppose to be done only for directories, but is done for > Files too, it can be moved after getting the FileStatus and checking the > type. > {code:java} > // Some comments here > public String getFoo() > { > return foo; > } > {code} > 3. This too doesn't work (similar to 1.) > {code:java} > if (srcStatus.isDirectory()) { > if (dst.toString().startsWith(src.toString() + OZONE_URI_DELIMITER)) { > LOG.trace("Cannot rename a directory to a subdirectory of self"); > return false; > } > {code} > 4. Rename is even success if the URI provided is of different FileSystem. > In general HDFS/Other FS shall throw IllegalArgumentException if the path > doesn't belong to the same FS.
[jira] [Moved] (HDDS-2685) Fix Rename API in BasicOzoneFileSystem
[ https://issues.apache.org/jira/browse/HDDS-2685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena moved HDFS-15035 to HDDS-2685: --- Key: HDDS-2685 (was: HDFS-15035) Workflow: patch-available, re-open possible (was: no-reopen-closed, patch-avail) Project: Hadoop Distributed Data Store (was: Hadoop HDFS) > Fix Rename API in BasicOzoneFileSystem > -- > > Key: HDDS-2685 > URL: https://issues.apache.org/jira/browse/HDDS-2685 > Project: Hadoop Distributed Data Store > Issue Type: Bug >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > > In the Rename API : > 1. This doesn't work if one of the path contains URI and other doesn't. > {code:java} > if (src.equals(dst)) { > return true; > } > {code} > 2. This check is suppose to be done only for directories, but is done for > Files too, it can be moved after getting the FileStatus and checking the > type. > {code:java} > // Some comments here > public String getFoo() > { > return foo; > } > {code} > 3. This too doesn't work (similar to 1.) > {code:java} > if (srcStatus.isDirectory()) { > if (dst.toString().startsWith(src.toString() + OZONE_URI_DELIMITER)) { > LOG.trace("Cannot rename a directory to a subdirectory of self"); > return false; > } > {code} > 4. Rename is even success if the URI provided is of different FileSystem. > In general HDFS/Other FS shall throw IllegalArgumentException if the path > doesn't belong to the same FS.
[jira] [Updated] (HDDS-1812) Du while calculating used disk space reports that chunk files are file not found
[ https://issues.apache.org/jira/browse/HDDS-1812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Attila Doroszlai updated HDDS-1812: Status: In Progress (was: Patch Available)
> Du while calculating used disk space reports that chunk files are file not found
> Key: HDDS-1812
> URL: https://issues.apache.org/jira/browse/HDDS-1812
> Project: Hadoop Distributed Data Store
> Issue Type: Bug
> Components: Ozone Datanode
> Affects Versions: 0.4.0
> Reporter: Mukul Kumar Singh
> Assignee: Attila Doroszlai
> Priority: Critical
> Labels: pull-request-available
> Time Spent: 10m
> Remaining Estimate: 0h
>
> {code}
> 2019-07-16 08:16:49,787 WARN org.apache.hadoop.fs.CachingGetSpaceUsed: Could not get disk usage information for path /data/3/ozone-0715
> ExitCodeException exitCode=1: du: cannot access '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/b113dd390e68e914d3ff405f3deec564_stream_60448f77-6349-48fa-ae86-b2d311730569_chunk_1.tmp.1.14118085': No such file or directory
> du: cannot access '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/37993af2849bdd0320d0f9d4a6ef4b92_stream_1f68be9f-e083-45e5-84a9-08809bc392ed_chunk_1.tmp.1.14118091': No such file or directory
> du: cannot access '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/a38677def61389ec0be9105b1b4fddff_stream_9c3c3741-f710-4482-8423-7ac6695be96b_chunk_1.tmp.1.14118102': No such file or directory
> du: cannot access '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/a689c89f71a75547471baf6182f3be01_stream_baf0f21d-2fb0-4cd8-84b0-eff1723019a0_chunk_1.tmp.1.14118105': No such file or directory
> du: cannot access '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/f58cf0fa5cb9360058ae25e8bc983e84_stream_d8d5ea61-995f-4ff5-88fb-4a9e97932f00_chunk_1.tmp.1.14118109': No such file or directory
> du: cannot access '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/a1d13ee6bbefd1f8156b1bd8db0d1b67_stream_db214bdd-a0c0-4f4a-8bc7-a3817e047e45_chunk_1.tmp.1.14118115': No such file or directory
> du: cannot access '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/8f8a4bd3f6c31161a70f82cb5ab8ee60_stream_d532d657-3d87-4332-baf8-effad9b3db23_chunk_1.tmp.1.14118127': No such file or directory
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:1008)
> at org.apache.hadoop.util.Shell.run(Shell.java:901)
> at org.apache.hadoop.fs.DU$DUShell.startRefresh(DU.java:62)
> at org.apache.hadoop.fs.DU.refresh(DU.java:53)
> at org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:181)
> at java.lang.Thread.run(Thread.java:748)
> {code}
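The failing paths are all `.tmp` chunk files, which suggests a race: the files exist when `du` lists the directory but are renamed or deleted before it can stat them, and the whole invocation then exits non-zero. One possible mitigation (a sketch, not the actual HDDS-1812 patch) is to compute usage in-process with a file-tree walk that simply tolerates files vanishing mid-scan, rather than shelling out to `du`:

```java
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a du replacement that ignores files deleted during the walk.
public class TolerantDiskUsage {

  static long usedBytes(Path root) throws IOException {
    AtomicLong total = new AtomicLong();
    Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
      @Override
      public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
        total.addAndGet(attrs.size());
        return FileVisitResult.CONTINUE;
      }

      @Override
      public FileVisitResult visitFileFailed(Path file, IOException exc) {
        // A .tmp chunk renamed or deleted while we walk is not an error;
        // just skip it instead of failing the whole measurement.
        return FileVisitResult.CONTINUE;
      }
    });
    return total.get();
  }

  public static void main(String[] args) throws IOException {
    Path dir = Files.createTempDirectory("du-demo");
    Files.write(dir.resolve("chunk_1"), new byte[1024]);
    Files.write(dir.resolve("chunk_2.tmp"), new byte[512]);
    System.out.println(usedBytes(dir)); // prints 1536
  }
}
```

Unlike `du -s`, this counts logical file sizes rather than allocated blocks, so numbers may differ slightly; the point is only that a transient file no longer fails the whole refresh.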
[jira] [Created] (HDDS-2684) Refactor common test utilities to hadoop-hdds/common
Attila Doroszlai created HDDS-2684: -- Summary: Refactor common test utilities to hadoop-hdds/common Key: HDDS-2684 URL: https://issues.apache.org/jira/browse/HDDS-2684 Project: Hadoop Distributed Data Store Issue Type: Improvement Components: test Reporter: Attila Doroszlai Assignee: Attila Doroszlai Expose test code from {{hadoop-hdds/common}} to other modules. Move some "common" test utilities. Example: random {{DatanodeDetails}} creation.
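One common way to expose one Maven module's test code to other modules is the `test-jar` mechanism. A hedged sketch of what such a change might look like — the coordinates below are illustrative guesses, not necessarily the artifact IDs the project uses:

```xml
<!-- In hadoop-hdds/common/pom.xml: also package and install the test classes. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>test-jar</goal>
      </goals>
    </execution>
  </executions>
</plugin>

<!-- In a consuming module: depend on those test classes in test scope. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdds-common</artifactId>
  <type>test-jar</type>
  <scope>test</scope>
</dependency>
```

With this in place, a shared helper such as a random `DatanodeDetails` factory can live under `src/test/java` of the common module and still be imported by other modules' tests.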
[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #238: HDDS-2588. Consolidate compose environments
adoroszlai commented on a change in pull request #238: HDDS-2588. Consolidate compose environments URL: https://github.com/apache/hadoop-ozone/pull/238#discussion_r354788952
## File path: hadoop-ozone/dist/src/main/compose/ozone/run.sh ##
@@ -0,0 +1,20 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+declare -ix OZONE_REPLICATION_FACTOR
+: ${OZONE_REPLICATION_FACTOR:=1}
+docker-compose up --scale datanode=${OZONE_REPLICATION_FACTOR} --no-recreate "$@"
Review comment: > if nothing has been changed and the docker-compose file set was the same.
But the readme says the freon compose file should be added only when datanodes are up, so the set is not the same. https://github.com/apache/hadoop-ozone/blob/76ad638b47232761a1732281188162e5c31308d8/hadoop-ozone/dist/src/main/compose/ozoneperf/README.md#L47-L51
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #238: HDDS-2588. Consolidate compose environments
adoroszlai commented on a change in pull request #238: HDDS-2588. Consolidate compose environments URL: https://github.com/apache/hadoop-ozone/pull/238#discussion_r354786194
## File path: hadoop-ozone/dist/src/main/compose/ozone/docker-compose.yaml ##
@@ -34,17 +37,19 @@ services:
     command: ["ozone","datanode"]
   om:
     <<: *common-config
+    env_file:
+      - docker-config
+      - om.conf
     ports:
       - 9874:9874
-    environment:
-      ENSURE_OM_INITIALIZED: /data/metadata/om/current/VERSION
     command: ["ozone","om"]
   scm:
     <<: *common-config
     ports:
       - 9876:9876
-    environment:
-      ENSURE_SCM_INITIALIZED: /data/metadata/scm/current/VERSION
Review comment: ae2bc55261fff4950ac6cb82f7dfab1cd8e47034 moves these one-liners back to `docker-compose.yaml` and the separate files are no longer needed.
[GitHub] [hadoop-ozone] elek commented on a change in pull request #238: HDDS-2588. Consolidate compose environments
elek commented on a change in pull request #238: HDDS-2588. Consolidate compose environments URL: https://github.com/apache/hadoop-ozone/pull/238#discussion_r354785530
## File path: hadoop-ozone/dist/src/main/compose/ozone/run.sh ##
@@ -0,0 +1,20 @@
+#!/usr/bin/env bash
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+declare -ix OZONE_REPLICATION_FACTOR
+: ${OZONE_REPLICATION_FACTOR:=1}
+docker-compose up --scale datanode=${OZONE_REPLICATION_FACTOR} --no-recreate "$@"
Review comment: In my experience, `docker-compose up` worked well even from another terminal if nothing had been changed and the docker-compose file set was the same. Can we start the SCM first with `docker-compose up -d scm` and then everything else with `docker-compose up -d`, with this `--no-recreate`?
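The two-phase startup elek suggests could be sketched as follows. This is a dry-run sketch: the `docker-compose` invocations are echoed rather than executed so the sequence can be inspected without a Docker daemon, and the service names are those from the compose file under review:

```shell
#!/usr/bin/env bash
set -eu

# Dry-run stand-in for docker-compose; replace with the real binary to run it.
compose() { echo "docker-compose $*"; }

# Same default as run.sh: replication factor 1 unless the caller exported one.
: "${OZONE_REPLICATION_FACTOR:=1}"

# Phase 1: bring up SCM alone so it can initialize first.
compose up -d scm

# Phase 2: bring up everything else; --no-recreate leaves the running SCM alone.
compose up -d --scale "datanode=${OZONE_REPLICATION_FACTOR}" --no-recreate
```

The `--no-recreate` flag is what makes the second `up` safe: already-running containers whose configuration has not changed are left untouched instead of being recreated.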
[GitHub] [hadoop-ozone] elek commented on a change in pull request #238: HDDS-2588. Consolidate compose environments
elek commented on a change in pull request #238: HDDS-2588. Consolidate compose environments URL: https://github.com/apache/hadoop-ozone/pull/238#discussion_r354784590
## File path: hadoop-ozone/dist/src/main/compose/ozone/docker-compose.yaml ##
@@ -34,17 +37,19 @@ services:
     command: ["ozone","datanode"]
   om:
     <<: *common-config
+    env_file:
+      - docker-config
+      - om.conf
     ports:
       - 9874:9874
-    environment:
-      ENSURE_OM_INITIALIZED: /data/metadata/om/current/VERSION
     command: ["ozone","om"]
   scm:
     <<: *common-config
     ports:
       - 9876:9876
-    environment:
-      ENSURE_SCM_INITIALIZED: /data/metadata/scm/current/VERSION
Review comment: I would prefer the separated common configs, especially as we have only a few lines of settings; they could all be included in the common configs together. But it's not a blocker for now; we can commit it (thanks for explaining the reason behind the small files...)
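For context, the state that commit ae2bc55 settles on keeps the single `ENSURE_*` variable inline per service instead of splitting it into a per-service env file. An illustrative fragment of what the resolved `om` service then looks like — values are taken from the diff under review, the exact surrounding structure is an assumption:

```yaml
om:
  <<: *common-config
  ports:
    - 9874:9874
  environment:
    ENSURE_OM_INITIALIZED: /data/metadata/om/current/VERSION
  command: ["ozone","om"]
```

The trade-off discussed in the thread: separate `om.conf`-style files keep `docker-compose.yaml` uniform across environments, while inline `environment:` entries avoid scattering one-line settings across many small files.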
[GitHub] [hadoop-ozone] elek commented on issue #300: HDDS-2662. Update gRPC and datanode protobuf version in Ozone.
elek commented on issue #300: HDDS-2662. Update gRPC and datanode protobuf version in Ozone. URL: https://github.com/apache/hadoop-ozone/pull/300#issuecomment-562534289
> Is it true? Is there any specific blocker issue, or is it just a safety upgrade?
Discussed it offline with Mukul. It's mainly for safety, but the newer version can also provide an additional performance benefit. A new PR check has started; will merge it after a green acceptance test.
[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #317: HDDS-2668. Sonar : fix issues reported in BlockManagerImpl
adoroszlai commented on a change in pull request #317: HDDS-2668. Sonar : fix issues reported in BlockManagerImpl URL: https://github.com/apache/hadoop-ozone/pull/317#discussion_r354734193
## File path: hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/BlockManagerImpl.java ##
@@ -162,8 +164,8 @@ public BlockData getBlock(Container container, BlockID blockID)
     }
     byte[] kData = db.getStore().get(Longs.toByteArray(blockID.getLocalID()));
     if (kData == null) {
-      throw new StorageContainerException("Unable to find the block." +
-          blockID, NO_SUCH_BLOCK);
+      throw new StorageContainerException(NO_SUCH_BLOCK_ERR_MSG + blockID,
+          NO_SUCH_BLOCK);
     }
Review comment: Would it make sense to go further and extract this to a method that looks up the block by `blockID`?
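The extraction adoroszlai suggests would centralize the "get bytes or throw NO_SUCH_BLOCK" pattern in one helper. A self-contained sketch of that shape — the map-backed store and plain `IOException` are simplified stand-ins for the real RocksDB-backed store and `StorageContainerException`, and the helper name is hypothetical:

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Sketch of extracting the repeated "look up block or throw" logic into
// a single method, so every caller shares one consistent error message.
public class BlockLookup {

  static final String NO_SUCH_BLOCK_ERR_MSG = "Unable to find the block.";

  private final Map<Long, byte[]> store = new HashMap<>();

  void put(long localId, byte[] data) {
    store.put(localId, data);
  }

  // The extracted helper: one place that resolves a block by its local ID
  // and throws the shared NO_SUCH_BLOCK message when it is missing.
  byte[] getBlockDataOrThrow(long localId) throws IOException {
    byte[] data = store.get(localId);
    if (data == null) {
      throw new IOException(NO_SUCH_BLOCK_ERR_MSG + localId);
    }
    return data;
  }

  public static void main(String[] args) throws IOException {
    BlockLookup blocks = new BlockLookup();
    blocks.put(42L, new byte[]{1, 2, 3});
    System.out.println(blocks.getBlockDataOrThrow(42L).length); // prints 3
  }
}
```

Callers such as `getBlock` then shrink to a single call plus deserialization, and Sonar's duplicated-string warning disappears along with the duplicated null checks.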