bshashikant commented on a change in pull request #2452:
URL: https://github.com/apache/ozone/pull/2452#discussion_r674861974
##########
File path:
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java
##########
@@ -227,6 +231,25 @@ private RaftProperties newRaftProperties() {
// set the configs enable and set the stateMachineData sync timeout
RaftServerConfigKeys.Log.StateMachineData.setSync(properties, true);
+
+ dataStreamPort = conf.getInt(
+ OzoneConfigKeys.DFS_CONTAINER_RATIS_DATASTREAM_IPC_PORT,
+ OzoneConfigKeys.DFS_CONTAINER_RATIS_DATASTREAM_IPC_PORT_DEFAULT);
+ // set the datastream config
+ NettyConfigKeys.DataStream.setPort(properties, dataStreamPort);
+ RaftConfigKeys.DataStream.setType(properties,
+ SupportedDataStreamType.NETTY);
Review comment:
Can we do these stream-specific settings in a different function?
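A rough sketch of what that extraction could look like. `RaftProperties` is stubbed here with a plain map so the sketch is self-contained; the real helper would take the actual Ratis class and call `NettyConfigKeys.DataStream.setPort` / `RaftConfigKeys.DataStream.setType`, and the key strings below are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for org.apache.ratis.conf.RaftProperties, stubbed only so this
// sketch compiles on its own; the real code would use the Ratis class.
class RaftProperties {
  private final Map<String, String> props = new HashMap<>();
  void set(String key, String value) { props.put(key, value); }
  String get(String key) { return props.get(key); }
}

public class RatisStreamConfigSketch {
  // Hypothetical helper collecting the datastream-specific settings in one
  // place, instead of setting them inline in newRaftProperties().
  static void setUpRatisStream(RaftProperties properties, int dataStreamPort) {
    // In XceiverServerRatis these two lines would be:
    //   NettyConfigKeys.DataStream.setPort(properties, dataStreamPort);
    //   RaftConfigKeys.DataStream.setType(properties, SupportedDataStreamType.NETTY);
    properties.set("raft.netty.dataStream.port", String.valueOf(dataStreamPort));
    properties.set("raft.server.dataStream.type", "NETTY");
  }

  public static void main(String[] args) {
    RaftProperties properties = new RaftProperties();
    setUpRatisStream(properties, 9890);
    System.out.println(properties.get("raft.netty.dataStream.port")); // prints 9890
  }
}
```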
##########
File path:
hadoop-hdds/interface-client/src/main/proto/DatanodeClientProtocol.proto
##########
@@ -391,7 +393,7 @@ enum ChecksumType {
message WriteChunkRequestProto {
required DatanodeBlockID blockID = 1;
- required ChunkInfo chunkData = 2;
+ optional ChunkInfo chunkData = 2;
Review comment:
Isn't this backward incompatible?
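For context on the concern: in proto2, relaxing `required` to `optional` is wire-compatible for updated readers, but an old binary still compiled with the field as `required` will reject any message where the field is absent, and every updated consumer now needs a presence check. A sketch of that reader-side guard, with hand-written stand-ins for the generated protobuf classes so it is self-contained:

```java
// Stand-ins for the generated protobuf classes, only for illustration.
class ChunkInfo {}

class WriteChunkRequestProto {
  private final ChunkInfo chunkData; // may be absent now that it is optional
  WriteChunkRequestProto(ChunkInfo chunkData) { this.chunkData = chunkData; }
  boolean hasChunkData() { return chunkData != null; }
  ChunkInfo getChunkData() { return chunkData; }
}

public class WriteChunkCompatSketch {
  // With chunkData optional, consumers must guard on presence; an old
  // binary compiled against the 'required' schema would instead fail to
  // parse any message where the field is missing.
  static String describe(WriteChunkRequestProto request) {
    return request.hasChunkData() ? "regular write" : "stream init, no chunk data";
  }
}
```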
##########
File path: hadoop-hdds/common/src/main/resources/ozone-default.xml
##########
@@ -53,6 +53,24 @@
<tag>OZONE, CONTAINER, MANAGEMENT</tag>
<description>The ipc port number of container.</description>
</property>
+ <property>
+ <name>dfs.container.ratis.datastream.ipc</name>
+ <value>9890</value>
+ <tag>OZONE, CONTAINER, RATIS, DATASTREAM</tag>
+ <description>The datastream ipc port number of container.</description>
+ </property>
+ <property>
+ <name>dfs.container.ratis.datastream.request.threads</name>
+ <value>20</value>
+ <tag>OZONE, CONTAINER, RATIS, DATASTREAM</tag>
Review comment:
Can we move these settings to DatanodeRatisServerConfig and the client
config keys separately?
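A hypothetical shape of that move, with the server-side datastream keys in a typed config class. The `@ConfigGroup`/`@Config` annotations are stubbed below so the sketch compiles on its own; the real ones live in `org.apache.hadoop.hdds.conf`, and the prefix and key names here are assumptions for illustration:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Stand-ins for Ozone's @ConfigGroup/@Config annotations, stubbed so the
// sketch is self-contained.
@Retention(RetentionPolicy.RUNTIME) @interface ConfigGroup { String prefix(); }
@Retention(RetentionPolicy.RUNTIME) @interface Config {
  String key(); String defaultValue(); String description();
}

// Sketch: datastream server settings as a typed config class, as the
// review suggests, instead of loose keys in ozone-default.xml.
@ConfigGroup(prefix = "hdds.ratis")
public class DatanodeRatisServerConfigSketch {
  @Config(key = "datastream.port", defaultValue = "9890",
      description = "The datastream ipc port number of container.")
  private int dataStreamPort = 9890;

  @Config(key = "datastream.request.threads", defaultValue = "20",
      description = "Number of threads serving datastream requests.")
  private int dataStreamRequestThreads = 20;

  public int getDataStreamPort() { return dataStreamPort; }
  public int getDataStreamRequestThreads() { return dataStreamRequestThreads; }
}
```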
##########
File path:
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java
##########
@@ -230,6 +233,40 @@ public BlockManager getBlockManager() {
return this.blockManager;
}
+ ContainerCommandResponseProto handleStreamInit(
+ ContainerCommandRequestProto request, KeyValueContainer kvContainer,
+ DispatcherContext dispatcherContext) {
+ if (!request.hasWriteChunk()) {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Malformed Write Chunk request. trace ID: {}",
+ request.getTraceID());
+ }
+ return malformedRequest(request);
+ }
+
+ String path = null;
+ try {
+ checkContainerOpen(kvContainer);
+
+ WriteChunkRequestProto writeChunk = request.getWriteChunk();
+ BlockID blockID = BlockID.getFromProtobuf(writeChunk.getBlockID());
+
+ if (dispatcherContext == null) {
+ dispatcherContext = new DispatcherContext.Builder().build();
+ }
+
+ path = chunkManager
+ .streamInit(kvContainer, blockID, dispatcherContext);
Review comment:
I feel we should set the dispatcher context to null here. A null dispatcher
context during a write will signify that it is a stream operation.
Also, stream classes should not be coupled with ChunkManager. Let's maintain a
clear separation; maybe we can use a StreamHandler. Common functionality can
be moved to ChunkUtils.
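A minimal sketch of the null-as-signal convention being suggested, with `DispatcherContext` stubbed so it is self-contained:

```java
// Stand-in for DispatcherContext, stubbed for a self-contained sketch.
class DispatcherContext {}

public class StreamInitSketch {
  // Per the review: for stream writes the dispatcher context is deliberately
  // left null (no default instance is built), which is how downstream code
  // tells a stream operation apart from a regular WriteChunk.
  static boolean isStreamOperation(DispatcherContext dispatcherContext) {
    return dispatcherContext == null;
  }
}
```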
##########
File path:
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java
##########
@@ -283,6 +284,13 @@ public ContainerCommandResponseProto sendCommand(
return responseProtoHashMap;
}
+ // TODO: We need separate XceiverClientRatis and XceiverClientGrpc instances
+ // and remove XceiverClientSpi
+ @Override
+ public DataStreamApi getDataStreamApi() {
+ return null;
+ }
Review comment:
It's better to throw an exception here.
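A sketch of that fail-fast alternative, with `DataStreamApi` stubbed so it is self-contained; the message text is illustrative:

```java
public class XceiverClientGrpcSketch {
  // Stand-in for the Ratis DataStreamApi interface, for illustration only.
  interface DataStreamApi {}

  // Failing fast, as the review suggests, instead of returning null: the
  // gRPC client simply does not support the Ratis datastream API, and a
  // null return would surface later as an obscure NullPointerException.
  public DataStreamApi getDataStreamApi() {
    throw new UnsupportedOperationException(
        "getDataStreamApi is not supported for " + getClass().getSimpleName());
  }
}
```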
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]