[ https://issues.apache.org/jira/browse/HDDS-2151?focusedWorklogId=315106&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-315106 ]
ASF GitHub Bot logged work on HDDS-2151:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 19/Sep/19 14:58
            Start Date: 19/Sep/19 14:58
    Worklog Time Spent: 10m
      Work Description: dineshchitlangia commented on issue #1477: HDDS-2151. Ozone client logs the entire request payload at DEBUG level
URL: https://github.com/apache/hadoop/pull/1477#issuecomment-533171126

   @adoroszlai thanks for filing the patch. Overall +1 (non-binding), pending Jenkins.
   However, I do see checkstyle violations reported, which I believe will be fixed by HDDS-2154 logged by @elek

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

Issue Time Tracking
-------------------

    Worklog Id:     (was: 315106)
    Time Spent: 40m  (was: 0.5h)

> Ozone client prints the entire request payload in DEBUG level.
> --------------------------------------------------------------
>
>                 Key: HDDS-2151
>                 URL: https://issues.apache.org/jira/browse/HDDS-2151
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>            Reporter: Aravindan Vijayan
>            Assignee: YiSheng Lien
>              Priority: Major
>                Labels: pull-request-available
>            Time Spent: 40m
>    Remaining Estimate: 0h
>
> In XceiverClientRatis.java:221, we have the following snippet where we have a
> DEBUG line that prints out the entire Container Request proto.
> {code}
> ContainerCommandRequestProto finalPayload =
>     ContainerCommandRequestProto.newBuilder(request)
>         .setTraceID(TracingUtil.exportCurrentSpan())
>         .build();
> boolean isReadOnlyRequest = HddsUtils.isReadOnly(finalPayload);
> ByteString byteString = finalPayload.toByteString();
> LOG.debug("sendCommandAsync {} {}", isReadOnlyRequest, finalPayload);
> return isReadOnlyRequest ?
>     getClient().sendReadOnlyAsync(() -> byteString) :
>     getClient().sendAsync(() -> byteString);
> {code}
> This causes OOM while writing large (~300MB) keys.
> {code}
> SLF4J: Failed toString() invocation on an object of type [org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$ContainerCommandRequestProto]
> Reported exception:
> java.lang.OutOfMemoryError: Java heap space
>   at java.util.Arrays.copyOf(Arrays.java:3332)
>   at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
>   at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:649)
>   at java.lang.StringBuilder.append(StringBuilder.java:202)
>   at org.apache.ratis.thirdparty.com.google.protobuf.TextFormatEscaper.escapeBytes(TextFormatEscaper.java:75)
>   at org.apache.ratis.thirdparty.com.google.protobuf.TextFormatEscaper.escapeBytes(TextFormatEscaper.java:94)
>   at org.apache.ratis.thirdparty.com.google.protobuf.TextFormat.escapeBytes(TextFormat.java:1836)
>   at org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.printFieldValue(TextFormat.java:436)
>   at org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.printSingleField(TextFormat.java:376)
>   at org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.printField(TextFormat.java:338)
>   at org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.print(TextFormat.java:325)
>   at org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.printFieldValue(TextFormat.java:449)
>   at org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.printSingleField(TextFormat.java:376)
>   at org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.printField(TextFormat.java:338)
>   at org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.print(TextFormat.java:325)
>   at org.apache.ratis.thirdparty.com.google.protobuf.TextFormat$Printer.access$000(TextFormat.java:307)
>   at org.apache.ratis.thirdparty.com.google.protobuf.TextFormat.print(TextFormat.java:68)
>   at org.apache.ratis.thirdparty.com.google.protobuf.TextFormat.printToString(TextFormat.java:148)
>   at org.apache.ratis.thirdparty.com.google.protobuf.AbstractMessage.toString(AbstractMessage.java:117)
>   at org.slf4j.helpers.MessageFormatter.safeObjectAppend(MessageFormatter.java:299)
>   at org.slf4j.helpers.MessageFormatter.deeplyAppendParameter(MessageFormatter.java:271)
>   at org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:233)
>   at org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:173)
>   at org.slf4j.helpers.MessageFormatter.format(MessageFormatter.java:151)
>   at org.slf4j.impl.Log4jLoggerAdapter.debug(Log4jLoggerAdapter.java:252)
>   at org.apache.hadoop.hdds.scm.XceiverClientRatis.sendRequestAsync(XceiverClientRatis.java:221)
>   at org.apache.hadoop.hdds.scm.XceiverClientRatis.sendCommandAsync(XceiverClientRatis.java:302)
>   at org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.writeChunkAsync(ContainerProtocolCalls.java:310)
>   at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.writeChunkToContainer(BlockOutputStream.java:601)
>   at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.writeChunk(BlockOutputStream.java:459)
>   at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.write(BlockOutputStream.java:240)
>   at org.apache.hadoop.ozone.client.io.BlockOutputStreamEntry.write(BlockOutputStreamEntry.java:129)
> SLF4J: Failed toString() invocation on an object of type [org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos$ContainerCommandRequestProto]
> Reported exception:
> java.lang.OutOfMemoryError: Java heap space
> {code}

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
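[Editor's note] The stack trace above shows the mechanism of the bug: SLF4J calls toString() on every log argument when DEBUG is enabled, and protobuf's TextFormat renders the entire payload ByteString into one String, which for a ~300MB chunk exhausts the heap. A minimal sketch of the general mitigation pattern follows; this is an assumption for illustration, not the actual patch from PR #1477, and the helper name and parameters (summarize, traceId, payloadSize) are hypothetical stand-ins for values that would come from the real request.

```java
// Sketch (not the actual HDDS-2151 patch): never pass the message object
// itself as a log argument. Log a constant-size summary instead, so the
// cost of the debug line is independent of the payload size.
public final class RequestLogSummary {

  // Builds a bounded-size debug line. The parameters stand in for values
  // the real code could supply cheaply, e.g. HddsUtils.isReadOnly(...),
  // finalPayload.getTraceID() and finalPayload.getSerializedSize().
  static String summarize(boolean isReadOnly, String traceId, long payloadSize) {
    return String.format("sendCommandAsync readOnly=%s traceID=%s payloadSize=%dB",
        isReadOnly, traceId, payloadSize);
  }

  // Illustrative SLF4J call site (shown as a comment to keep this snippet
  // dependency-free); the isDebugEnabled() guard also skips the summary
  // work entirely when DEBUG is off:
  //   if (LOG.isDebugEnabled()) {
  //     LOG.debug(summarize(isReadOnlyRequest,
  //         finalPayload.getTraceID(), finalPayload.getSerializedSize()));
  //   }
}
```

The key design point is that the string built here is a few dozen bytes regardless of how large the request payload is, whereas `toString()` on the proto grows linearly with the chunk being written.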