pkuwm commented on a change in pull request #809: Add system property options to config auto compression
URL: https://github.com/apache/helix/pull/809#discussion_r385472227
##########
File path: zookeeper-api/src/main/java/org/apache/helix/zookeeper/datamodel/serializer/ZNRecordSerializer.java
##########
@@ -81,24 +82,39 @@ private static int getListFieldBound(ZNRecord record) {
     serializationConfig.set(SerializationConfig.Feature.AUTO_DETECT_FIELDS, true);
     serializationConfig.set(SerializationConfig.Feature.CAN_OVERRIDE_ACCESS_MODIFIERS, true);
     ByteArrayOutputStream baos = new ByteArrayOutputStream();
-    byte[] serializedBytes;
+    byte[] serializedBytes = new byte[0];
     try {
       mapper.writeValue(baos, data);
       serializedBytes = baos.toByteArray();
       // apply compression if needed
-      if (record.getBooleanField("enableCompression", false) || serializedBytes.length > ZNRecord.SIZE_LIMIT) {
+      if (ZNRecordUtil.shouldCompress(record, serializedBytes.length)) {
         serializedBytes = GZipCompressionUtil.compress(serializedBytes);
       }
     } catch (Exception e) {
-      logger.error("Exception during data serialization. Will not write to zk. Data (first 1k): "
-          + new String(baos.toByteArray()).substring(0, 1024), e);
+      if (serializedBytes.length == 0 || GZipCompressionUtil.isCompressed(serializedBytes)) {
+        serializedBytes = baos.toByteArray();
+      }
+      int firstBytesLength = Math.min(serializedBytes.length, 1024);
+      // TODO: remove logging first N bytes of data to reduce log size.
+      LOG.error("Exception during data serialization. Will not write to zk."
+              + " The first {} bytes of data: {}", firstBytesLength,
+          new String(serializedBytes, 0, firstBytesLength), e);
       throw new ZkClientException(e);
     }
-    if (serializedBytes.length > ZNRecord.SIZE_LIMIT) {
-      logger.error("Data size larger than 1M, ZNRecord.id: " + record.getId()
-          + ". Will not write to zk. Data (first 1k): "
-          + new String(serializedBytes).substring(0, 1024));
-      throw new ZkClientException("Data size larger than 1M, ZNRecord.id: " + record.getId());
+
+    int compressThreshold = ZNRecordUtil.getCompressThreshold();
+    if (serializedBytes.length > compressThreshold) {
+      if (GZipCompressionUtil.isCompressed(serializedBytes)) {
+        serializedBytes = baos.toByteArray();
+      }
+      int firstBytesLength = Math.min(serializedBytes.length, 1024);
+      // TODO: remove logging first N bytes of data to reduce log size.
+      LOG.error("Data size: {} is greater than {} bytes, ZNRecord.id: {}."
+              + " Data will not be written to Zookeeper. The first {} bytes of data: {}",
+          serializedBytes.length, compressThreshold, record.getId(), firstBytesLength,
+          new String(serializedBytes, 0, firstBytesLength));
+      throw new ZkClientException("Data size: " + serializedBytes.length + " is greater than "
+          + compressThreshold + " bytes, ZNRecord.id: " + record.getId());
Review comment:
I thought about that. But with the current logic, the data must have been compressed if we get this exception, because we currently use the same threshold for compression and for the ZNode size limit. I agree that isCompressed could help if we were using a different threshold for the ZNode size limit.
Update: with the updated logic that provides a config to limit the serialized ZNRecord size, we now log whether or not the data was compressed.
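
For readers following along, here is a minimal, self-contained sketch of what threshold helpers in the spirit of the ZNRecordUtil and GZipCompressionUtil calls above could look like. The system property key, the default value, and the boolean parameter standing in for the record's "enableCompression" field lookup are illustrative assumptions, not the PR's actual implementation:

    public final class CompressConfigSketch {
      // Hypothetical property key; the actual PR may use a different name.
      private static final String COMPRESS_THRESHOLD_PROPERTY =
          "zk.serializer.znrecord.compress.threshold.bytes";
      // 1 MB, in line with the "larger than 1M" check in the original code.
      private static final int DEFAULT_THRESHOLD = 1024 * 1024;

      private CompressConfigSketch() {
      }

      // Shape of ZNRecordUtil.getCompressThreshold(): read the threshold from a
      // system property, falling back to the 1 MB default.
      public static int getCompressThreshold() {
        return Integer.getInteger(COMPRESS_THRESHOLD_PROPERTY, DEFAULT_THRESHOLD);
      }

      // Shape of ZNRecordUtil.shouldCompress(record, length): compress when the
      // record asks for it explicitly or the serialized size exceeds the threshold.
      public static boolean shouldCompress(boolean enableCompressionField, int serializedLength) {
        return enableCompressionField || serializedLength > getCompressThreshold();
      }

      // Shape of GZipCompressionUtil.isCompressed(bytes): a GZIP stream starts
      // with the magic bytes 0x1f 0x8b.
      public static boolean isCompressed(byte[] bytes) {
        return bytes.length >= 2 && (bytes[0] & 0xff) == 0x1f && (bytes[1] & 0xff) == 0x8b;
      }
    }

With a single threshold driving both decisions, any record that trips the size check has already been through the compress branch, which is exactly the point made above; a separate size-limit property is what would make the isCompressed check meaningful.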