sjvanrossum commented on code in PR #24093:
URL: https://github.com/apache/beam/pull/24093#discussion_r1020638802


##########
sdks/java/core/src/main/java/org/apache/beam/sdk/coders/ZstdCoder.java:
##########
@@ -0,0 +1,110 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.beam.sdk.coders;
+
+import com.github.luben.zstd.Zstd;
+import com.github.luben.zstd.ZstdCompressCtx;
+import com.github.luben.zstd.ZstdDecompressCtx;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.util.List;
+import javax.annotation.Nullable;
+import org.apache.beam.sdk.util.CoderUtils;
+import org.apache.beam.vendor.guava.v26_0_jre.com.google.common.collect.ImmutableList;
+
+/**
+ * Wraps an existing coder with Zstandard compression. It makes sense to use this coder when it's
+ * likely that the encoded value is quite large and compressible or when a dictionary is available
+ * to improve compression performance.
+ */
+public class ZstdCoder<T> extends StructuredCoder<T> {
+  private final Coder<T> innerCoder;
+  private final @Nullable byte[] dict;
+  private final int level;
+
+  /** Wraps the given coder into a {@link ZstdCoder}. */
+  public static <T> ZstdCoder<T> of(Coder<T> innerCoder, byte[] dict, int level) {
+    return new ZstdCoder<>(innerCoder, dict, level);
+  }
+
+  /** Wraps the given coder into a {@link ZstdCoder}. */
+  public static <T> ZstdCoder<T> of(Coder<T> innerCoder, byte[] dict) {
+    return new ZstdCoder<>(innerCoder, dict, Zstd.defaultCompressionLevel());
+  }
+
+  /** Wraps the given coder into a {@link ZstdCoder}. */
+  public static <T> ZstdCoder<T> of(Coder<T> innerCoder, int level) {
+    return new ZstdCoder<>(innerCoder, null, level);
+  }
+
+  /** Wraps the given coder into a {@link ZstdCoder}. */
+  public static <T> ZstdCoder<T> of(Coder<T> innerCoder) {
+    return new ZstdCoder<>(innerCoder, null, Zstd.defaultCompressionLevel());
+  }
+
+  private ZstdCoder(Coder<T> innerCoder, @Nullable byte[] dict, int level) {
+    this.innerCoder = innerCoder;
+    this.dict = dict;
+    this.level = level;
+  }
+
+  @Override
+  public void encode(T value, OutputStream os) throws IOException {
+    ZstdCompressCtx ctx = new ZstdCompressCtx();
+    try {
+      ctx.setLevel(level);
+      ctx.setMagicless(true); // No magic since we know this will be compressed data on decode.
+      ctx.setDictID(false); // No dict ID since we initialize the coder with the expected dict.
+      ctx.loadDict(dict);
+
+      byte[] encoded = CoderUtils.encodeToByteArray(innerCoder, value);

Review Comment:
   I think it might be worth proposing a change to zstd-jni and then revisiting the use of streams at some point. I took a closer look and found that ZstdInputStream's current read loop always consumes the frame format's block header size (3 bytes) plus its maximum block size (128 KiB) from the inner stream whenever it needs input for decompression, so it can read well past the end of the frame it is decoding.
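   To make the over-read concrete, here's a minimal stdlib sketch using deflate as a stand-in for zstd (so it doesn't depend on zstd-jni); the class and method names are mine. An InflaterInputStream buffers past its frame boundary, so the trailing bytes that belong to whatever follows the frame are gone from the inner stream and are only recoverable by asking the Inflater directly, which is exactly the kind of accounting a nested coder would need:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.Inflater;
import java.util.zip.InflaterInputStream;

public class OverReadDemo {
  /**
   * Decompresses a frame with trailing bytes appended, then reports
   * {inner.available(), inflater.getRemaining()} after the read.
   */
  public static int[] leftoverAfterDecode(byte[] payload, int trailingLen) throws IOException {
    // Compress the payload with zlib/deflate (a stdlib stand-in for a zstd frame).
    ByteArrayOutputStream frame = new ByteArrayOutputStream();
    try (DeflaterOutputStream out = new DeflaterOutputStream(frame)) {
      out.write(payload);
    }

    // Append trailing bytes that a subsequent coder would expect to read next.
    ByteArrayOutputStream combined = new ByteArrayOutputStream();
    combined.write(frame.toByteArray());
    combined.write(new byte[trailingLen]);

    ByteArrayInputStream inner = new ByteArrayInputStream(combined.toByteArray());
    Inflater inflater = new Inflater();
    try (InflaterInputStream in = new InflaterInputStream(inner, inflater, 512)) {
      byte[] decoded = in.readAllBytes();
      if (!Arrays.equals(decoded, payload)) {
        throw new AssertionError("round trip failed");
      }
      // The stream buffered past the frame boundary: the inner stream is drained,
      // but the Inflater still knows how many buffered bytes follow the frame.
      return new int[] {inner.available(), inflater.getRemaining()};
    }
  }

  public static void main(String[] args) throws IOException {
    int[] r = leftoverAfterDecode("hello, coder".getBytes(StandardCharsets.UTF_8), 64);
    // prints "0 64": all 64 trailing bytes sit in the Inflater's buffer, not the stream.
    System.out.println(r[0] + " " + r[1]);
  }
}
```

   A coder reading through such a stream would see the inner stream already past the next value's bytes; recovering them requires the decompressor to expose its unconsumed input, which is the change that would need proposing in zstd-jni.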
   
   For what it's worth, the snappy-java SnappyInputStream reads right up to its frame boundary, as do the commons-compress FramedSnappyCompressorInputStream and BlockLZ4CompressorInputStream (and possibly others). I'd be happy to open a PR to add a stream-based framed Snappy coder for use cases where the inner coder's output would be inconveniently large or would exceed the maximum capacity of a byte array.
   
   The existing SnappyCoder also uses ByteArrayCoder. That certainly doesn't make it the best approach, but at least it's consistent, and ZstdCoder can then be applied with the same gotchas as SnappyCoder. It does need more documentation to make that clear, so I'll revise accordingly.
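   For illustration, a minimal sketch of the magicless round trip the coder relies on, assuming zstd-jni on the classpath (the class and method names here are mine, and the dictionary path is omitted): the decode side must mirror the encode-side flags, since a frame written without the magic number is rejected by a default decompress context.

```java
import com.github.luben.zstd.ZstdCompressCtx;
import com.github.luben.zstd.ZstdDecompressCtx;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class MagiclessRoundTrip {
  /** Compresses input into a magicless zstd frame at the given level. */
  public static byte[] compress(byte[] input, int level) {
    ZstdCompressCtx ctx = new ZstdCompressCtx();
    try {
      ctx.setLevel(level);
      ctx.setMagicless(true); // strip the 4-byte magic, as the coder under review does
      return ctx.compress(input);
    } finally {
      ctx.close();
    }
  }

  /** Decompresses a magicless frame; the flags must match the encode side. */
  public static byte[] decompress(byte[] frame, int originalSize) {
    ZstdDecompressCtx ctx = new ZstdDecompressCtx();
    try {
      ctx.setMagicless(true); // without this, a magicless frame fails to decode
      return ctx.decompress(frame, originalSize);
    } finally {
      ctx.close();
    }
  }

  public static void main(String[] args) {
    byte[] data = "round trip".getBytes(StandardCharsets.UTF_8);
    byte[] back = decompress(compress(data, 3), data.length);
    System.out.println(Arrays.equals(back, data)); // true
  }
}
```

   Note that the byte-array API needs the original size up front on decode, which is one of the gotchas the coder's documentation should spell out alongside the SnappyCoder-style caveats.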



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
