lukecwik commented on code in PR #24093:
URL: https://github.com/apache/beam/pull/24093#discussion_r1020434601


##########
sdks/java/core/src/main/java/org/apache/beam/sdk/coders/ZstdCoder.java:
##########
@@ -0,0 +1,110 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.beam.sdk.coders;
+
+import com.github.luben.zstd.Zstd;
+import com.github.luben.zstd.ZstdCompressCtx;
+import com.github.luben.zstd.ZstdDecompressCtx;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.util.List;
+import javax.annotation.Nullable;
+import org.apache.beam.sdk.util.CoderUtils;
+import org.apache.beam.vendor.guava.v26_0_jre.com.google.common.collect.ImmutableList;
+
+/**
+ * Wraps an existing coder with Zstandard compression. It makes sense to use this coder when it's
+ * likely that the encoded value is quite large and compressible or when a dictionary is available
+ * to improve compression performance.
+ */
+public class ZstdCoder<T> extends StructuredCoder<T> {
+  private final Coder<T> innerCoder;
+  private final @Nullable byte[] dict;
+  private final int level;
+
+  /** Wraps the given coder into a {@link ZstdCoder}. */
+  public static <T> ZstdCoder<T> of(Coder<T> innerCoder, byte[] dict, int level) {
+    return new ZstdCoder<>(innerCoder, dict, level);
+  }
+
+  /** Wraps the given coder into a {@link ZstdCoder}. */
+  public static <T> ZstdCoder<T> of(Coder<T> innerCoder, byte[] dict) {
+    return new ZstdCoder<>(innerCoder, dict, Zstd.defaultCompressionLevel());
+  }
+
+  /** Wraps the given coder into a {@link ZstdCoder}. */
+  public static <T> ZstdCoder<T> of(Coder<T> innerCoder, int level) {
+    return new ZstdCoder<>(innerCoder, null, level);
+  }
+
+  /** Wraps the given coder into a {@link ZstdCoder}. */
+  public static <T> ZstdCoder<T> of(Coder<T> innerCoder) {
+    return new ZstdCoder<>(innerCoder, null, Zstd.defaultCompressionLevel());
+  }
+
+  private ZstdCoder(Coder<T> innerCoder, @Nullable byte[] dict, int level) {
+    this.innerCoder = innerCoder;
+    this.dict = dict;
+    this.level = level;
+  }
+
+  @Override
+  public void encode(T value, OutputStream os) throws IOException {
+    ZstdCompressCtx ctx = new ZstdCompressCtx();
+    try {
+      ctx.setLevel(level);
+      ctx.setMagicless(true); // No magic since we know this will be compressed data on decode.
+      ctx.setDictID(false); // No dict ID since we initialize the coder with the expected dict.
+      ctx.loadDict(dict);
+
+      byte[] encoded = CoderUtils.encodeToByteArray(innerCoder, value);

Review Comment:
   You would need to choose one of several methods to determine the length of the stream or to terminate it. Some alternatives that work without needing to know the full length up front:
   1) Wrap with a block-based encoding scheme where each block is [varint length][bytes]; if the length is not the maximum the varint can encode, it is the last block (see the sketch below).
   2) Use an escaping OutputStream that relies on an escape sequence so the end of the stream can be detected. See https://github.com/apache/beam/blob/master/runners/google-cloud-dataflow-java/worker/src/main/java/org/apache/beam/runners/dataflow/worker/OrderedCode.java for an example.
   
   Using ByteArrayCoder works because it effectively writes a length prefix before the bytes, which is possible since the length is known up front. We can stick with what you have, but be aware that the entire encoded representation must fit in memory regardless of its size, and some types have encodings that do not fit in memory.


