apurtell commented on a change in pull request #3691:
URL: https://github.com/apache/hbase/pull/3691#discussion_r718668654



##########
File path: 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/compress/Compression.java
##########
@@ -470,4 +551,41 @@ public static void decompress(ByteBuff dest, InputStream bufferedBoundedStream,
       }
     }
   }
+
+  public static CompressionCodec buildCodec(final Configuration conf, final Algorithm algo) {
+    try {
+      String codecClassName = conf.get(algo.confKey, algo.confDefault);
+      if (codecClassName == null) {
+        throw new RuntimeException("No codec configured for " + algo.confKey);
+      }
+      Class<?> codecClass = getClassLoaderForCodec().loadClass(codecClassName);
+      CompressionCodec codec = (CompressionCodec) ReflectionUtils.newInstance(codecClass,
+          new Configuration(conf));
+      LOG.info("Loaded codec {} for compression algorithm {}",
+        codec.getClass().getCanonicalName(), algo.name());
+      return codec;
+    } catch (ClassNotFoundException e) {
+      throw new RuntimeException(e);
+    }
+  }
+
+  public static void main(String[] args) throws Exception {
+    Configuration conf = HBaseConfiguration.create();
+    java.util.Map<String, CompressionCodec> implMap = new java.util.HashMap<>();
+    for (Algorithm algo: Algorithm.class.getEnumConstants()) {
+      try {
+        implMap.put(algo.name(), algo.getCodec(conf));
+      } catch (Exception e) { }
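
For context, the reflective lookup in `buildCodec` above follows a common pattern: read a class name from configuration (falling back to a default), load the class, and instantiate it. A minimal stand-alone sketch of that pattern, using only the JDK — here `Codec`, `GzipCodec`, and the config key are hypothetical stand-ins for the Hadoop types, not the actual HBase API:

```java
import java.util.Properties;

public class CodecLoaderSketch {
  // Hypothetical stand-in for Hadoop's CompressionCodec interface.
  public interface Codec {
    String name();
  }

  public static class GzipCodec implements Codec {
    public String name() { return "gzip"; }
  }

  // Read the codec class name from configuration (falling back to a
  // default), load the class, and instantiate it reflectively.
  public static Codec buildCodec(Properties conf, String confKey, String confDefault) {
    String codecClassName = conf.getProperty(confKey, confDefault);
    if (codecClassName == null) {
      throw new RuntimeException("No codec configured for " + confKey);
    }
    try {
      Class<?> codecClass = Class.forName(codecClassName);
      return (Codec) codecClass.getDeclaredConstructor().newInstance();
    } catch (ReflectiveOperationException e) {
      throw new RuntimeException(e);
    }
  }

  public static void main(String[] args) {
    Properties conf = new Properties(); // key unset, so the default wins
    Codec codec = buildCodec(conf, "example.codec.class", GzipCodec.class.getName());
    System.out.println(codec.name()); // prints "gzip"
  }
}
```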

Review comment:
       No, it is not. This is how we ignore codec load failures. Again, this is just test code, a debug dump.
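
The pattern being defended — a best-effort debug dump where any codec that fails to load is silently skipped rather than aborting the dump — can be sketched as follows. The `Algorithm` enum and its failure mode here are hypothetical stand-ins, not the real HBase types:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CodecDumpSketch {
  // Hypothetical algorithms; BROKEN simulates a codec whose class or
  // native library is unavailable at runtime.
  enum Algorithm {
    GZ, BROKEN;

    Object getCodec() {
      if (this == BROKEN) {
        throw new RuntimeException("codec failed to load");
      }
      return new Object();
    }
  }

  // Build the name -> codec map, omitting algorithms whose codec cannot
  // be loaded, so one failure does not abort the whole dump.
  static Map<String, Object> dump() {
    Map<String, Object> implMap = new LinkedHashMap<>();
    for (Algorithm algo : Algorithm.values()) {
      try {
        implMap.put(algo.name(), algo.getCodec());
      } catch (Exception e) {
        // Ignored by design: this is diagnostic output, not production code.
      }
    }
    return implMap;
  }

  public static void main(String[] args) {
    System.out.println(dump().keySet()); // prints [GZ]
  }
}
```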



