[GitHub] carbondata pull request #2628: [CARBONDATA-2851] Support zstd as column comp...

2018-08-14 Thread xuchuanyin
Github user xuchuanyin commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2628#discussion_r209849039
  
---

[GitHub] carbondata pull request #2628: [CARBONDATA-2851] Support zstd as column comp...

2018-08-13 Thread kevinjmh
Github user kevinjmh commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2628#discussion_r209545381
  
---

[GitHub] carbondata pull request #2628: [CARBONDATA-2851] Support zstd as column comp...

2018-08-13 Thread kevinjmh
Github user kevinjmh commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2628#discussion_r209543118
  
--- Diff: core/src/main/java/org/apache/carbondata/core/datastore/compression/ZstdCompressor.java ---
@@ -0,0 +1,200 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.core.datastore.compression;
+
+import java.io.IOException;
+import java.io.Serializable;
+import java.nio.ByteBuffer;
+
+import org.apache.carbondata.common.logging.LogService;
+import org.apache.carbondata.common.logging.LogServiceFactory;
+
+import com.github.luben.zstd.Zstd;
+
+public class ZstdCompressor implements Compressor, Serializable {
+  private static final long serialVersionUID = 8181578747306832771L;
+  private static final LogService LOGGER =
+      LogServiceFactory.getLogService(ZstdCompressor.class.getName());
+  private static final int COMPRESS_LEVEL = 3;
+
+  public ZstdCompressor() {
+  }
+
+  @Override
+  public String getName() {
+    return "zstd";
+  }
+
+  @Override
+  public byte[] compressByte(byte[] unCompInput) {
+    return Zstd.compress(unCompInput, 3);
+  }
+
+  @Override
+  public byte[] compressByte(byte[] unCompInput, int byteSize) {
+    return Zstd.compress(unCompInput, COMPRESS_LEVEL);
+  }
+
+  @Override
+  public byte[] unCompressByte(byte[] compInput) {
+    long estimatedUncompressLength = Zstd.decompressedSize(compInput);
+    return Zstd.decompress(compInput, (int) estimatedUncompressLength);
+  }
+
+  @Override
+  public byte[] unCompressByte(byte[] compInput, int offset, int length) {
+    // todo: how to avoid memory copy
+    byte[] dstBytes = new byte[length];
+    System.arraycopy(compInput, offset, dstBytes, 0, length);
+    return unCompressByte(dstBytes);
+  }
+
+  @Override
+  public byte[] compressShort(short[] unCompInput) {
+    // short use 2 bytes
+    byte[] unCompArray = new byte[unCompInput.length * 2];
+    ByteBuffer unCompBuffer = ByteBuffer.wrap(unCompArray);
+    for (short input : unCompInput) {
+      unCompBuffer.putShort(input);
+    }
+    return Zstd.compress(unCompBuffer.array(), COMPRESS_LEVEL);
+  }
+
+  @Override
+  public short[] unCompressShort(byte[] compInput, int offset, int length) {
+    byte[] unCompArray = unCompressByte(compInput, offset, length);
+    ByteBuffer unCompBuffer = ByteBuffer.wrap(unCompArray);
+    short[] shorts = new short[unCompArray.length / 2];
+    for (int i = 0; i < shorts.length; i++) {
+      shorts[i] = unCompBuffer.getShort();
+    }
+    return shorts;
+  }
+
+  @Override
+  public byte[] compressInt(int[] unCompInput) {
+    // int use 4 bytes
+    byte[] unCompArray = new byte[unCompInput.length * 4];
+    ByteBuffer unCompBuffer = ByteBuffer.wrap(unCompArray);
+    for (int input : unCompInput) {
+      unCompBuffer.putInt(input);
+    }
+    return Zstd.compress(unCompBuffer.array(), COMPRESS_LEVEL);
+  }
+
+  @Override
+  public int[] unCompressInt(byte[] compInput, int offset, int length) {
+    byte[] unCompArray = unCompressByte(compInput, offset, length);
+    ByteBuffer unCompBuffer = ByteBuffer.wrap(unCompArray);
+    int[] ints = new int[unCompArray.length / 4];
+    for (int i = 0; i < ints.length; i++) {
+      ints[i] = unCompBuffer.getInt();
+    }
+    return ints;
+  }
--- End diff --

You can try the following code style to convert the uncompressed byte result to the target datatype (taking int as an example):
```
byte[] unCompArray = unCompressByte(compInput, offset, length);
IntBuffer buf = ByteBuffer.wrap(unCompArray).asIntBuffer();
int[] dest = new int[buf.remaining()];
buf.get(dest);
return dest;
```
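
For reference, here is a self-contained sketch (an illustration, not part of the PR) of why the bulk transfer works: `asIntBuffer()` creates an int view over the wrapped bytes, and a single `get(int[])` drains it without a per-element `getInt()` loop:

```
import java.nio.ByteBuffer;
import java.nio.IntBuffer;
import java.util.Arrays;

public class IntBufferDemo {
  public static void main(String[] args) {
    // pack ints into bytes, mirroring what compressInt does before compressing
    int[] src = {1, 2, 3, 4};
    ByteBuffer packed = ByteBuffer.allocate(src.length * Integer.BYTES);
    for (int v : src) {
      packed.putInt(v);
    }
    // bulk-transfer back to int[] via an IntBuffer view, no explicit loop
    IntBuffer buf = ByteBuffer.wrap(packed.array()).asIntBuffer();
    int[] dest = new int[buf.remaining()];
    buf.get(dest);
    System.out.println(Arrays.toString(dest)); // prints [1, 2, 3, 4]
  }
}
```

The same pattern applies to the short and long overloads via `asShortBuffer()` and `asLongBuffer()`.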


---


[GitHub] carbondata pull request #2628: [CARBONDATA-2851] Support zstd as column comp...

2018-08-13 Thread xuchuanyin
GitHub user xuchuanyin reopened a pull request:

https://github.com/apache/carbondata/pull/2628

[CARBONDATA-2851] Support zstd as column compressor in final store

1. add a zstd compressor for compressing column data
2. add zstd support in thrift
3. the legacy store is not considered in this commit
4. since zstd does not support zero-copy while compressing, off-heap memory will not take effect for zstd

A simple test with 1.2 GB of raw CSV data shows the size (in MB) of the final store with different compressors; e.g. with local dictionary enabled, 335 MB down to 207 MB is a (335 - 207) / 335 ≈ 38.2% reduction:

| local dictionary | snappy | zstd | size reduced |
| --- | --- | --- | --- |
| local dict enabled | 335 | 207 | 38.2% |
| local dict disabled | 375 | 225 | 40.0% |
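
As a usage illustration, here is a minimal round-trip sketch against the `ZstdCompressor` from the diff above (assuming zstd-jni, i.e. `com.github.luben:zstd-jni`, is on the classpath; this is not a test shipped with the patch):

```
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

import org.apache.carbondata.core.datastore.compression.ZstdCompressor;

public class ZstdRoundTripSketch {
  public static void main(String[] args) {
    ZstdCompressor compressor = new ZstdCompressor();
    byte[] raw = "some column page content".getBytes(StandardCharsets.UTF_8);
    // compressByte delegates to Zstd.compress(input, level) with level 3
    byte[] compressed = compressor.compressByte(raw);
    // unCompressByte reads the decompressed size from the zstd frame header
    byte[] restored = compressor.unCompressByte(compressed);
    System.out.println("round trip ok: " + Arrays.equals(raw, restored));
  }
}
```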

Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:

 - [ ] Any interfaces changed?

 - [ ] Any backward compatibility impacted?

 - [ ] Document update required?

 - [ ] Testing done
       Please provide details on
       - Whether new unit test cases have been added or why no new tests are required?
       - How it is tested? Please attach test report.
       - Is it a performance related change? Please attach the performance test report.
       - Any additional information to help reviewers in testing this change.

 - [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.



You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/xuchuanyin/carbondata 0810_support_zstd_compressor_final_store

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/2628.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2628


commit bcd3d8f9c64f197668d46d29af1aa2ee2d956ceb
Author: xuchuanyin 
Date:   2018-08-10T14:02:57Z

Support zstd as column compressor in final store

1. add zstd compressor for compressing column data
2. add zstd support in thrift
3. legacy store is not considered in this commit
4. since zstd does not support zero-copy while compressing, offheap will not take effect for zstd




---


[GitHub] carbondata pull request #2628: [CARBONDATA-2851] Support zstd as column comp...

2018-08-13 Thread xuchuanyin
Github user xuchuanyin closed the pull request at:

https://github.com/apache/carbondata/pull/2628


---