[GitHub] carbondata issue #2633: [WIP] Handle clearing memory only in case of failure...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2633 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7903/ ---
[GitHub] carbondata issue #2615: [HOTFIX] [presto] presto integration code cleanup
Github user ajantha-bhat commented on the issue: https://github.com/apache/carbondata/pull/2615 @bhavya411: please review ---
[GitHub] carbondata issue #2634: [CARBONDATA-2854] Release table status file lock bef...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/2634 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/6260/ ---
[GitHub] carbondata pull request #2634: [CARBONDATA-2854] Release table status file l...
GitHub user zzcclp opened a pull request: https://github.com/apache/carbondata/pull/2634

[CARBONDATA-2854] Release table status file lock before deleting physical files

Release the table status file lock before deleting physical files; otherwise the table status file stays locked for the whole deletion, which may take a long time, and other operations will fail to acquire the table status file lock.

Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:
- [ ] Any interfaces changed?
- [ ] Any backward compatibility impacted?
- [ ] Document update required?
- [ ] Testing done. Please provide details on
  - Whether new unit test cases have been added or why no new tests are required?
  - How it is tested? Please attach test report.
  - Is it a performance related change? Please attach the performance test report.
  - Any additional information to help reviewers in testing this change.
- [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/zzcclp/carbondata CARBONDATA-2854

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/2634.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2634

commit 7b26387c77646eeb013301e28eecf93d93a6c0a3
Author: Zhang Zhichao <441586683@...>
Date: 2018-08-14T05:17:21Z

[CARBONDATA-2854] Release table status file lock before deleting physical files when executing the 'clean files' command

Release the table status file lock before deleting physical files; otherwise the table status file stays locked for the whole deletion, which may take a long time, and other operations will fail to acquire the table status file lock.

---
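For illustration, the reordering this PR describes boils down to: update the table status under the lock, release the lock, and only then perform the slow physical deletion. A minimal, self-contained Java sketch of that pattern, with a plain ReentrantLock standing in for CarbonData's table status file lock and the status-file rewrite elided (all names below are illustrative, not the PR's actual code):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

public class CleanFilesSketch {
  // Stand-in for the table status file lock.
  private static final ReentrantLock tableStatusLock = new ReentrantLock();

  static void cleanFiles(List<Path> staleFiles) throws Exception {
    tableStatusLock.lock();
    try {
      // Hold the lock only while the table status file is rewritten
      // to mark the segments as deleted. (status update elided)
    } finally {
      // Release BEFORE the physical delete, so concurrent operations
      // can acquire the table status lock in the meantime.
      tableStatusLock.unlock();
    }
    // Deleting many physical files may take a long time; it now runs
    // outside the critical section.
    for (Path file : staleFiles) {
      Files.deleteIfExists(file);
    }
  }
}
```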
[GitHub] carbondata issue #2633: [WIP] Handle clearing memory only in case of failure...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/2633 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/6259/ ---
[GitHub] carbondata issue #2633: [WIP] Handle clearing memory only in case of failure...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/2633 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/6258/ ---
[GitHub] carbondata issue #2619: [CARBONDATA-2819] Fixed cannot drop preagg datamap o...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2619 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6625/ ---
[GitHub] carbondata issue #2619: [CARBONDATA-2819] Fixed cannot drop preagg datamap o...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2619 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7901/ ---
[GitHub] carbondata issue #2619: [CARBONDATA-2819] Fixed cannot drop preagg datamap o...
Github user Sssan520 commented on the issue: https://github.com/apache/carbondata/pull/2619 retest this please ---
[GitHub] carbondata pull request #2391: [CARBONDATA-2625] Optimize the performance of...
Github user xubo245 closed the pull request at: https://github.com/apache/carbondata/pull/2391 ---
[GitHub] carbondata issue #2391: [CARBONDATA-2625] Optimize the performance of Carbon...
Github user xubo245 commented on the issue: https://github.com/apache/carbondata/pull/2391 @sraghunandan ok ---
[jira] [Created] (CARBONDATA-2854) Release table status file lock before deleting physical files when executing 'clean files' command
Zhichao Zhang created CARBONDATA-2854:
--------------------------------------

             Summary: Release table status file lock before deleting physical files when executing 'clean files' command
                 Key: CARBONDATA-2854
                 URL: https://issues.apache.org/jira/browse/CARBONDATA-2854
             Project: CarbonData
          Issue Type: Bug
          Components: spark-integration
    Affects Versions: 1.4.0, 1.5.0
            Reporter: Zhichao Zhang
            Assignee: Zhichao Zhang
             Fix For: 1.5.0

Release the table status file lock before deleting physical files when executing the 'clean files' command; otherwise the table status file stays locked for the whole deletion, which may take a long time, and other operations will fail to acquire the table status file lock.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[GitHub] carbondata issue #2633: [WIP] Handle clearing memory only in case of failure...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2633 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6624/ ---
[GitHub] carbondata issue #2633: [WIP] Handle clearing memory only in case of failure...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2633 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7900/ ---
[GitHub] carbondata issue #2633: [WIP] Handle clearing memory only in case of failure...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/2633 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/6257/ ---
[GitHub] carbondata pull request #2633: [WIP] Handle clearing memory only in case of ...
GitHub user dhatchayani opened a pull request: https://github.com/apache/carbondata/pull/2633

[WIP] Handle clearing memory only in case of failures; in case of success/completion, the system should take care of it internally

Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:
- [ ] Any interfaces changed?
- [ ] Any backward compatibility impacted?
- [ ] Document update required?
- [ ] Testing done. Please provide details on
  - Whether new unit test cases have been added or why no new tests are required?
  - How it is tested? Please attach test report.
  - Is it a performance related change? Please attach the performance test report.
  - Any additional information to help reviewers in testing this change.
- [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/dhatchayani/carbondata ListenersRDD

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/2633.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2633

commit 2a25416f6a29e9edb5f8e08f4c8ce0194d5625b0
Author: dhatchayani
Date: 2018-08-13T11:29:06Z

[WIP] Handle clearing memory only in case of failures; in case of success/completion, the system should take care of it internally

---
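The intent, then, is that the failure path frees memory explicitly while the success path leaves release to the framework's own completion handling. A minimal, self-contained sketch of that pattern (the interface and class names are illustrative, not CarbonData's actual listener API):

```java
public class FailureOnlyCleanupSketch {
  interface TaskResources {
    void free();
  }

  static void runTask(Runnable task, TaskResources resources) {
    try {
      task.run();
      // Success/completion: do NOT free here. The system's own
      // task-completion hook owns the resources and releases them;
      // freeing them here as well could release memory twice.
    } catch (RuntimeException e) {
      // Failure: the completion hook may never fire, so clean up now.
      resources.free();
      throw e;
    }
  }
}
```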
[GitHub] carbondata issue #2632: [CARBONDATA-2206] Enhanced document on Lucene datama...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2632 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7899/ ---
[GitHub] carbondata pull request #2628: [CARBONDATA-2851] Support zstd as column comp...
Github user kevinjmh commented on a diff in the pull request: https://github.com/apache/carbondata/pull/2628#discussion_r209545381

--- Diff: core/src/main/java/org/apache/carbondata/core/datastore/compression/ZstdCompressor.java ---
@@ -0,0 +1,200 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * (standard Apache License 2.0 header)
+ */
+
+package org.apache.carbondata.core.datastore.compression;
+
+import java.io.IOException;
+import java.io.Serializable;
+import java.nio.ByteBuffer;
+
+import org.apache.carbondata.common.logging.LogService;
+import org.apache.carbondata.common.logging.LogServiceFactory;
+
+import com.github.luben.zstd.Zstd;
+
+public class ZstdCompressor implements Compressor, Serializable {
+  private static final long serialVersionUID = 8181578747306832771L;
+  private static final LogService LOGGER =
+      LogServiceFactory.getLogService(ZstdCompressor.class.getName());
+  private static final int COMPRESS_LEVEL = 3;
+
+  public ZstdCompressor() {
+  }
+
+  @Override
+  public String getName() {
+    return "zstd";
+  }
+
+  @Override
+  public byte[] compressByte(byte[] unCompInput) {
+    return Zstd.compress(unCompInput, 3);
+  }
+
+  @Override
+  public byte[] compressByte(byte[] unCompInput, int byteSize) {
+    return Zstd.compress(unCompInput, COMPRESS_LEVEL);
+  }
+
+  @Override
+  public byte[] unCompressByte(byte[] compInput) {
+    long estimatedUncompressLength = Zstd.decompressedSize(compInput);
+    return Zstd.decompress(compInput, (int) estimatedUncompressLength);
+  }
+
+  @Override
+  public byte[] unCompressByte(byte[] compInput, int offset, int length) {
+    // todo: how to avoid memory copy
+    byte[] dstBytes = new byte[length];
+    System.arraycopy(compInput, offset, dstBytes, 0, length);
+    return unCompressByte(dstBytes);
+  }
+
+  @Override
+  public byte[] compressShort(short[] unCompInput) {
+    // short use 2 bytes
+    byte[] unCompArray = new byte[unCompInput.length * 2];
+    ByteBuffer unCompBuffer = ByteBuffer.wrap(unCompArray);
+    for (short input : unCompInput) {
+      unCompBuffer.putShort(input);
+    }
+    return Zstd.compress(unCompBuffer.array(), COMPRESS_LEVEL);
+  }
+
+  @Override
+  public short[] unCompressShort(byte[] compInput, int offset, int length) {
+    byte[] unCompArray = unCompressByte(compInput, offset, length);
+    ByteBuffer unCompBuffer = ByteBuffer.wrap(unCompArray);
+    short[] shorts = new short[unCompArray.length / 2];
+    for (int i = 0; i < shorts.length; i++) {
+      shorts[i] = unCompBuffer.getShort();
+    }
+    return shorts;
+  }
+
+  @Override
+  public byte[] compressInt(int[] unCompInput) {
+    // int use 4 bytes
+    byte[] unCompArray = new byte[unCompInput.length * 4];
+    ByteBuffer unCompBuffer = ByteBuffer.wrap(unCompArray);
+    for (int input : unCompInput) {
+      unCompBuffer.putInt(input);
+    }
+    return Zstd.compress(unCompBuffer.array(), COMPRESS_LEVEL);
+  }
+
+  @Override
+  public int[] unCompressInt(byte[] compInput, int offset, int length) {
+    byte[] unCompArray = unCompressByte(compInput, offset, length);
+    ByteBuffer unCompBuffer = ByteBuffer.wrap(unCompArray);
+    int[] ints = new int[unCompArray.length / 4];
+    for (int i = 0; i < ints.length; i++) {
+      ints[i] = unCompBuffer.getInt();
+    }
+    return ints;
+  }
+
+  @Override
+  public byte[] compressLong(long[] unCompInput) {
+    // long use 8 bytes
+    byte[] unCompArray = new byte[unCompInput.length * 8];
+    ByteBuffer unCompBuffer = ByteBuffer.wrap(unCompArray);
+    for (long input : unCompInput) {
+      unCompBuffer.putLong(input);
+    }
+    return Zstd.compress(unCompBuffer.array(), COMPRESS_LEVEL);
+  }
+
+  @Override
+  public long[] unCompressLong(byte[] compInput, int

---
[GitHub] carbondata pull request #2628: [CARBONDATA-2851] Support zstd as column comp...
Github user kevinjmh commented on a diff in the pull request: https://github.com/apache/carbondata/pull/2628#discussion_r209543118

--- Diff: core/src/main/java/org/apache/carbondata/core/datastore/compression/ZstdCompressor.java ---
(quotes the same ZstdCompressor.java hunk as above, ending at unCompressInt)
--- End diff --

You can try the following code style to convert the uncompressed byte result to the target datatype (take int for example):

```
byte[] unCompArray = unCompressByte(compInput, offset, length);
IntBuffer buf = ByteBuffer.wrap(unCompArray).asIntBuffer();
int[] dest = new int[buf.remaining()];
buf.get(dest);
return dest;
```

---
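The suggested view-buffer style extends to the other fixed-width types as well. A small, self-contained sketch of the short[] case, under the same assumption that the uncompressed byte length is an exact multiple of the element width (the class and method names here are illustrative only):

```java
import java.nio.ByteBuffer;
import java.nio.ShortBuffer;

public class ShortConvertSketch {
  static short[] toShorts(byte[] unCompArray) {
    // View the byte[] as shorts and bulk-copy, instead of looping
    // over getShort() element by element.
    ShortBuffer buf = ByteBuffer.wrap(unCompArray).asShortBuffer();
    short[] dest = new short[buf.remaining()];
    buf.get(dest);
    return dest;
  }
}
```

The same shape works with asIntBuffer() and asLongBuffer() for the int[] and long[] variants.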
[GitHub] carbondata issue #2628: [CARBONDATA-2851] Support zstd as column compressor ...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2628 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6621/ ---
[GitHub] carbondata issue #2632: [CARBONDATA-2206] Enhanced document on Lucene datama...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/2632 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/6256/ ---
[GitHub] carbondata issue #2607: [CARBONDATA-2818] Presto Upgrade to 0.206
Github user chenliang613 commented on the issue: https://github.com/apache/carbondata/pull/2607 @bhavya411 I tested this PR; the performance (simple aggregation) shows no improvement (0.206 compared to 0.187). I just checked 0.207 and 0.208: they fix many memory issues, so I propose upgrading to 0.208 for the CarbonData integration. ---
[GitHub] carbondata issue #2632: [CARBONDATA-2206] Enhanced document on Lucene datama...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2632 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6623/ ---
[GitHub] carbondata issue #2632: [CARBONDATA-2206] Enhanced document on Lucene datama...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/2632 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/6255/ ---
[GitHub] carbondata issue #2628: [CARBONDATA-2851] Support zstd as column compressor ...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2628 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7897/ ---
[jira] [Created] (CARBONDATA-2853) Add file-level min/max index for streaming segment
QiangCai created CARBONDATA-2853:
---------------------------------

             Summary: Add file-level min/max index for streaming segment
                 Key: CARBONDATA-2853
                 URL: https://issues.apache.org/jira/browse/CARBONDATA-2853
             Project: CarbonData
          Issue Type: Sub-task
    Affects Versions: 1.5.0
            Reporter: QiangCai

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Assigned] (CARBONDATA-2853) Add file-level min/max index for streaming segment
[ https://issues.apache.org/jira/browse/CARBONDATA-2853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

QiangCai reassigned CARBONDATA-2853:
------------------------------------
    Assignee: QiangCai

> Add file-level min/max index for streaming segment
> --------------------------------------------------
>
>                 Key: CARBONDATA-2853
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-2853
>             Project: CarbonData
>          Issue Type: Sub-task
>    Affects Versions: 1.5.0
>            Reporter: QiangCai
>            Assignee: QiangCai
>            Priority: Major
>

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[GitHub] carbondata issue #2632: [CARBONDATA-2206] Enhanced document on Lucene datama...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2632 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6620/ ---
[GitHub] carbondata issue #2632: [CARBONDATA-2206] Enhanced document on Lucene datama...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2632 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7896/ ---
[jira] [Created] (CARBONDATA-2852) support zstd on legacy store
xuchuanyin created CARBONDATA-2852:
-----------------------------------

             Summary: support zstd on legacy store
                 Key: CARBONDATA-2852
                 URL: https://issues.apache.org/jira/browse/CARBONDATA-2852
             Project: CarbonData
          Issue Type: Sub-task
            Reporter: xuchuanyin
            Assignee: xuchuanyin

Currently CarbonData reads the column compressor from a system property. This causes problems on a legacy store if the compressor has since been changed; it should instead read that information from the metadata in the data files.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
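Concretely, the fix direction is to make the reader trust what the data file itself recorded rather than the current global setting. A minimal sketch of that per-file lookup (the field and class names below are hypothetical, not the actual thrift schema):

```java
public class CompressorSelectionSketch {
  // Hypothetical per-file metadata: the compressor name is persisted
  // when the data file is written, e.g. "snappy" or "zstd".
  static class DataFileMeta {
    String compressorName;
  }

  static String compressorForRead(DataFileMeta meta, String systemDefault) {
    // Prefer what the file recorded; fall back to the system property
    // only for legacy files written before the name was persisted.
    return meta.compressorName != null ? meta.compressorName : systemDefault;
  }
}
```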
[jira] [Updated] (CARBONDATA-2850) Support zstd as column compressor in final store
[ https://issues.apache.org/jira/browse/CARBONDATA-2850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

xuchuanyin updated CARBONDATA-2850:
-----------------------------------
    Description:
ZSTD has a better compression ratio than snappy, and its compress/decompress rate is acceptable compared with snappy. After we introduce zstd as the column compressor, the size of the CarbonData final store will be reduced.

> Support zstd as column compressor in final store
> ------------------------------------------------
>
>                 Key: CARBONDATA-2850
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-2850
>             Project: CarbonData
>          Issue Type: Improvement
>            Reporter: xuchuanyin
>            Assignee: xuchuanyin
>            Priority: Major
>
> ZSTD has a better compression ratio than snappy, and its compress/decompress rate is acceptable compared with snappy.
> After we introduce zstd as the column compressor, the size of the CarbonData final store will be reduced.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Created] (CARBONDATA-2851) support zstd as column compressor
xuchuanyin created CARBONDATA-2851:
-----------------------------------

             Summary: support zstd as column compressor
                 Key: CARBONDATA-2851
                 URL: https://issues.apache.org/jira/browse/CARBONDATA-2851
             Project: CarbonData
          Issue Type: Sub-task
            Reporter: xuchuanyin
            Assignee: xuchuanyin

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[GitHub] carbondata pull request #2628: [CARBONDATA-2851] Support zstd as column comp...
GitHub user xuchuanyin reopened a pull request: https://github.com/apache/carbondata/pull/2628

[CARBONDATA-2851] Support zstd as column compressor in final store

1. Add a zstd compressor for compressing column data.
2. Add zstd support in thrift.
3. The legacy store is not considered in this commit.
4. Since zstd does not support zero-copy while compressing, offheap will not take effect for zstd.

A simple test with 1.2 GB of raw CSV data shows the size (in MB) of the final store with each compressor:

| local dictionary | snappy | zstd | size reduced |
| --- | --- | --- | --- |
| local dict enabled | 335 | 207 | 38.2% |
| local dict disabled | 375 | 225 | 40% |

Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:
- [ ] Any interfaces changed?
- [ ] Any backward compatibility impacted?
- [ ] Document update required?
- [ ] Testing done. Please provide details on
  - Whether new unit test cases have been added or why no new tests are required?
  - How it is tested? Please attach test report.
  - Is it a performance related change? Please attach the performance test report.
  - Any additional information to help reviewers in testing this change.
- [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/xuchuanyin/carbondata 0810_support_zstd_compressor_final_store

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/2628.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2628

commit bcd3d8f9c64f197668d46d29af1aa2ee2d956ceb
Author: xuchuanyin
Date: 2018-08-10T14:02:57Z

Support zstd as column compressor in final store

1. Add a zstd compressor for compressing column data.
2. Add zstd support in thrift.
3. The legacy store is not considered in this commit.
4. Since zstd does not support zero-copy while compressing, offheap will not take effect for zstd.

---
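For context, the three zstd-jni calls the posted compressor relies on (Zstd.compress, Zstd.decompressedSize, Zstd.decompress) round-trip like this. A small demo, assuming the com.github.luben:zstd-jni dependency is on the classpath (the sizes printed will vary with the input):

```java
import java.nio.charset.StandardCharsets;

import com.github.luben.zstd.Zstd;

public class ZstdRoundTrip {
  public static void main(String[] args) {
    byte[] raw = ("column data, column data, column data, "
        + "column data, column data, column data")
        .getBytes(StandardCharsets.UTF_8);

    // Compression level 3, the same level hard-coded in the PR.
    byte[] compressed = Zstd.compress(raw, 3);

    // zstd records the original size in its frame header, so the
    // destination buffer can be sized before decompressing.
    long originalSize = Zstd.decompressedSize(compressed);
    byte[] restored = Zstd.decompress(compressed, (int) originalSize);

    System.out.println(raw.length + " bytes -> " + compressed.length
        + " bytes -> " + restored.length + " bytes");
  }
}
```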
[GitHub] carbondata pull request #2628: [CARBONDATA-2851] Support zstd as column comp...
Github user xuchuanyin closed the pull request at: https://github.com/apache/carbondata/pull/2628 ---
[jira] [Created] (CARBONDATA-2850) Support zstd as column compressor in final store
xuchuanyin created CARBONDATA-2850:
-----------------------------------

             Summary: Support zstd as column compressor in final store
                 Key: CARBONDATA-2850
                 URL: https://issues.apache.org/jira/browse/CARBONDATA-2850
             Project: CarbonData
          Issue Type: Improvement
            Reporter: xuchuanyin
            Assignee: xuchuanyin

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[GitHub] carbondata issue #2628: WIP: Support zstd as column compressor in final stor...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/2628 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/6254/ ---
[GitHub] carbondata issue #2632: [CARBONDATA-2206] Enhanced document on Lucene datama...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/2632 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/6253/ ---
[GitHub] carbondata issue #2628: WIP: Support zstd as column compressor in final stor...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2628 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/6618/ ---
[GitHub] carbondata issue #2628: WIP: Support zstd as column compressor in final stor...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2628 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/7894/ ---
[GitHub] carbondata issue #2632: [CARBONDATA-2206] Enhanced document on Lucene datama...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/2632 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/6252/ ---
[GitHub] carbondata pull request #2632: [CARBONDATA-2206] Enhanced document on Lucene...
GitHub user praveenmeenakshi56 opened a pull request: https://github.com/apache/carbondata/pull/2632

[CARBONDATA-2206] Enhanced document on Lucene DataMap support

Enhanced documentation of the Lucene DataMap.

- [ ] Any interfaces changed? NA
- [ ] Any backward compatibility impacted? NA
- [ ] Document update required? Document updated.
- [ ] Testing done. Please provide details on
  - Whether new unit test cases have been added or why no new tests are required?
  - How it is tested? Please attach test report.
  - Is it a performance related change? Please attach the performance test report.
  - Any additional information to help reviewers in testing this change. NA
- [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA. NA

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/praveenmeenakshi56/carbondata lucene_doc

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/carbondata/pull/2632.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2632

commit 15f2929f0dafbf7b7d7f8a62c5ae9d2f66955528
Author: praveenmeenakshi56
Date: 2018-08-13T07:15:02Z

Updated document on Lucene DataMap support

---
[GitHub] carbondata pull request #2423: [CARBONDATA-2530][MV] Fix wrong data displaye...
Github user xubo245 closed the pull request at: https://github.com/apache/carbondata/pull/2423 ---
[GitHub] carbondata issue #2423: [CARBONDATA-2530][MV] Fix wrong data displayed when ...
Github user xubo245 commented on the issue: https://github.com/apache/carbondata/pull/2423 @ravipesala ok ---
[GitHub] carbondata issue #2628: WIP: Support zstd as column compressor in final stor...
Github user brijoobopanna commented on the issue: https://github.com/apache/carbondata/pull/2628 retest this please ---