Github user QiangCai commented on the issue:
https://github.com/apache/carbondata/pull/2670
@xuchuanyin
Yes, better to check the limitation.
What's your opinion about how to fix it?
---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2670
Build Success with Spark 2.2.1, Please check CI
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/175/
---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2670
Build Success with Spark 2.1.0, Please check CI
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/8246/
---
Github user xuchuanyin commented on the issue:
https://github.com/apache/carbondata/pull/2670
Problems that may be overlooked during loading:
1. We use a buffer to store one row, and the buffer is 2MB for now
2. For a column page, we compress it as a byte array and its length
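The two limits above can be sketched roughly as follows. This is a hypothetical illustration, not CarbonData's actual code: the class and method names are invented, and the 2MB figure is taken from the comment above. It shows why both checks matter: the row buffer has a fixed capacity, and a compressed page held in a single Java byte array can never exceed `Integer.MAX_VALUE` bytes, since array lengths are `int`.

```java
// Hypothetical sketch of the two size limits described above
// (names and structure are illustrative, not from CarbonData).
public class SizeLimitCheck {

    // Assumed fixed row-buffer capacity of 2MB, per the comment above.
    static final int ROW_BUFFER_SIZE = 2 * 1024 * 1024;

    // A serialized row must fit into the fixed row buffer.
    static boolean rowFits(byte[] serializedRow) {
        return serializedRow.length <= ROW_BUFFER_SIZE;
    }

    // A compressed column page is stored in one Java byte array, so its
    // length is bounded by Integer.MAX_VALUE (array indices are int).
    static boolean pageFits(long compressedPageLength) {
        return compressedPageLength <= Integer.MAX_VALUE;
    }

    public static void main(String[] args) {
        System.out.println(rowFits(new byte[1024]));   // small row fits
        System.out.println(pageFits(3_000_000_000L));  // ~3GB page does not
    }
}
```

Validating both conditions up front would let loading fail with a clear error instead of an overflow or buffer exception deep in the write path.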
---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2670
Build Success with Spark 2.2.1, Please check CI
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/73/
---
Github user CarbonDataQA commented on the issue:
https://github.com/apache/carbondata/pull/2670
Build Success with Spark 2.1.0, Please check CI
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/8144/
---
Github user ravipesala commented on the issue:
https://github.com/apache/carbondata/pull/2670
SDV Build Fail , Please check CI
http://144.76.159.231:8080/job/ApacheSDVTests/6458/
---