Github user kumarvishal09 commented on the issue:
https://github.com/apache/incubator-carbondata/pull/659
@watermen you can move all the schema-related properties to some wrapper
class, and in the wrapper class implement hashCode and equals based on the dimension
columns (including complex …
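The wrapper class suggested above could look like the following minimal sketch. All names here (`SegmentPropertiesWrapper`, the use of column names as the schema fingerprint) are illustrative assumptions, not CarbonData's actual API; the point is only that equality and hashing derive from the schema-defining fields, so segments with the same schema compare equal.

```java
import java.util.List;
import java.util.Objects;

// Hypothetical sketch of the suggested wrapper: equals/hashCode are based only
// on the dimension columns (complex columns would be included in this list),
// so two segments with the same schema produce equal keys.
final class SegmentPropertiesWrapper {
    private final List<String> dimensionColumns; // stand-in for real column metadata

    SegmentPropertiesWrapper(List<String> dimensionColumns) {
        this.dimensionColumns = dimensionColumns;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof SegmentPropertiesWrapper)) return false;
        SegmentPropertiesWrapper other = (SegmentPropertiesWrapper) o;
        return dimensionColumns.equals(other.dimensionColumns);
    }

    @Override
    public int hashCode() {
        return Objects.hash(dimensionColumns);
    }
}
```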
Github user watermen commented on the issue:
https://github.com/apache/incubator-carbondata/pull/659
@kumarvishal09 I agree with your idea. And I think we should keep a
static HashMap, whose key is (This is my first …
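The static map proposed here could be sketched as below. This is a hedged illustration under assumed names (`SegmentPropertiesCache`, a column-name list standing in for the real key type, `Object` standing in for `SegmentProperties`); a `ConcurrentHashMap` is used instead of a plain `HashMap` since a shared static cache on the driver may be hit concurrently.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of a static schema-keyed cache: segments whose schemas
// produce equal keys share a single cached properties object.
final class SegmentPropertiesCache {
    // Key: schema fingerprint (dimension column names here);
    // value: the shared properties object (Object is a stand-in).
    private static final Map<List<String>, Object> CACHE = new ConcurrentHashMap<>();

    static Object getOrCreate(List<String> dimensionColumns) {
        // computeIfAbsent builds the object once per distinct schema and
        // returns the cached instance on every later call.
        return CACHE.computeIfAbsent(dimensionColumns,
            cols -> new Object() /* stand-in for building SegmentProperties */);
    }
}
```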
Github user kumarvishal09 commented on the issue:
https://github.com/apache/incubator-carbondata/pull/659
@watermen @QiangCai
1. On the driver side, the segment is loaded based on task ID for each segment
... here, across task IDs for the same segment, we can load only one segment
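The per-segment reuse described above can be sketched as follows. The class and method names are hypothetical (not CarbonData's actual loader); the sketch only shows that when loading happens per (segmentId, taskId), keying the shared state by segmentId alone means all task IDs of one segment reuse a single object.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: blocks are loaded per (segmentId, taskId) on the
// driver, but the properties depend only on the segment, so one object is
// shared across all task IDs of the same segment.
final class DriverSegmentLoader {
    private final Map<String, Object> propertiesBySegment = new HashMap<>();

    // Called once per (segmentId, taskId); returns the shared per-segment object.
    Object loadForTask(String segmentId, String taskId) {
        return propertiesBySegment.computeIfAbsent(segmentId,
            id -> new Object() /* stand-in for parsing the segment schema */);
    }
}
```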
Github user watermen commented on the issue:
https://github.com/apache/incubator-carbondata/pull/659
@QiangCai When we query a big table, the memory pressure on the driver side
is greater than on the executor side. So I think we can first reuse the segment
properties on the driver side in this …
Github user QiangCai commented on the issue:
https://github.com/apache/incubator-carbondata/pull/659
@kumarvishal09 the dump picture is the driver-side tree.
@watermen this PR only implements reusing segment properties on the driver
side. Can you try to do it on the executor side? About the building …
Github user CarbonDataQA commented on the issue:
https://github.com/apache/incubator-carbondata/pull/659
Build Success with Spark 1.6.2, Please check CI
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1359/
---
If your project is set up for it, you can reply to this email and
Github user CarbonDataQA commented on the issue:
https://github.com/apache/incubator-carbondata/pull/659
Build Failed with Spark 1.6.2, Please check CI
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1356/
Github user watermen commented on the issue:
https://github.com/apache/incubator-carbondata/pull/659
@kumarvishal09
1. On the driver side, one segment has n (the number of nodes)
SegmentProperties objects. You can see `SegmentTaskIndexStore.loadBlocks`
or ask @QiangCai for …
Github user kumarvishal09 commented on the issue:
https://github.com/apache/incubator-carbondata/pull/659
@watermen Is the attached heap dump of the executor-side BTree or the driver side?
1. Because on the driver side the BTree is loaded per segment, and one segment
will have only one segment …
Github user CarbonDataQA commented on the issue:
https://github.com/apache/incubator-carbondata/pull/659
Build Failed with Spark 1.6.2, Please check CI
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1351/
Github user CarbonDataQA commented on the issue:
https://github.com/apache/incubator-carbondata/pull/659
Build Success with Spark 1.6.2, Please check CI
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1335/
Github user CarbonDataQA commented on the issue:
https://github.com/apache/incubator-carbondata/pull/659
Build Failed with Spark 1.6.2, Please check CI
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1334/
Github user watermen commented on the issue:
https://github.com/apache/incubator-carbondata/pull/659
@jackylk @QiangCai I have already modified the code with the "store one
SegmentProperties object per segment" solution.