> On 18 May 2017, at 3:31 PM, Kumar Vishal wrote:
>
> Hi Ravi,
>
> I have a few queries related to both the solutions.
>
> 1. To reduce the Java heap for the B-tree, we can remove the B-tree data
> structure, use a simple single array, and do binary search on it. We
> should also move these cached arrays to unsafe (off-heap/on-heap) memory
> to reduce the burden on GC.
Github user asfgit closed the pull request at:
https://github.com/apache/carbondata-site/pull/40
---
sehriff created CARBONDATA-1067:
---
Summary: can't update/delete successfully using
spark2.1+carbon1.1.0
Key: CARBONDATA-1067
URL: https://issues.apache.org/jira/browse/CARBONDATA-1067
Project: CarbonData
Rahul Kumar created CARBONDATA-1066:
---
Summary: measure shouldn't be supported in no_inverted_index
Key: CARBONDATA-1066
URL: https://issues.apache.org/jira/browse/CARBONDATA-1066
Project: CarbonData
GitHub user PallaviSingh1992 opened a pull request:
https://github.com/apache/carbondata-site/pull/40
Add details for CarbonData 1.1.0 Release
You can merge this pull request into a Git repository by running:
$ git pull
Ravindra Pesala created CARBONDATA-1065:
---
Summary: Implement set command in carbon to update carbon
properties dynamically
Key: CARBONDATA-1065
URL: https://issues.apache.org/jira/browse/CARBONDATA-1065
Congrats David.
-Regards
Kumar Vishal
On Thu, May 18, 2017 at 12:14 PM, praveen adlakha wrote:
> Congrats Cai Qiang ...
>
> On Thu, May 18, 2017 at 11:11 AM, Gururaj Shetty
> wrote:
>
> > Congratulations Cai Qiang
> >
> > Regards,
> >
Hi Ravi,
I have a few queries related to both the solutions.
1. To reduce the Java heap for the B-tree, we can remove the B-tree data
structure, use a simple single array, and do binary search on it. We should
also move these cached arrays to unsafe (off-heap/on-heap) memory to reduce
the burden on GC.
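The flat-array idea above can be sketched as follows. This is a minimal illustration, not CarbonData's actual index code; `FlatBlockIndex` and its method names are hypothetical. The sorted start keys are kept in a plain `long[]` so that `Arrays.binarySearch` replaces the B-tree traversal; such an array could later be moved off-heap (e.g. via `ByteBuffer.allocateDirect`) to take it out of the GC's hands, as the mail suggests.

```java
import java.util.Arrays;

// Hypothetical sketch: replace the in-memory B-tree with a flat sorted
// array of blocklet start keys and look up blocklets via binary search.
public class FlatBlockIndex {
    private final long[] startKeys; // sorted start key of each blocklet

    public FlatBlockIndex(long[] sortedStartKeys) {
        this.startKeys = sortedStartKeys;
    }

    // Returns the index of the blocklet whose key range contains 'key'.
    public int findBlocklet(long key) {
        int pos = Arrays.binarySearch(startKeys, key);
        if (pos >= 0) {
            return pos;                  // key is exactly a blocklet start key
        }
        int insertion = -pos - 1;        // index of first start key > key
        return Math.max(0, insertion - 1); // blocklet whose start key <= key
    }
}
```

For keys between two start keys, `findBlocklet` returns the blocklet whose range the key falls into, which is the same answer a B-tree range lookup would give but without the per-node object overhead.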
Hi Liang,
Yes Liang, it will be done in two parts: first reduce the size of the
B-tree, then merge the driver-side and executor B-trees into a single B-tree.
Regards,
Ravindra.
On 17 May 2017 at 19:28, Liang Chen wrote:
> Hi Ravi
>
> Thank you bringing this improvement
Hi Jacky,
The default blocklet size is currently 64 MB, so if the block size is 256 MB
there are at most 4 blocklets per block.
Regards,
Ravindra.
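The arithmetic above is a simple ceiling division; a tiny sketch (class and method names are illustrative, assuming the 64 MB / 256 MB defaults quoted in the thread):

```java
// Illustrative only: number of blocklets needed to cover a block,
// using ceiling division since the last blocklet may be smaller.
public class BlockletCount {
    public static int blockletsPerBlock(long blockSizeMb, long blockletSizeMb) {
        return (int) ((blockSizeMb + blockletSizeMb - 1) / blockletSizeMb);
    }
}
```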
On 17 May 2017 at 19:59, Jacky Li wrote:
> +1 for both proposal 1 & 2,
>
> For point 2, do you have idea how many blocklet within one
I'm using
spark2.1 + carbon1.1 (https://github.com/apache/carbondata/tree/apache-carbondata-1.1.0) and
get a ClassNotFoundException. This class only exists in spark1.X and is imported by
CodeGenFactory.scala.
I built the jar using: mvn package -DskipTests -Pspark-2.1 -Dspark.version=2.1.0
-Phadoop-2.7.2