I agree with Ravindra, and I can try to fix it (I have mentioned this in a PR
review comment).
-
Best Regards
David Cai
Hi,
*I was able to decrease the TLAB memory usage from 68 GB to 29.94 GB for
the same TPCH data* *without disabling adaptive encoding*.
*There is also about a 5% improvement in insert performance*. Please check the PR:
https://github.com/apache/carbondata/pull/3682
Before the change: [image: screenshot not preserved in the archive]
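For context, TLAB allocation reductions like the one reported above usually come from avoiding per-row temporary objects in the hot path. Below is a minimal, hypothetical sketch of that general technique (reusing one scratch buffer instead of allocating a new array per row); the class and method names are illustrative and are not the actual change in the PR:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class BufferReuseSketch {
    // Naive path: allocates a fresh byte[] for every row, filling the TLAB.
    static byte[] encodeRowAllocating(double value) {
        ByteBuffer buf = ByteBuffer.allocate(Double.BYTES);
        buf.putDouble(value);
        return buf.array();
    }

    // Reuses one scratch buffer across rows, so the hot loop allocates nothing.
    static final ByteBuffer SCRATCH = ByteBuffer.allocate(Double.BYTES);

    static void encodeRowReusing(double value, byte[] dest, int offset) {
        SCRATCH.clear();
        SCRATCH.putDouble(value);
        SCRATCH.flip();
        SCRATCH.get(dest, offset, Double.BYTES);
    }

    public static void main(String[] args) {
        double[] rows = {1.5, 2.5, 3.5};
        byte[] page = new byte[Double.BYTES * rows.length];
        for (int i = 0; i < rows.length; i++) {
            encodeRowReusing(rows[i], page, i * Double.BYTES);
        }
        // Both paths produce identical bytes; the reusing path creates no per-row garbage.
        System.out.println(Arrays.equals(
            Arrays.copyOfRange(page, 0, Double.BYTES),
            encodeRowAllocating(1.5)));
    }
}
```

The same bytes end up in the page either way; the difference only shows up in allocation-profiler output (TLAB churn), not in the stored data.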
Hi Ajantha,
I think it is better to fix the problem instead of disabling things. It
has already been observed that the store size increases proportionally; if the
data has more columns, the growth will be exponential. Store size directly
impacts query performance in the object-store world. It is better to
Hi Ravi, please find the performance readings below.
On TPCH 10 GB data, carbon-to-carbon insert on an HDFS standalone cluster:
*By disabling adaptive encoding for float and double*,
insert is *more than 10% faster* [139 seconds before, 114 seconds after this
change] and it
*saves 25% memory in
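For readers unfamiliar with the trade-off under discussion: adaptive encoding for floating-point columns typically scales each value by a power of ten so it can be stored in a narrower integral type, shrinking the store at the cost of extra CPU and temporary allocations at write time. A minimal sketch of that idea, assuming the values fit after scaling (names are illustrative, not CarbonData's actual encoder classes):

```java
import java.util.Arrays;

public class AdaptiveDoubleSketch {
    // Scale doubles by 10^decimals so they can be stored as integers.
    static long[] encode(double[] values, int decimals) {
        long factor = (long) Math.pow(10, decimals);
        long[] out = new long[values.length];
        for (int i = 0; i < values.length; i++) {
            out[i] = Math.round(values[i] * factor);
        }
        return out;
    }

    static double[] decode(long[] encoded, int decimals) {
        double factor = Math.pow(10, decimals);
        double[] out = new double[encoded.length];
        for (int i = 0; i < encoded.length; i++) {
            out[i] = encoded[i] / factor;
        }
        return out;
    }

    // Smallest integral width (in bytes) that can hold every encoded value.
    static int storedBytesPerValue(long[] encoded) {
        long max = 0;
        for (long v : encoded) max = Math.max(max, Math.abs(v));
        if (max <= Byte.MAX_VALUE) return 1;
        if (max <= Short.MAX_VALUE) return 2;
        if (max <= Integer.MAX_VALUE) return 4;
        return 8;
    }

    public static void main(String[] args) {
        double[] prices = {12.34, 56.78, 90.12};  // all with 2 decimal places
        long[] enc = encode(prices, 2);
        // 1234, 5678, 9012 all fit in a short: 2 bytes per value instead of 8.
        System.out.println(storedBytesPerValue(enc));              // 2
        System.out.println(Arrays.equals(prices, decode(enc, 2))); // true
    }
}
```

Disabling this encoding stores raw 8-byte doubles: faster and lighter at insert time, but a larger store, which is the proportional size increase raised elsewhere in the thread.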
Hi,
It increases the store size. Can you give me performance figures with and
without these changes? Also, how much is the store size impacted if we
disable it?
Regards,
Ravindra.
On Wed, 25 Mar 2020 at 1:51 PM, Ajantha Bhat wrote:
> Hi all,
>
> I have done profiling of the insert-into flow