+0 for option 1 (delete the 11 files).
It would be better to also add Start/End keys to DataMapRow. In my opinion, the union of Min/Max values and Start/End keys can prune better than either alone.
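A minimal sketch of that idea (illustrative names only, not CarbonData's actual DataMapRow API): one index row carrying both the sort-column start/end keys and a column's min/max values, where a block is skipped if either check rules out a match.

```java
import java.util.Arrays;

// Illustrative sketch, NOT CarbonData's actual DataMapRow class: an index
// row holding both the sort-column key range and a filter column's
// value range, so pruning can use whichever check is stronger.
public class IndexRowSketch {
    final byte[] startKey;
    final byte[] endKey;   // key range of the block on the sort columns
    final long min;
    final long max;        // value range of a filter column

    IndexRowSketch(byte[] startKey, byte[] endKey, long min, long max) {
        this.startKey = startKey;
        this.endKey = endKey;
        this.min = min;
        this.max = max;
    }

    // Skip the block if EITHER index says the filter cannot match:
    // the union of the two checks prunes at least as much as each alone.
    boolean canSkip(byte[] keyLo, byte[] keyHi, long valLo, long valHi) {
        boolean keyMiss = Arrays.compare(endKey, keyLo) < 0
                       || Arrays.compare(startKey, keyHi) > 0;
        boolean valMiss = max < valLo || min > valHi;
        return keyMiss || valMiss;
    }
}
```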
-
Best Regards
David Cai
--
Sent from:
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/
+1, Good feature.
+1
Regards
Manish Gupta
Hi,
+1, I agree with supporting the standard Spark file format interface in CarbonData; it will be significantly helpful for broadening Apache CarbonData's ecosystem.
Regards
Liang
ravipesala wrote
> Hi,
>
> Current Carbondata has deep integration with Spark to provide
> optimizations
> in performance
Hi All,
Reading the latest CarbonData code, I found that the BTree-related code is only used by one test class, `BTreeBlockFinderTest`. I tried deleting that code and the tests still pass. Should we delete it now, or does anyone think it can still be used for something else?
Hi All,
Currently, filter queries on a streaming table always scan all streaming files, even when no data in those files meets the filter conditions.
So I am trying to support a file-level min/max index on streaming segments. It helps to reduce the number of tasks and improve query performance.
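The pruning step can be sketched roughly as follows (class and method names are hypothetical, not CarbonData's actual implementation): each streaming file records the min/max of the filter column, and a scan task is created only for files whose range overlaps the filter range.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical file-level min/max index for streaming files; the names
// here are illustrative only, not CarbonData's actual classes.
public class StreamingFilePruner {
    static final class FileMinMax {
        final String path;
        final long min;  // min value of the filter column in this file
        final long max;  // max value of the filter column in this file
        FileMinMax(String path, long min, long max) {
            this.path = path;
            this.min = min;
            this.max = max;
        }
    }

    // Keep only files whose [min, max] range overlaps the filter range
    // [lower, upper]; the other files get no scan task at all.
    static List<String> prune(List<FileMinMax> files, long lower, long upper) {
        List<String> hit = new ArrayList<>();
        for (FileMinMax f : files) {
            if (f.max >= lower && f.min <= upper) {
                hit.add(f.path);
            }
        }
        return hit;
    }

    public static void main(String[] args) {
        List<FileMinMax> files = new ArrayList<>();
        files.add(new FileMinMax("stream-0", 1, 100));
        files.add(new FileMinMax("stream-1", 200, 300));
        // Filter: column BETWEEN 50 AND 150 -> only stream-0 overlaps.
        System.out.println(prune(files, 50, 150));
    }
}
```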