Github user Vimal-Das commented on a diff in the pull request:
https://github.com/apache/incubator-carbondata/pull/231#discussion_r83780338
--- Diff:
processing/src/main/java/org/apache/carbondata/processing/store/writer/CarbonFactDataWriterImplForIntIndexAndAggBlock.java
---
@@
The key in the map can only be a primitive data type. At present, CarbonData
supports the following primitive data types: Integer, String, Timestamp,
Double, and Decimal.
If CarbonData adds support for more primitive data types in the future, those
can also be used as keys in the Map.
The reason for restricting
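The restriction described above can be pictured as a simple validation step. The sketch below is purely illustrative (it is not CarbonData source code; the class and method names are hypothetical), assuming the key type arrives as a type-name string:

```java
import java.util.Set;

// Hypothetical sketch, not CarbonData code: restrict map keys to the
// primitive types listed in the comment above.
public class MapKeyTypeCheck {
    // The primitive types the discussion says are currently supported.
    static final Set<String> SUPPORTED_KEY_TYPES =
        Set.of("INT", "STRING", "TIMESTAMP", "DOUBLE", "DECIMAL");

    static boolean isValidMapKeyType(String dataType) {
        return SUPPORTED_KEY_TYPES.contains(dataType.toUpperCase());
    }

    public static void main(String[] args) {
        System.out.println(isValidMapKeyType("string")); // true
        System.out.println(isValidMapKeyType("struct")); // false: complex type
    }
}
```

Adding a new supported primitive would then amount to extending the allowed set.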
Github user asfgit closed the pull request at:
https://github.com/apache/incubator-carbondata/pull/237
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the
Github user asfgit closed the pull request at:
https://github.com/apache/incubator-carbondata/pull/241
---
Github user asfgit closed the pull request at:
https://github.com/apache/incubator-carbondata/pull/245
---
Hi Ravi,
I took a quick look at Hazelcast. What they offer is a distributed map across
the cluster (any single node stores only a portion of the map). To facilitate
parallel data loading, I think we need a complete copy on each node. Is this
the structure we are looking for?
it does allow map
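The distinction raised in this message can be illustrated with a small self-contained simulation (plain Java, not Hazelcast code; the node count and hashing scheme are arbitrary choices for the example):

```java
import java.util.*;

// Simulation of the two layouts discussed: a partitioned map stores each
// entry on exactly one node, while a replicated map keeps a full copy on
// every node (the layout parallel data loading would need).
public class MapDistribution {
    static List<Map<String, String>> partitioned(Map<String, String> data, int nodes) {
        List<Map<String, String>> result = new ArrayList<>();
        for (int i = 0; i < nodes; i++) result.add(new HashMap<>());
        for (Map.Entry<String, String> e : data.entrySet()) {
            // Each key hashes to a single owning node.
            int node = Math.floorMod(e.getKey().hashCode(), nodes);
            result.get(node).put(e.getKey(), e.getValue());
        }
        return result;
    }

    static List<Map<String, String>> replicated(Map<String, String> data, int nodes) {
        List<Map<String, String>> result = new ArrayList<>();
        // Every node gets a complete copy of the map.
        for (int i = 0; i < nodes; i++) result.add(new HashMap<>(data));
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> dict = Map.of("k1", "v1", "k2", "v2", "k3", "v3");
        // Partitioned: entries are split, so no single node sees everything.
        int total = partitioned(dict, 3).stream().mapToInt(Map::size).sum();
        System.out.println(total == dict.size()); // true: no duplication
        // Replicated: each node holds the full dictionary.
        for (Map<String, String> node : replicated(dict, 3))
            System.out.println(node.size() == dict.size()); // true on every node
    }
}
```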
Github user asfgit closed the pull request at:
https://github.com/apache/incubator-carbondata/pull/227
---
GitHub user sujith71955 opened a pull request:
https://github.com/apache/incubator-carbondata/pull/246
[CARBONDATA-321] If user changes the blocklet size the queries will b…
**Problem:**
If the user changes the blocklet size (50), the queries will fail.
Currently byte size
GitHub user ravikiran23 opened a pull request:
https://github.com/apache/incubator-carbondata/pull/245
[CARBONDATA-320] problem during drop of table when all datanodes are down.
Problem:
When all the data nodes are down and the user executes drop table, then drop
table will
ravikiran created CARBONDATA-320:
Summary: problem when dropping a table while all data nodes are
down.
Key: CARBONDATA-320
URL: https://issues.apache.org/jira/browse/CARBONDATA-320
Project:
Trim the data during data loading.
2016-10-17 16:22 GMT+08:00 Ravindra Pesala :
> Hi Lionx,
>
> Can you give more details on this feature?
> Are you talking about trim() function while querying? Or trim the data
> while loading to carbon?
>
> Regards,
> Ravi.
>
> On 17
Hi Lionx,
Can you give more details on this feature?
Are you talking about trim() function while querying? Or trim the data
while loading to carbon?
Regards,
Ravi.
On 17 October 2016 at 12:56, 向志强 wrote:
> Hi all,
> We are trying to support string trim feature in
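Of the two options Ravi asks about, trim-while-loading means whitespace is stripped once per field as rows are parsed, rather than on every query. A minimal sketch of that idea (illustrative only; the naive `split` here ignores CSV quoting and escapes, which a real loader must handle):

```java
// Illustrative sketch of trimming at load time: each field is trimmed once
// while the CSV row is parsed, so queries never see padded values.
public class TrimOnLoad {
    static String[] parseAndTrim(String csvLine) {
        // Simplified splitting; real CSV parsing handles quotes and escapes.
        String[] fields = csvLine.split(",", -1);
        for (int i = 0; i < fields.length; i++) {
            fields[i] = fields[i].trim(); // trim once at load time
        }
        return fields;
    }

    public static void main(String[] args) {
        String[] row = parseAndTrim("  alice , 23 ,  beijing ");
        System.out.println(String.join("|", row)); // alice|23|beijing
    }
}
```

The alternative, a `trim()` function applied at query time, leaves stored data untouched but pays the trimming cost on every read.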
Github user gvramana commented on a diff in the pull request:
https://github.com/apache/incubator-carbondata/pull/237#discussion_r83584249
--- Diff:
integration/spark/src/main/scala/org/apache/carbondata/spark/csv/CarbonCsvRelation.scala
---
@@ -148,6 +150,10 @@ case class