zhangboren93 opened a new issue, #4281:
URL: https://github.com/apache/carbondata/issues/4281
I found that reading carbon files through CarbonReader spends a long time in
`SimpleDateFormat`; see the attached file for the profiling output.
https://github.com/apache/carbondata/blob/4b8846d
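For context on why date parsing can dominate such profiles: date values in columnar files repeat heavily, so caching parse results (or, in Java, reusing one formatter per thread) removes most of the cost. Below is a hedged, illustrative Python sketch of the caching idea only, not CarbonData's actual fix; the format string and function names are assumptions.

```python
from datetime import datetime
from functools import lru_cache

DATE_FORMAT = "%Y-%m-%d"  # illustrative format, not CarbonData's actual one

@lru_cache(maxsize=None)
def parse_date(value: str) -> datetime:
    # Repeated values hit the cache, so the expensive parse
    # (the SimpleDateFormat-style work) runs once per distinct value.
    return datetime.strptime(value, DATE_FORMAT)

rows = ["2021-07-01", "2021-07-01", "2021-07-02"] * 3
parsed = [parse_date(v) for v in rows]
```

In Java the analogous pattern is a `ThreadLocal<SimpleDateFormat>` (or `DateTimeFormatter`, which is immutable and thread-safe), since `SimpleDateFormat` is neither cheap to construct nor thread-safe.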
Black-max12138 opened a new issue, #4275:
URL: https://github.com/apache/carbondata/issues/4275
Look at this line of code: `boolean hasNext = currentReader.nextKeyValue();`
If `hasNext` returns false and `currentReader` is not the last reader, the
iterator exits and subsequent
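The behavior being implied is that the outer iterator should advance to the next reader when `nextKeyValue()` returns false, rather than terminating. A minimal Python sketch of that pattern; the `ListReader` class is a hypothetical stand-in for CarbonData's record readers:

```python
def chained_records(readers):
    """Yield records from a list of readers, advancing to the next reader
    when the current one is exhausted instead of stopping early."""
    for reader in readers:
        while True:
            has_next = reader.next_key_value()  # analogous to nextKeyValue()
            if not has_next:
                break  # current reader exhausted: move on, don't return
            yield reader.current_value()

class ListReader:
    """Hypothetical stand-in reader backed by a plain list."""
    def __init__(self, items):
        self._items = iter(items)
        self._current = None
    def next_key_value(self):
        try:
            self._current = next(self._items)
            return True
        except StopIteration:
            return False
    def current_value(self):
        return self._current

out = list(chained_records([ListReader([1, 2]), ListReader([]), ListReader([3])]))
```

Note the empty middle reader: a correct chain keeps going and still yields the records of the last reader.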
liutaobigdata opened a new issue #4249:
URL: https://github.com/apache/carbondata/issues/4249
When I change the Hadoop version from 2.7.2 to 3.0.0 while compiling the
source, this error occurs: Failed to execute goal
org.codehaus.mojo:findbugs-maven-plugin:3.0.4:check (analyze-compile)
XinyuZeng opened a new issue #4247:
URL: https://github.com/apache/carbondata/issues/4247
For example, figures in
https://cwiki.apache.org/confluence/display/CARBONDATA/Unique+Data+Organization
are broken. Could you fix the issue?
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
chenliang613 commented on issue #4236:
URL: https://github.com/apache/carbondata/issues/4236#issuecomment-972810564
Will close this issue.
Please don't create recruitment advertisement issues.
yuth opened a new issue #4236:
URL: https://github.com/apache/carbondata/issues/4236
Hello,
I am an HR recruiter with ByteDance's Data & AI Lab and Search teams, responsible for hiring in the big-data area. We are currently building big-data engine teams in the Flink/Presto/Spark/Hudi/Iceberg areas, with openings for both team leads and engineers; the base location is flexible among Beijing/Shanghai/Hangzhou. Would you be interested in a chat?
Looking forward to your reply, thanks!
Contact: 陈凌薇 (Chen Lingwei), +86-15268606705
Hello, my
maheshrajus edited a comment on issue #4181:
URL: https://github.com/apache/carbondata/issues/4181#issuecomment-939908981
Hi,
Merge into support was added as part of the PR below (1).
Please check the guide below (2) about merge into operations.
You can refer to the test cases below (3).
Lior-AI commented on issue #4212:
URL: https://github.com/apache/carbondata/issues/4212#issuecomment-939502383
1. No.
These are the logs:
```
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/mnt/yarn/usercache/livy/filecache/48/__spark_lib
maheshrajus commented on issue #4223:
URL: https://github.com/apache/carbondata/issues/4223#issuecomment-928846480
@tsinan
1) Creating a Lucene index is not supported from Presto.
2) Reading a Lucene index is supported [you need to create the Lucene index from
Spark].
Indhumathi27 commented on issue #4212:
URL: https://github.com/apache/carbondata/issues/4212#issuecomment-927895361
Hi,
Please check the following.
Did any exception occur during the insert? The segment here is Marked
for Delete.
Does the scenario work fine with a non-partition
tsinan opened a new issue #4223:
URL: https://github.com/apache/carbondata/issues/4223
When using PrestoSQL to query CarbonData, can the Lucene index be used to
prune blocklets (like 'TEXT_MATCH')?
Thanks.
Lior-AI commented on issue #4206:
URL: https://github.com/apache/carbondata/issues/4206#issuecomment-911815814
Solved in
https://github.com/apache/carbondata/commit/42f69827e0a577b6128417104c0a49cd5bf21ad7
but now there is a different problem:
https://github.com/apache/carbondata/issue
Lior-AI closed issue #4206:
URL: https://github.com/apache/carbondata/issues/4206
Lior-AI opened a new issue #4212:
URL: https://github.com/apache/carbondata/issues/4212
After
https://github.com/apache/carbondata/commit/42f69827e0a577b6128417104c0a49cd5bf21ad7
I have successfully created a table with partitions, but when I try to
insert data the job ends with a succes
nihal0107 commented on issue #4206:
URL: https://github.com/apache/carbondata/issues/4206#issuecomment-903767216
Hi, as you can see, the error message is `partition is not supported for
external table`.
Whenever you create a table with a location, it will be an external table,
and we ar
Lior-AI opened a new issue #4206:
URL: https://github.com/apache/carbondata/issues/4206
I am running spark in EMR
> Release label:emr-5.24.1
Hadoop distribution:Amazon 2.8.5
Applications:
Hive 2.3.4, Pig 0.17.0, Hue 4.4.0, Flink 1.8.0, Spark 2.4.2, Presto
0.219, Jupyter
study-day commented on issue #4178:
URL: https://github.com/apache/carbondata/issues/4178#issuecomment-893095394
Thanks. Can I use SQL to write the merge into syntax?
brijoobopanna commented on issue #4178:
URL: https://github.com/apache/carbondata/issues/4178#issuecomment-893295027
Yes, please check the examples here:
examples/spark/src/main/scala/org/apache/carbondata/examples/DataMergeIntoExample.scala
brijoobopanna commented on issue #4178:
URL: https://github.com/apache/carbondata/issues/4178#issuecomment-892560119
Please check if the below can help:
https://github.com/apache/carbondata/blob/master/examples/spark/src/main/scala/org/apache/carbondata/examples/CDCExample.scala
didiaode18 commented on issue #4182:
URL: https://github.com/apache/carbondata/issues/4182#issuecomment-889001315
+1
czy006 closed issue #4184:
URL: https://github.com/apache/carbondata/issues/4184
ajantha-bhat commented on issue #4184:
URL: https://github.com/apache/carbondata/issues/4184#issuecomment-887262090
@czy006: Hi, can you use the spark-2.3 profile instead of 2.4? 2.4 brings
hadoop3 dependencies which don't work well with presto333.
Also remove the Dhadoop and Dhive version and
czy006 opened a new issue #4184:
URL: https://github.com/apache/carbondata/issues/4184
@ajantha-bhat hello, my build always fails for your Presto 333 version, and I
don't know what the problem is. Must JDK 11 be used to build it? My mvn build
command is mvn -DskipTests -Pspar
study-day commented on issue #4173:
URL: https://github.com/apache/carbondata/issues/4173#issuecomment-886418870
thanks thanks thanks
study-day closed issue #4173:
URL: https://github.com/apache/carbondata/issues/4173
nihal0107 commented on issue #4173:
URL: https://github.com/apache/carbondata/issues/4173#issuecomment-886379874
There are around ten valid segment statuses. You can refer to the file
`SegmentStatus.java`.
Once we trigger the load, and if the load succeeds, then the segment status
will be
study-day commented on issue #4182:
URL: https://github.com/apache/carbondata/issues/4182#issuecomment-886306886
Hi kongxianghe, we have also found a similar problem. If two tables are
joined, it is very time-consuming when there is no de-duplication, and Spark
only uses a few executors.
kongxianghe1234 commented on issue #4182:
URL: https://github.com/apache/carbondata/issues/4182#issuecomment-885997419
Also added "spark.shuffle.statistics.verbose=true"; still no use for the
skewed join.
kongxianghe1234 opened a new issue #4182:
URL: https://github.com/apache/carbondata/issues/4182
```
spark.sql.adaptive.enabled=true
spark.sql.adaptive.skewedJoin.enabled=true
spark.sql.adaptive.skewedPartitionMaxSplits=5
spark.sql.adaptive.skewedPartitionRowCountThreshold=1000
sp
```
study-day commented on issue #4173:
URL: https://github.com/apache/carbondata/issues/4173#issuecomment-885531971
Thanks.
What do the statuses 'Compacted' and 'Success' mean, and how many status
types are there?
nihal0107 edited a comment on issue #4173:
URL: https://github.com/apache/carbondata/issues/4173#issuecomment-885396320
That won't be deleted automatically. Once the retention time expires, the
subsequent clean files command will delete the directory.
![image](https://user-images.git
study-day opened a new issue #4181:
URL: https://github.com/apache/carbondata/issues/4181
https://cwiki.apache.org/confluence/display/CARBONDATA/Apache+CarbonData+2.1.1+Release
It does not support merge into; please update the document.
```
hive --version Hive 1.2.1000.2.6.5.0-2
study-day commented on issue #4173:
URL: https://github.com/apache/carbondata/issues/4173#issuecomment-885358237
Thank you very much for your help; it has helped me learn more about CarbonData!
I have a question.
In https://github.com/apache/carbondata/blob/master/docs/clean-files.md
`
nihal0107 commented on issue #4173:
URL: https://github.com/apache/carbondata/issues/4173#issuecomment-883917592
As I can see in the output of show segments, the segment status for ids 0
and 1 is Marked for Delete. It means these segments are not valid. You can
execute once `clean file co
study-day commented on issue #4173:
URL: https://github.com/apache/carbondata/issues/4173#issuecomment-883819350
Hi, when using Spark beeline, the error also happens:
```
[hdfs@hadoop-node-1 spark-2.3.4-bin-hadoop2.7]$ bin/beeline
Beeline version 1.2.1.spark2 by Apache Hive
beeline>
study-day commented on issue #4173:
URL: https://github.com/apache/carbondata/issues/4173#issuecomment-883815940
1. It is hive beeline.
```
0: jdbc:hive2://hadoop-node-1:1> show create table test_table;
+-
nihal0107 edited a comment on issue #4173:
URL: https://github.com/apache/carbondata/issues/4173#issuecomment-883222462
Can you please share details of where you are running these queries?
Is it hive-beeline or spark sql/beeline, etc.? These queries should
not fail, because in
study-day commented on issue #4172:
URL: https://github.com/apache/carbondata/issues/4172#issuecomment-883224506
Hi, thank you for your suggestion.
If you try it in the Hive client (Tez engine), the error will happen.
study-day commented on issue #4173:
URL: https://github.com/apache/carbondata/issues/4173#issuecomment-882976676
Hi, DELETE FROM default.test_table WHERE SEGMENT.ID IN (0,1); also
reported an error.
Error info:
Error: org.apache.spark.sql.AnalysisException: cannot resolve '`SEGMENT
nihal0107 commented on issue #4172:
URL: https://github.com/apache/carbondata/issues/4172#issuecomment-882402060
If you are not sure about the issue, can you please close it?
study-day opened a new issue #4178:
URL: https://github.com/apache/carbondata/issues/4178
Support MERGE INTO SQL Syntax
CarbonData now supports MERGE INTO SQL syntax along with the API support.
This will help users write CDC and merge jobs using SQL as well.
How to use
nihal0107 edited a comment on issue #4173:
URL: https://github.com/apache/carbondata/issues/4173#issuecomment-882401423
Hi, please remove the keyword `table` from the query.
The new query would be something like:
`DELETE FROM default.test_table WHERE SEGMENT.ID IN (0,1);`
study-day closed issue #4170:
URL: https://github.com/apache/carbondata/issues/4170
study-day commented on issue #4172:
URL: https://github.com/apache/carbondata/issues/4172#issuecomment-878016650
I guess it has something to do with Tez, but I don't know how to solve it, so
I switched to Spark SQL.
study-day opened a new issue #4173:
URL: https://github.com/apache/carbondata/issues/4173
carbondata 2.1.1
DELETE FROM TABLE default.test_table WHERE SEGMENT.ID IN reported an
error in beeline
```
0: jdbc:hive2://hadoop-node-1:10016> show segments for table test_table;
vikramahuja1001 commented on issue #4168:
URL: https://github.com/apache/carbondata/issues/4168#issuecomment-877129631
Hi @LiuLarry, you can try using Oracle Java as given on the [build
page](https://github.com/apache/carbondata/tree/master/build)
ydvpankaj99 commented on issue #4168:
URL: https://github.com/apache/carbondata/issues/4168#issuecomment-876491512
Hi, please use the below Maven command to compile with Spark 3.1:
mvn clean install -U -Pbuild-with-format scalastyle:check checkstyle:check
-Pspark-3.1 -Dspark.version=3.1.1
nihal0107 commented on issue #4172:
URL: https://github.com/apache/carbondata/issues/4172#issuecomment-876470349
Hi, can you please provide the detailed query you are trying to
execute, i.e. whether you are facing the issue at the time of creating the
table or during the insert query.
Alth
brijoobopanna commented on issue #4170:
URL: https://github.com/apache/carbondata/issues/4170#issuecomment-876457406
please share the issue you faced
study-day opened a new issue #4172:
URL: https://github.com/apache/carbondata/issues/4172
Data can only be read through Hive. If you use Hive to write input, Tez
reports an error.
```
Caused by: java.lang.RuntimeException: Failed to load plan:
hdfs://hadoop-node-1:8020/tmp/hive/h
study-day opened a new issue #4170:
URL: https://github.com/apache/carbondata/issues/4170
I followed the official Quick Start document without success; the
document omits too many details, which is unfriendly.
https://carbondata.apache.org/quick-start-guide.html
study-day opened a new issue #4169:
URL: https://github.com/apache/carbondata/issues/4169
Spark 2.3.4 uses ANTLR Tool version 4.7, but CarbonData uses
ANTLR 4.8.
An error occurred in Spark SQL; please use version 4.7.
Error log:
ANTLR Tool version 4.7 used for code ge
LiuLarry opened a new issue #4168:
URL: https://github.com/apache/carbondata/issues/4168
I used the following command to build CarbonData and got the error message
shown in the attachment:
mvn -DskipTests -Dfindbugs.skip=true -Dcheckstyle.skip=true -Pspark-3.1
-Pbuild-with-format clean package instal
QiangCai commented on issue #4146:
URL: https://github.com/apache/carbondata/issues/4146#issuecomment-869277674
I suggest using the SDK to write data into the stage area and using insert
into stage to add it to the table.
https://github.com/apache/carbondata/blob/master/docs/flink-
QiangCai commented on issue #4160:
URL: https://github.com/apache/carbondata/issues/4160#issuecomment-869274861
It only works for local_sort loading.
It can help avoid data shuffle across executors.
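A rough illustration of the trade-off: with local_sort each node sorts only the rows it already holds, so no data moves between executors, at the cost of only per-node (not global) ordering. A hedged Python sketch with hypothetical names, not CarbonData's actual implementation:

```python
def local_sort(partitions, key=None):
    # Each partition (one per node/executor) is sorted independently;
    # no rows move between partitions, so there is no shuffle step.
    return [sorted(p, key=key) for p in partitions]

def global_sort(partitions, key=None):
    # A global sort must first bring rows together (the shuffle),
    # shown here as flattening all partitions before sorting.
    merged = [row for p in partitions for row in p]
    return sorted(merged, key=key)

parts = [[3, 1], [2, 0]]       # two nodes, unsorted rows on each
locally = local_sort(parts)    # sorted within each node only
globally = global_sort(parts)  # one total order, but data moved
```

The sketch also shows why local_sort keeps parallelism tied to where the data already sits, which connects to the task-count observation in the next entry.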
01lin opened a new issue #4160:
URL: https://github.com/apache/carbondata/issues/4160
In the case of insert into or load data, the total number of tasks in the stage
is almost equal to the number of hosts, and in general it is much smaller than
the number of available executors. The low parallelism of t
BestP2P commented on issue #4144:
URL: https://github.com/apache/carbondata/issues/4144#issuecomment-853599764
Thank you very much from China!
BestP2P closed issue #4144:
URL: https://github.com/apache/carbondata/issues/4144
nihal0107 commented on issue #4144:
URL: https://github.com/apache/carbondata/issues/4144#issuecomment-853580601
Hi,
Yes, the carbon SDK supports HDFS configuration.
When building a carbon writer, you can use the API named
`withHadoopConf(Configuration conf)` to pass the detailed config
BestP2P opened a new issue #4146:
URL: https://github.com/apache/carbondata/issues/4146
If I use HDFS, and the program using the SDK runs on multiple hosts, how
can I have them write to one HDFS file?
Thank you.
BestP2P opened a new issue #4144:
URL: https://github.com/apache/carbondata/issues/4144
When writing carbondata files from another application that does not use
Spark, is HDFS configuration supported? How can I write the carbondata files to HDFS?
chenliang613 opened a new issue #4114:
URL: https://github.com/apache/carbondata/issues/4114
Join the community by emailing dev-subscr...@carbondata.apache.org; then you
can discuss issues by emailing d...@carbondata.apache.org or visit
http://apache-carbondata-mailing-list-archive.11305
CarbonDataQA2 commented on pull request #4110:
URL: https://github.com/apache/carbondata/pull/4110#issuecomment-808462627
Build Failed with Spark 2.3.4, Please check CI
http://121.244.95.60:12602/job/ApacheCarbonPRBuilder2.3/5098/
CarbonDataQA2 commented on pull request #4110:
URL: https://github.com/apache/carbondata/pull/4110#issuecomment-808462123
Build Failed with Spark 2.4.5, Please check CI
http://121.244.95.60:12602/job/ApacheCarbon_PR_Builder_2.4.5/3347/
VenuReddy2103 commented on pull request #4110:
URL: https://github.com/apache/carbondata/pull/4110#issuecomment-808456818
retest this please
CarbonDataQA2 commented on pull request #4110:
URL: https://github.com/apache/carbondata/pull/4110#issuecomment-808449842
Build Failed with Spark 2.3.4, Please check CI
http://121.244.95.60:12602/job/ApacheCarbonPRBuilder2.3/5097/
CarbonDataQA2 commented on pull request #4110:
URL: https://github.com/apache/carbondata/pull/4110#issuecomment-808449555
Build Failed with Spark 2.4.5, Please check CI
http://121.244.95.60:12602/job/ApacheCarbon_PR_Builder_2.4.5/3346/
asfgit closed pull request #4109:
URL: https://github.com/apache/carbondata/pull/4109
ajantha-bhat commented on pull request #4109:
URL: https://github.com/apache/carbondata/pull/4109#issuecomment-807946403
LGTM. Just did a high-level review.
Merging the PR for the RC2 cut.
CarbonDataQA2 commented on pull request #4109:
URL: https://github.com/apache/carbondata/pull/4109#issuecomment-807077748
Build Success with Spark 2.4.5, Please check CI
http://121.244.95.60:12602/job/ApacheCarbon_PR_Builder_2.4.5/3343/
CarbonDataQA2 commented on pull request #4109:
URL: https://github.com/apache/carbondata/pull/4109#issuecomment-807077185
Build Success with Spark 2.3.4, Please check CI
http://121.244.95.60:12602/job/ApacheCarbonPRBuilder2.3/5095/
kunal642 commented on pull request #4109:
URL: https://github.com/apache/carbondata/pull/4109#issuecomment-806920029
retest this please
asfgit closed pull request #4101:
URL: https://github.com/apache/carbondata/pull/4101
kunal642 commented on pull request #4101:
URL: https://github.com/apache/carbondata/pull/4101#issuecomment-806904148
LGTM
CarbonDataQA2 commented on pull request #4100:
URL: https://github.com/apache/carbondata/pull/4100#issuecomment-806464050
Build Success with Spark 2.3.4, Please check CI
http://121.244.95.60:12602/job/ApacheCarbonPRBuilder2.3/5094/
CarbonDataQA2 commented on pull request #4100:
URL: https://github.com/apache/carbondata/pull/4100#issuecomment-806463869
Build Success with Spark 2.4.5, Please check CI
http://121.244.95.60:12602/job/ApacheCarbon_PR_Builder_2.4.5/3342/
QiangCai commented on pull request #4100:
URL: https://github.com/apache/carbondata/pull/4100#issuecomment-806410550
retest this please
CarbonDataQA2 commented on pull request #3988:
URL: https://github.com/apache/carbondata/pull/3988#issuecomment-805753259
Build Success with Spark 2.4.5, Please check CI
http://121.244.95.60:12602/job/ApacheCarbon_PR_Builder_2.4.5/3341/