The key point is that the versions of Spark and CarbonData should match.
Regards.
Chenerlu.
--
View this message in context:
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/problem-with-branch-1-1-tp16004p18107.html
Sent from the Apache CarbonData Dev Mailing List archive
I think it is OK to support Spark 1.5 without IUD for now.
If users upgrade their Spark version to Spark 2.1 or Spark 2.2 in the future, we can remove Spark 1.5 support once few users are still on it.
Is there any special reason that led to this removal?
Regards.
Chenerlu.
+1
Regards.
Chenerlu.
Hi
OK, thanks very much.
If you find anything wrong in CarbonData, we can discuss it here.
Regards.
Chenerlu.
Hi Jatin,
Agree with you.
The CarbonData community needs such useful tools to improve the quality of the documentation.
Thanks.
Regards.
Chenerlu.
Hi Divya
Thanks for your suggestion.
CarbonData may support it in the near future.
If you want to contribute this feature, I think it will benefit the community a lot.
Regards.
Chenerlu.
Agree with caolu; I think users may be confused by lots of formats.
In the future, it will be better for Carbon to unify the data format. The
unified format should be compatible with previous formats. If it is unavoidable
to provide different formats to support different use cases to gain better
Hi, Ravindra.
Users can learn how to use CarbonData through the QUICK START document.
Users should know how it works, and this script just simplifies the steps to get an
existing CarbonSession.
This is Carbon API usage; I think the community will spend much time
maintaining this script,
which will do more harm
Hi,
Please try mvn package -DskipTests -Pspark-2.1 -Dspark.version=2.1.0
-Phadoop-2.7.2 with Hadoop 2.7.2 and Spark 2.
I have just tested it; it compiles OK.
[INFO] Reactor Summary:
[INFO]
[INFO] Apache CarbonData :: Parent SUCCESS [ 1.657 s]
[INFO] Apache CarbonData ::
Hi, xuchuanyin
I think many of the failed test cases may be caused by Windows paths
being different from Linux paths.
I have tested on my Mac in local mode.
All the test cases you mentioned pass.
Regards.
Chenerlu.
--
View this message in context:
Thanks for correcting my mistake.
Yes, just carbon-spark-shell. I think carbon-spark-sql is more helpful than
carbon-spark-shell, because it provides a way to interact with
CarbonData via SQL commands rather than the Carbon API.
Based on what I mentioned above, I think CarbonData can still keep
Hi
For question one, I have raised a discussion about carbon-spark-shell for
Spark 2.x at the following link.
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/DISCUSSION-Whether-Carbondata-should-keep-carbon-spark-shell-script-td14077.html
Actually, there is a PR to fix
Hi
I think you can debug from Windows by adding some debug parameters when starting
spark-shell on Linux.
This is what is called remote debugging.
I tried this method when I was using Windows; I hope this idea helps you.
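As a rough sketch of what this could look like (assumptions: the standard JVM JDWP debug agent; port 5005 and the spark-shell path are placeholders for your environment):

```shell
# Sketch of remote debugging, assuming the standard JVM JDWP agent.
# Port 5005 and the spark-shell path are placeholders.

# On the Linux machine: make the driver JVM listen for a debugger.
export SPARK_SUBMIT_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005"
# $SPARK_HOME/bin/spark-shell   # uncomment to launch with debugging enabled

# On the Windows machine: attach your IDE's remote debugger to <linux-host>:5005.
```

With suspend=y the JVM waits for the debugger to attach before running, which is convenient for debugging startup code; use suspend=n if you only want to attach later.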
Regards.
Chenerlu.
Hi, xm_zzc
I support CarbonData 1.2.0 + Spark 2.1, because Spark 2.2 may not be stable
yet if it has just been released.
Regards.
Chenerlu.
Hi, Mic sun
Can you paste your error message directly?
It seems I can't access your attachment.
Thanks in advance.
Regards.
Chenerlu.
Hi
Can you share your test steps for reproducing this issue?
I mean the complete test steps.
Thanks.
Chenerlu.
Hi community
Any comments on this topic?
If there are no other opinions, I will raise a PR to remove this feature.
Regards
Chenerlu.
Hi community,
Recently, I reviewed the implementation of carbon-sql-shell and tried to
understand the purpose of this script.
This script just wraps a few steps and provides an existing CarbonContext or
CarbonSession for users to interact with CarbonData.
I hold the opinion that we can remove this
Hi, dev
Currently, I am thinking about the function of show segments. We can see the
segments of a Carbon table by executing this command, but it can only return
the segment ID, status, load start time, and load end time, and all of this
information comes from tablestatus, which I think may not be enough for
Yeah, agree with Ravi.
We can keep both "Show segments" and "Show extended segment".
@xuchuanyin, as far as I know, the result of show segments is currently formatted.
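For context, a minimal sketch of the command being discussed (the table name is made up; per the discussion, the result columns come from the tablestatus file):

```sql
-- Minimal sketch: list the segments of a Carbon table (table name invented).
-- Returns segment ID, status, load start time, and load end time.
SHOW SEGMENTS FOR TABLE sales
```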
Regards.
Chenerlu.
Looking forward to the conference being held!!
Regards.
Chenerlu
--
View this message in context:
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/Apache-CarbonData-6th-meetup-in-Shanghai-on-2nd-Sep-2017-at-https-jinshuju-net-f-X8x5S9-from-timeline-tp20693p20731.html
1 Requirement
Currently, users can specify sort columns in table properties when creating a
table, and when loading data, users can also specify the sort scope in load
options.
To improve ease of use for users, it would be better to specify
all the sort-related parameters in create table
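As a hypothetical sketch of what this unified DDL could look like (SORT_COLUMNS is an existing CarbonData table property; accepting SORT_SCOPE as a table property is the proposal here; the table and column names are invented):

```sql
-- Hypothetical sketch: all sort-related parameters in CREATE TABLE.
-- SORT_COLUMNS already exists as a table property; SORT_SCOPE as a
-- table property is the proposal under discussion.
CREATE TABLE sales (
  id INT,
  city STRING,
  amount DOUBLE
)
STORED BY 'carbondata'
TBLPROPERTIES (
  'SORT_COLUMNS'='city,id',
  'SORT_SCOPE'='LOCAL_SORT'
)
```

This would let users declare sort behavior once at table creation instead of repeating it in every load option.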