Integration with Spark 2.x is a great feature for CarbonData, as Spark 2.x is
gradually gaining momentum. This is a big effort, and given all the complexity
involved due to the dramatic API-level changes, realizing it in phases is a
good idea.



-----Original Message-----
From: Jacky Li [] 
Sent: Saturday, November 26, 2016 10:08 AM
Subject: [Feature Proposal] Spark 2 integration with CarbonData

Hi all,

Currently CarbonData only works with Spark 1.5 and Spark 1.6. As the Apache
Spark community is moving to 2.1, more and more users will deploy Spark 2.x in
production environments. In order to make CarbonData even more popular, I
think now is a good time to start considering Spark 2.x integration with
CarbonData.

Moreover, we can take this as a chance to refactor CarbonData to make it
both easier to use and more performant.

Instead of using CarbonContext, in the Spark 2 integration users should be
able to:

1. use the native SparkSession in a Spark application to create and query
tables backed by CarbonData files, with full feature support, including index
and late decode optimization.

2. use CarbonData's APIs and tools to accomplish carbon-specific tasks, like
compaction, deleting segments, etc.
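To make point 1 concrete, here is a minimal sketch of what "native
SparkSession" usage could look like, going through Spark's standard data
source API. The format name "carbondata" and the "tableName" option are
assumptions of this sketch, not a confirmed API:

```scala
import org.apache.spark.sql.SparkSession

// A plain SparkSession -- no Carbon-specific context class needed.
val spark = SparkSession.builder()
  .appName("CarbonDataExample")
  .getOrCreate()

// Write a DataFrame as CarbonData files through the generic
// data source API; "carbondata" as the format name is assumed here.
spark.range(0, 1000)
  .toDF("id")
  .write
  .format("carbondata")
  .option("tableName", "sample") // hypothetical option name
  .save()

// Query through standard Spark SQL, as with any other data source.
spark.sql("SELECT count(*) FROM sample").show()
```

The point is that users never touch a Carbon-specific session class; the
integration plugs into the same entry points as Parquet or ORC.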

On the implementation side, this means:

1. deep integration with the Datasource API, leveraging Spark 2's whole-stage
codegen feature.

2. providing an implementation of a vectorized record reader, to improve
scanning performance.
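As background on point 1, whole-stage codegen is already built into Spark
2.x's query engine; a data source benefits from it once its scan operator
participates in the generated pipeline. The snippet below is plain Spark 2.x
(not CarbonData-specific) and shows how to inspect whether codegen applies:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("CodegenCheck")
  .getOrCreate()

// Whole-stage codegen is on by default in Spark 2.x; this flag controls it.
spark.conf.set("spark.sql.codegen.wholeStage", "true")

// Operators fused into whole-stage codegen are marked with '*'
// in the printed physical plan.
spark.range(0, 1000).selectExpr("sum(id)").explain()
```

A vectorized record reader (point 2) complements this by feeding the
generated code column batches instead of one row at a time, which is where
most of the scan speedup comes from.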

Since Spark 2 changes a lot compared to Spark 1.6, it may take some time to
complete all these features. With the help of contributors and committers, I
hope we can have the basic features working in the next CarbonData release.

What do you think about this idea? All contributions and suggestions are
welcome.

Jacky Li
