> On Oct 4, 2016, at 8:01 AM, Jihong Ma <jihong...@huawei.com> wrote:
> 
> It is a great idea to open the door for a more flexible/scalable way of 
> accessing indices to help with query processing. If our goals are as 
> follows: 
> 
>> Goal 1: Users can choose where to store the index data; it can be stored 
>> in the processing framework's memory space (like in Spark driver memory) 
>> or in another service outside of the processing framework (like an 
>> independent database service)
> 
>  The current approach of leveraging the index for filter predicate / column 
> projection pushdown keeps the index invisible to the optimizer. Since an 
> index can live outside of the current processing framework and be physically 
> separated from the base data, I am inclined toward making the index scan 
> visible to the optimizer, and allowing the optimizer to weigh in and decide 
> which index to pick, considering that we would also like to extend the 
> support beyond B+ trees as described in Goal 2.  
> 

I think the bottom line is that CarbonData's index should be easy to 
integrate into different processing frameworks, and I think InputFormat is 
the best choice as of now. Integration at this level enables every processing 
framework to leverage CarbonData's index capability without any plugin code in 
the upper framework.
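
As a toy illustration of that integration level (hypothetical types, not the actual CarbonData classes): if all index pruning happens inside getSplits, then a framework that only speaks the InputFormat contract still benefits from the index without any plugin code:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch, not the real CarbonData API: the point is that index
// pruning lives entirely inside getSplits(), so the calling framework never
// needs to know an index exists.
public class IndexedInputFormatSketch {

    /** Stand-in for an index: maps a filter to the blocks that may match. */
    public interface Index {
        List<String> prunedBlocks(String filterPredicate);
    }

    /** The framework calls only this method; it never sees the Index. */
    public static List<String> getSplits(Index index, String filterPredicate) {
        // one split per surviving block (a real system would add locations etc.)
        return new ArrayList<>(index.prunedBlocks(filterPredicate));
    }

    public static void main(String[] args) {
        // pretend the index prunes everything except block-2
        Index toyIndex = filter -> List.of("block-2");
        System.out.println(getSplits(toyIndex, "c1 = 5")); // [block-2]
    }
}
```

Swapping in a different Index implementation changes what gets pruned, but the framework-facing getSplits contract stays the same.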

The next level of integration is at the optimizer level. In that case, 
exposing the index as an API to the optimizer is meaningful only if the 
optimizer is capable of evaluating the cost of using different indices. I 
think we can always do that later on, once we have added more indices to 
CarbonData and come up with a neat API for the optimizer to use.

> 
>> Goal 2: Developers can add more indexes of their choice to CarbonData 
>> files. Besides the B+ tree on the multi-dimensional key which CarbonData 
>> currently supports, developers are free to add other indexing technologies 
>> to make certain workloads faster. These new indices should be added in a 
>> pluggable way.
> 
>  Great to make it easily extensible. We can define a clean abstract API to 
> unify the interface (build/insert/delete/scan, ...); refer to PostgreSQL on 
> this matter (thanks Qingqing for sharing the link): 
>  https://www.postgresql.org/docs/8.4/static/indexam.html
> 

I think you are right, we should have a clean Index API. Currently I am 
thinking of read and load APIs only: 
https://github.com/apache/incubator-carbondata/pull/208/files#diff-ec23f9a4b91d70309522eef78689be1c
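
As a sketch of what a read/load-only Index API could look like (the names below are made up for illustration and do not match the PR):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical sketch of a read/load-only Index API; not the actual PR code.
public class IndexApiSketch {

    /** Load side: build the index for one (immutable) segment at load time. */
    public interface IndexLoader {
        BlockIndex load(String segmentPath);
    }

    /** Read side: prune blocks for a predicate value. No insert/delete/update
     *  methods, because a segment, once written, never changes. */
    public interface BlockIndex {
        List<String> filter(int value);
    }

    /** Toy min/max implementation: block name -> {min, max} of one column. */
    public static class MinMaxBlockIndex implements BlockIndex {
        private final Map<String, int[]> minMax;

        public MinMaxBlockIndex(Map<String, int[]> minMax) {
            this.minMax = minMax;
        }

        @Override
        public List<String> filter(int value) {
            // keep only blocks whose [min, max] range can contain the value
            return minMax.entrySet().stream()
                .filter(e -> value >= e.getValue()[0] && value <= e.getValue()[1])
                .map(Map.Entry::getKey)
                .sorted()
                .collect(Collectors.toList());
        }
    }

    public static void main(String[] args) {
        BlockIndex idx = new MinMaxBlockIndex(
            Map.of("b1", new int[]{0, 10}, "b2", new int[]{11, 20}));
        System.out.println(idx.filter(5)); // [b1]
    }
}
```

The narrow surface (load once, filter many times) is what keeps the API small; everything else, including where the index is physically stored, stays behind the implementation.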

For a traditional database like PostgreSQL, the index API needs to consider 
transactions as well, since such systems have to deal with transactions. In 
CarbonData, I am not sure how far we can go and how much mutability we can add 
to the Index API. Currently, its model uses the Segment concept to do 
incremental loads, so an Index is immutable. If data is append-only, I think 
read and load of the index are enough. But if we introduce updates into 
CarbonData, then the index should become updatable as well; I think it is 
better to consider this together with the data update feature, if there is one 
in the future.
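
Put differently (again a hypothetical sketch, not CarbonData code): with segment-based incremental load, "mutation" means appending a new segment with its own freshly built index, so every individual index stays immutable and read + load really are sufficient:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: append-only segments keep every index immutable.
public class AppendOnlySegmentsSketch {

    /** A segment paired with the (fixed) blocks its index reports for a query. */
    public static class Segment {
        final String name;
        final List<String> matchingBlocks;

        public Segment(String name, List<String> matchingBlocks) {
            this.name = name;
            this.matchingBlocks = matchingBlocks;
        }
    }

    private final List<Segment> segments = new ArrayList<>();

    /** Incremental load: append a new segment; existing segments and their
     *  indices are never touched. */
    public void loadSegment(Segment s) {
        segments.add(s);
    }

    /** A query consults each segment's immutable index and unions the results. */
    public List<String> filterBlocks() {
        List<String> out = new ArrayList<>();
        for (Segment s : segments) {
            out.addAll(s.matchingBlocks);
        }
        return out;
    }

    public static void main(String[] args) {
        AppendOnlySegmentsSketch mgr = new AppendOnlySegmentsSketch();
        mgr.loadSegment(new Segment("s0", List.of("b1")));
        mgr.loadSegment(new Segment("s1", List.of("b3")));
        System.out.println(mgr.filterBlocks()); // [b1, b3]
    }
}
```

An in-place update feature would break this invariant, which is why it would force insert/delete/update operations into the Index API.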

> 
> Jenny
> 
> -----Original Message-----
> From: Jacky Li [mailto:jacky.li...@qq.com] 
> Sent: Sunday, October 02, 2016 10:25 PM
> To: dev@carbondata.incubator.apache.org
> Subject: Re: Abstracting CarbonData's Index Interface
> 
> After a second thought regarding the index part, another option is to 
> have a very simple Segment definition which can only list all the files it 
> has, or a listFile taking the QueryModel as input. Implementations of 
> Segment can be IndexSegment, MultiIndexSegment or StreamingSegment (no 
> index). In the future, a developer is free to create a MultiIndexSegment to 
> select an index internally. Is this option better?
> 
> Regards,
> Jacky
> 
>> On Oct 3, 2016, at 11:00 AM, Jacky Li <jacky.li...@qq.com> wrote:
>> 
>> I am currently thinking of these abstractions:
>> 
>> - A SegmentManager is the global manager of all segments for one table. It 
>> can be used to get all segments and to manage segments during loading and 
>> compaction.
>> - A CarbonInputFormat takes a table path as input, so it represents the 
>> whole table containing all segments. When getSplit is called, it will get 
>> all segments by calling the SegmentManager interface.
>> - Each Segment contains a list of Indexes and an IndexSelector. While 
>> currently CarbonData only has the MDK index, developers can create multiple 
>> indices for each segment in the future.
>> - An Index is an interface for filtering on blocks/blocklets, and provides 
>> this functionality only. Implementations should hide all complexity, like 
>> deciding where to store the index.
>> - An IndexSelector is an interface for choosing which index to use based on 
>> the query predicates. The default implementation is to choose the first 
>> index. An implementation of IndexSelector can also decide not to use any 
>> index at all.
>> - A Distributor is used to map the filtered blocks/blocklets to InputSplits. 
>> Implementations can take the number of nodes and the parallelism into 
>> consideration. It can also decide to distribute tasks based on blocks or 
>> blocklets.
>> 
>> So the main concepts are SegmentManager, Segment, Index, IndexSelector, 
>> InputFormat/OutputFormat, Distributor.
>> 
>> There will be a default implementation of CarbonInputFormat whose getSplit 
>> will do the following: 
>> 1. get all segments by calling the SegmentManager
>> 2. for each segment, choose the index to use via the IndexSelector
>> 3. invoke the selected Index to filter out blocks/blocklets (since these 
>> are two concepts, maybe a parent class needs to be created to encapsulate 
>> them)
>> 4. distribute the filtered blocks/blocklets to InputSplits via the 
>> Distributor.
>> 
>> Regarding the input to the Index.filter interface, I have not decided 
>> whether to use the existing QueryModel or to create a new, cleaner 
>> QueryModel interface. If a new QueryModel is desired, it should only 
>> contain the filter predicates and projected columns, so it would be much 
>> simpler than the current QueryModel. But I see that the current QueryModel 
>> is also used in Compaction, so I think it is better to do this clean-up 
>> later?
>> 
>> 
>> Does this look fine to you? Any suggestion is welcome.
>> 
>> Regards,
>> Jacky
>> 
>> 
>>> On Oct 3, 2016, at 2:18 AM, Venkata Gollamudi <g.ramana...@gmail.com> wrote:
>>> 
>>> Yes Jacky, the interfaces need to be revisited.
>>> For Goal 1 and Goal 2: abstraction is required for both the Index and the 
>>> Index store.
>>> Also, multi-column (composite) indexes need to be considered.
>>> 
>>> Regards,
>>> Ramana
>>> 
>>> On Sat, Oct 1, 2016 at 11:01 AM, Jacky Li <jacky.li...@qq.com> wrote:
>>> 
>>>> Hi community,
>>>> 
>>>>  Currently CarbonData has builtin index support, which is one of the key
>>>> strengths of CarbonData. Using the index, CarbonData can answer filter
>>>> queries very fast by filtering at the block and blocklet level. However,
>>>> it also introduces memory consumption for the index tree and impacts
>>>> first-query time, because of the process of loading the index from the
>>>> file footers into memory. On the other side, in a multi-tenant
>>>> environment, multiple applications may access the data files
>>>> simultaneously, which again exacerbates this resource consumption issue.
>>>>  So, I want to propose and discuss a solution with you to solve this
>>>> problem and make an abstraction of the interfaces for CarbonData's future
>>>> evolution.
>>>>  I am thinking the final result of this work should achieve at least two
>>>> goals:
>>>> 
>>>> Goal 1: Users can choose where to store the index data; it can be stored
>>>> in the processing framework's memory space (like in Spark driver memory)
>>>> or in another service outside of the processing framework (like an
>>>> independent database service)
>>>> 
>>>> Goal 2: Developers can add more indexes of their choice to CarbonData
>>>> files. Besides the B+ tree on the multi-dimensional key which CarbonData
>>>> currently supports, developers are free to add other indexing technologies
>>>> to make certain workloads faster. These new indices should be added in a
>>>> pluggable way.
>>>> 
>>>>   In order to achieve these goals, an abstraction needs to be created
>>>> for the CarbonData project, including:
>>>> 
>>>> - Segment: each segment represents one load of data, and is tied to the
>>>> indices created with that load
>>>> 
>>>> - Index: an index is created when its segment is created, and is
>>>> leveraged when CarbonInputFormat's getSplit is called, to filter out the
>>>> required blocks or even blocklets.
>>>> 
>>>> - CarbonInputFormat: there may be N indices created for a data file;
>>>> when querying these data files, the InputFormat should know how to access
>>>> these indices, and initialize or load them if required.
>>>> 
>>>>  Obviously, this work should be separated into different tasks and
>>>> implemented gradually. But first of all, let's discuss the goals and the
>>>> proposed approach. What is your idea?
>>>> 
>>>> Regards,
>>>> Jacky
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> --
>>>> View this message in context: http://apache-carbondata-mailing-list-archive.1130556.n5.nabble.com/Abstracting-CarbonData-s-Index-Interface-tp1587.html
>>>> Sent from the Apache CarbonData Mailing List archive mailing list archive
>>>> at Nabble.com.
>>>> 
>> 
> 
