[ https://issues.apache.org/jira/browse/CARBONDATA-297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15568888#comment-15568888 ]

ASF GitHub Bot commented on CARBONDATA-297:
-------------------------------------------

Github user jackylk commented on a diff in the pull request:

    https://github.com/apache/incubator-carbondata/pull/229#discussion_r83018479
  
    --- Diff: processing/src/main/java/org/apache/carbondata/processing/newflow/DataLoadProcessorStep.java ---
    @@ -0,0 +1,40 @@
    +package org.apache.carbondata.processing.newflow;
    +
    +import java.util.Iterator;
    +
    +import org.apache.carbondata.processing.newflow.exception.CarbonDataLoadingException;
    +
    +/**
    + * This is the base interface for data loading. It can do transformation
    + * jobs as per the implementation.
    + */
    +public interface DataLoadProcessorStep {
    +
    +  /**
    +   * The output meta for this step. The data returned from this step
    +   * conforms to this meta.
    +   * @return the output fields of this step
    +   */
    +  DataField[] getOutput();
    +
    +  /**
    +   * Initialization process for this step.
    +   * @param configuration the data load configuration
    +   * @param child the child step that supplies input to this step
    +   * @throws CarbonDataLoadingException
    +   */
    +  void initialize(CarbonDataLoadConfiguration configuration, DataLoadProcessorStep child)
    +      throws CarbonDataLoadingException;
    +
    +  /**
    +   * Transform the data as per the implementation.
    +   * @return Iterator of data
    +   * @throws CarbonDataLoadingException
    +   */
    +  Iterator<Object[]> execute() throws CarbonDataLoadingException;
    --- End diff --
    
    I think `execute()` is called for every parallel unit of the input, right? For example, when using Spark to load from a DataFrame, `execute()` is called for every Spark partition (one task is executed per partition). When loading from a CSV file on HDFS, `execute()` is called for every HDFS block. So I do not think returning an array of iterators is required.
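
    For illustration, here is a minimal sketch of what I mean, assuming one task per parallel unit; `buildSteps` and `writeRow` are hypothetical helpers, not part of this PR:

    ```java
    import java.util.Iterator;

    // Hypothetical driver: each task (one per Spark partition / HDFS block)
    // wires up its own step chain and drains a single Iterator<Object[]>,
    // so execute() returning one iterator per task is sufficient.
    public class LoadTaskSketch {

      void runTask(CarbonDataLoadConfiguration configuration) throws CarbonDataLoadingException {
        DataLoadProcessorStep lastStep = buildSteps(configuration); // assumed: wires the step chain
        Iterator<Object[]> rows = lastStep.execute();
        while (rows.hasNext()) {
          writeRow(rows.next()); // assumed: hands each transformed row to the writer
        }
      }

      // Stubs for the assumed helpers, illustration only.
      private DataLoadProcessorStep buildSteps(CarbonDataLoadConfiguration conf) {
        throw new UnsupportedOperationException("illustrative stub");
      }

      private void writeRow(Object[] row) {
        throw new UnsupportedOperationException("illustrative stub");
      }
    }
    ```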
    
    In the Carbon loading process within each executor, some of the steps can be parallelized, but the sort step needs to be synchronized (a potential bottleneck), since we need datanode-scope sorting. Am I correct?
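
    To make that concrete, a rough sketch of such a synchronized sort step, assuming the parallel upstream steps hand over their per-unit iterators (purely illustrative, not a proposed API):

    ```java
    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.Iterator;
    import java.util.List;

    // Illustrative only: the sort step drains every parallel upstream iterator,
    // sorts once, and hands a single sorted iterator downstream. That makes it
    // the synchronization point (and potential bottleneck) for datanode-scope sorting.
    public class SortStepSketch {

      public static Iterator<Object[]> sortedExecute(List<Iterator<Object[]>> parallelInputs,
          Comparator<Object[]> keyComparator) {
        List<Object[]> buffer = new ArrayList<>();
        for (Iterator<Object[]> input : parallelInputs) {
          while (input.hasNext()) {
            buffer.add(input.next()); // drain every parallel unit before sorting
          }
        }
        buffer.sort(keyComparator); // single datanode-scope sort
        return buffer.iterator();
      }
    }
    ```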


> 2. Add interfaces for data loading.
> -----------------------------------
>
>                 Key: CARBONDATA-297
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-297
>             Project: CarbonData
>          Issue Type: Sub-task
>            Reporter: Ravindra Pesala
>            Assignee: Ravindra Pesala
>             Fix For: 0.2.0-incubating
>
>
> Add the major interface classes for data loading so that the following JIRAs 
> can use these interfaces in their implementations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
