Github user jackylk commented on a diff in the pull request:
    --- Diff: 
    @@ -0,0 +1,40 @@
    +package org.apache.carbondata.processing.newflow;
    +import java.util.Iterator;
    +/**
    + * This is the base interface for data loading. It can perform transformation jobs as per the implementation.
    + *
    + */
    +public interface DataLoadProcessorStep {
    +  /**
    +   * The output meta for this step. The data returned from this step conforms to this meta.
    +   * @return the output data fields of this step
    +   */
    +  DataField[] getOutput();
    +  /**
    +   * Initialization process for this step.
    +   * @param configuration the data load configuration
    +   * @param child the child step in the processing pipeline
    +   * @throws CarbonDataLoadingException if initialization fails
    +   */
    +  void initialize(CarbonDataLoadConfiguration configuration, DataLoadProcessorStep child) throws
    +      CarbonDataLoadingException;
    +  /**
    +   * Transform the data as per the implementation.
    +   * @return Iterator of data
    +   * @throws CarbonDataLoadingException
    +   */
    +  Iterator<Object[]> execute() throws CarbonDataLoadingException;
    --- End diff --
    I think `execute()` is called for every parallel unit of the input, right? For example, when using Spark to load from a dataframe, `execute()` is called for every Spark partition (one task executes one partition). When loading from a CSV file on HDFS, `execute()` is called for every HDFS block. So I do not think returning an array of iterators is required.
    In the loading process of carbon on every executor, some of the steps can be parallelized, but the sort step needs to be synchronized (a potential bottleneck), since we need datanode-scope sorting. Am I correct?
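
    To make the per-partition model concrete, here is a hedged sketch (the `Step`, `SourceStep`, and `UpperCaseStep` names are hypothetical simplifications, not CarbonData classes): each step wraps its child's iterator lazily, so a single `execute()` call drives one partition's whole pipeline, and the framework invokes `execute()` once per parallel unit (Spark partition or HDFS block).

    ```java
    import java.util.Iterator;
    import java.util.List;

    // Hypothetical, simplified version of the interface under review:
    // a single iterator per execute() call, one call per parallel unit.
    interface Step {
      Iterator<Object[]> execute();
    }

    // A source step: one instance is created per parallel unit
    // (e.g. one Spark partition or one HDFS block).
    class SourceStep implements Step {
      private final List<Object[]> rows;
      SourceStep(List<Object[]> rows) { this.rows = rows; }
      public Iterator<Object[]> execute() { return rows.iterator(); }
    }

    // A transform step that lazily uppercases the first column of each row,
    // pulling rows from its child one at a time (no buffering needed).
    class UpperCaseStep implements Step {
      private final Step child;
      UpperCaseStep(Step child) { this.child = child; }
      public Iterator<Object[]> execute() {
        final Iterator<Object[]> in = child.execute();
        return new Iterator<Object[]>() {
          public boolean hasNext() { return in.hasNext(); }
          public Object[] next() {
            Object[] row = in.next();
            row[0] = row[0].toString().toUpperCase();
            return row;
          }
        };
      }
    }
    ```

    Under this model a single `Iterator<Object[]>` per `execute()` is sufficient, since parallelism comes from the framework creating one step chain per partition rather than from one chain returning multiple iterators.
    
    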
