snleee commented on a change in pull request #5934:
URL: https://github.com/apache/incubator-pinot/pull/5934#discussion_r485969046



##########
File path: pinot-core/src/main/java/org/apache/pinot/core/segment/processing/collector/RollupCollector.java
##########
@@ -0,0 +1,159 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.segment.processing.collector;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import org.apache.pinot.spi.data.FieldSpec;
+import org.apache.pinot.spi.data.MetricFieldSpec;
+import org.apache.pinot.spi.data.Schema;
+import org.apache.pinot.spi.data.readers.GenericRow;
+
+
+/**
+ * A Collector that rolls up the incoming records on unique dimensions + time columns, based on provided aggregation types for metrics.
+ * By default, the SUM aggregation is used on metrics.
+ */
+public class RollupCollector implements Collector {
+
+  private final Map<Record, GenericRow> _collection = new HashMap<>();
+  private Iterator<GenericRow> _iterator;
+  private GenericRowSorter _sorter;
+
+  private final int _keySize;
+  private final int _valueSize;
+  private final String[] _keyColumns;
+  private final String[] _valueColumns;
+  private final ValueAggregator[] _valueAggregators;
+  private final MetricFieldSpec[] _metricFieldSpecs;
+
+  public RollupCollector(CollectorConfig collectorConfig, Schema schema) {
+    _keySize = schema.getPhysicalColumnNames().size() - schema.getMetricNames().size();
+    _valueSize = schema.getMetricNames().size();
+    _keyColumns = new String[_keySize];
+    _valueColumns = new String[_valueSize];
+    _valueAggregators = new ValueAggregator[_valueSize];
+    _metricFieldSpecs = new MetricFieldSpec[_valueSize];
+
+    Map<String, ValueAggregatorFactory.ValueAggregatorType> aggregatorTypeMap = collectorConfig.getAggregatorTypeMap();
+    if (aggregatorTypeMap == null) {
+      aggregatorTypeMap = Collections.emptyMap();
+    }
+    int valIdx = 0;
+    int keyIdx = 0;
+    for (FieldSpec fieldSpec : schema.getAllFieldSpecs()) {
+      if (!fieldSpec.isVirtualColumn()) {
+        String name = fieldSpec.getName();
+        if (fieldSpec.getFieldType().equals(FieldSpec.FieldType.METRIC)) {
+          _metricFieldSpecs[valIdx] = (MetricFieldSpec) fieldSpec;
+          _valueColumns[valIdx] = name;
+          _valueAggregators[valIdx] = ValueAggregatorFactory.getValueAggregator(
+              aggregatorTypeMap.getOrDefault(name, ValueAggregatorFactory.ValueAggregatorType.SUM).toString());
+          valIdx++;
+        } else {
+          _keyColumns[keyIdx++] = name;
+        }
+      }
+    }
+
+    List<String> sortOrder = collectorConfig.getSortOrder();
+    if (sortOrder.size() > 0) {
+      _sorter = new GenericRowSorter(sortOrder, schema);
+    }
+  }
+
+  /**
+   * If a row already exists in the collection (based on dimension + time columns), roll up the metric values, else add the row
+   */
+  @Override
+  public void collect(GenericRow genericRow) {

Review comment:
       I guess we basically keep the entire data for a segment on the JVM heap?
   
   In the future, we may need to add an off-heap or file-based collector to avoid OOM errors when reading large segments (e.g. a 1-2GB Pinot segment can be extremely large in row format).
   
   Another way to save memory is to sort the data on all dimensions and aggregate in a single scan (but this pays a large cost in cases where the data doesn't need to be sorted).
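   
   A minimal sketch of that sort-then-scan idea, assuming a `keyComparator` over the dimension + time columns and a `mergeMetrics` callback that applies the configured `ValueAggregator`s (both hypothetical, not part of this PR):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.function.BiConsumer;
import org.apache.pinot.spi.data.readers.GenericRow;

// Hypothetical sketch: sort on the key columns, then aggregate adjacent
// equal-key rows in one pass, so we never hold a HashMap of the whole segment.
static List<GenericRow> sortAndRollup(List<GenericRow> rows, Comparator<GenericRow> keyComparator,
    BiConsumer<GenericRow, GenericRow> mergeMetrics) {
  rows.sort(keyComparator);
  List<GenericRow> rolledUp = new ArrayList<>();
  GenericRow current = null;
  for (GenericRow row : rows) {
    if (current != null && keyComparator.compare(current, row) == 0) {
      // assumed callback: applies the configured ValueAggregators to the metric columns
      mergeMetrics.accept(current, row);
    } else {
      if (current != null) {
        rolledUp.add(current);
      }
      current = row;
    }
  }
  if (current != null) {
    rolledUp.add(current);
  }
  return rolledUp;
}
```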

##########
File path: pinot-core/src/main/java/org/apache/pinot/core/segment/processing/partitioner/TableConfigPartitioner.java
##########
@@ -0,0 +1,45 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.segment.processing.partitioner;
+
+import org.apache.pinot.core.data.partition.PartitionFunction;
+import org.apache.pinot.core.data.partition.PartitionFunctionFactory;
+import org.apache.pinot.spi.config.table.ColumnPartitionConfig;
+import org.apache.pinot.spi.data.readers.GenericRow;
+
+
+/**
+ * Partitioner which computes partition values based on the ColumnPartitionConfig from the table config
+ */
+public class TableConfigPartitioner implements Partitioner {

Review comment:
       What if I need to align data on time while the table is custom partitioned? (just trying to brainstorm how we will extend the current partitioner to support this)
   
   Then, we could probably add a new partitioner that combines the values from `TableConfigPartitioner` and `TransformationPartitioner`?
   
   e.g. we partition on memberId using murmur, and we need to enable segment merge, so the data also needs to be time aligned.
   
   1. Use the table config partitioner to get the partition id based on murmur on memberId -> let's say `2`
   2. Use the time align partitioner -> let's say `2020/12/12`
   
   Combine 1 & 2 -> `2020/12/12-2` <- example of a combined partitionId
   
   Could we do something like the above?
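   
   A rough sketch of that combination, assuming the `Partitioner` interface from this PR (the `CompositePartitioner` class itself is hypothetical):

```java
import org.apache.pinot.spi.data.readers.GenericRow;

// Hypothetical sketch: delegate to two partitioners and join their values,
// e.g. time-aligned "2020/12/12" + murmur partition "2" -> "2020/12/12-2".
public class CompositePartitioner implements Partitioner {
  private final Partitioner _first;
  private final Partitioner _second;

  public CompositePartitioner(Partitioner first, Partitioner second) {
    _first = first;
    _second = second;
  }

  @Override
  public String getPartition(GenericRow genericRow) {
    return _first.getPartition(genericRow) + "-" + _second.getPartition(genericRow);
  }
}
```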

##########
File path: pinot-core/src/main/java/org/apache/pinot/core/segment/processing/framework/SegmentProcessorConfig.java
##########
@@ -0,0 +1,181 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.segment.processing.framework;
+
+import com.google.common.base.Preconditions;
+import org.apache.pinot.core.segment.processing.collector.CollectorConfig;
+import org.apache.pinot.core.segment.processing.filter.RecordFilterConfig;
+import org.apache.pinot.core.segment.processing.partitioner.PartitioningConfig;
+import org.apache.pinot.core.segment.processing.transformer.RecordTransformerConfig;
+import org.apache.pinot.spi.config.table.TableConfig;
+import org.apache.pinot.spi.data.Schema;
+
+
+/**
+ * Config for configuring the phases of {@link SegmentProcessorFramework}
+ */
+public class SegmentProcessorConfig {

Review comment:
       One requirement for `SegmentMergeRollup` is to be able to set a custom segment name (or at least a prefix plus a sequence id, e.g. `merged_XXX_0...M`).
   
   Where do you think is the best place to configure those?
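   
   For example, one option could be carrying it on the PR's `SegmentConfig` (purely illustrative; the `segmentNamePrefix` field below is a hypothetical addition, not the actual API):

```java
// Purely illustrative: a hypothetical extension of SegmentConfig that carries
// a segment name prefix; the segment generation phase could then produce
// names like "merged_XXX_0", "merged_XXX_1", ..., "merged_XXX_M".
public class SegmentConfig {
  private final int _maxNumRecordsPerSegment;
  private final String _segmentNamePrefix;  // hypothetical new field, e.g. "merged_XXX"

  public SegmentConfig(int maxNumRecordsPerSegment, String segmentNamePrefix) {
    _maxNumRecordsPerSegment = maxNumRecordsPerSegment;
    _segmentNamePrefix = segmentNamePrefix;
  }

  public int getMaxNumRecordsPerSegment() {
    return _maxNumRecordsPerSegment;
  }

  public String getSegmentNamePrefix() {
    return _segmentNamePrefix;
  }
}
```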

##########
File path: pinot-core/src/main/java/org/apache/pinot/core/segment/processing/partitioner/ColumnValuePartitioner.java
##########
@@ -0,0 +1,39 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.segment.processing.partitioner;
+
+import org.apache.pinot.spi.data.readers.GenericRow;
+
+
+/**
+ * Partitioner which extracts a column value as the partition
+ */
+public class ColumnValuePartitioner implements Partitioner {
+
+  private final String _columnName;
+
+  public ColumnValuePartitioner(String columnName) {
+    _columnName = columnName;
+  }
+
+  @Override
+  public String getPartition(GenericRow genericRow) {

Review comment:
       Is this intended for supporting time alignment?
   
   What if the time column granularity is in seconds/hours while the push frequency is DAY?
   
   In that case, we may need to use `TransformationPartitioner`?
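   
   A sketch of what that could look like, assuming an epoch-milliseconds time column (the column name `timeMs` and this standalone partitioner shape are assumptions, not the PR's `TransformationPartitioner` API):

```java
import java.util.concurrent.TimeUnit;
import org.apache.pinot.spi.data.readers.GenericRow;

// Illustrative only: bucket an epoch-millis time column to a day id, so rows
// land in day-aligned partitions even though the column granularity is finer
// than the DAY push frequency. The column name "timeMs" is assumed.
public class DayAlignedPartitioner implements Partitioner {
  @Override
  public String getPartition(GenericRow genericRow) {
    long timeMs = (long) genericRow.getValue("timeMs");
    return String.valueOf(TimeUnit.MILLISECONDS.toDays(timeMs));
  }
}
```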

##########
File path: pinot-core/src/main/java/org/apache/pinot/core/segment/processing/partitioner/NoOpPartitioner.java
##########
@@ -0,0 +1,32 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.segment.processing.partitioner;
+
+import org.apache.pinot.spi.data.readers.GenericRow;
+
+
+/**
+ * Partitioner implementation which always returns constant partition value "0"
+ */
+public class NoOpPartitioner implements Partitioner {
+  @Override
+  public String getPartition(GenericRow genericRow) {
+    return "0";

Review comment:
       No-op partitioner means that we always create a single output file?

##########
File path: pinot-core/src/main/java/org/apache/pinot/core/segment/processing/framework/SegmentProcessorFramework.java
##########
@@ -0,0 +1,194 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.segment.processing.framework;
+
+import com.google.common.base.Preconditions;
+import java.io.File;
+import java.util.Arrays;
+import org.apache.commons.io.FileUtils;
+import org.apache.pinot.common.utils.TarGzCompressionUtils;
+import org.apache.pinot.core.indexsegment.generator.SegmentGeneratorConfig;
+import org.apache.pinot.core.segment.creator.impl.SegmentIndexCreationDriverImpl;
+import org.apache.pinot.spi.config.table.TableConfig;
+import org.apache.pinot.spi.data.Schema;
+import org.apache.pinot.spi.data.readers.FileFormat;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+/**
+ * A framework to process "m" given segments and convert them into "n" segments
+ * The phases of the Segment Processor are
+ * 1. Map - record transformation, partitioning, partition filtering
+ * 2. Reduce - rollup, concat, split etc
+ * 3. Segment generation
+ *
+ * This will typically be used by minion tasks which want to perform some processing on segments
+ * (e.g. a task which merges segments, a task which aligns segments to time boundaries, etc.)
+ */
+public class SegmentProcessorFramework {
+
+  private static final Logger LOGGER = LoggerFactory.getLogger(SegmentProcessorFramework.class);
+
+  private final File _inputSegmentsDir;
+  private final File _outputSegmentsDir;
+  private final SegmentProcessorConfig _segmentProcessorConfig;
+
+  private final Schema _pinotSchema;
+  private final TableConfig _tableConfig;
+
+  private final File _baseDir;
+  private final File _mapperInputDir;
+  private final File _mapperOutputDir;
+  private final File _reducerOutputDir;
+
+  /**
+   * Initializes the Segment Processor framework with input segments, output path and processing config
+   * @param inputSegmentsDir directory containing the input segments. These can be tarred or untarred.
+   * @param segmentProcessorConfig config for segment processing
+   * @param outputSegmentsDir directory for placing the resulting segments. This should already exist.
+   */
+  public SegmentProcessorFramework(File inputSegmentsDir, SegmentProcessorConfig segmentProcessorConfig,
+      File outputSegmentsDir) {
+
+    LOGGER.info(
+        "Initializing SegmentProcessorFramework with input segments dir: {}, output segments dir: {} and segment processor config: {}",
+        inputSegmentsDir.getAbsolutePath(), outputSegmentsDir.getAbsolutePath(), segmentProcessorConfig.toString());
+
+    _inputSegmentsDir = inputSegmentsDir;
+    Preconditions.checkState(_inputSegmentsDir.exists() && _inputSegmentsDir.isDirectory(),
+        "Input path: %s must be a directory with Pinot segments", _inputSegmentsDir.getAbsolutePath());
+    _outputSegmentsDir = outputSegmentsDir;
+    Preconditions.checkState(
+        _outputSegmentsDir.exists() && _outputSegmentsDir.isDirectory() && (_outputSegmentsDir.list().length == 0),
+        "Must provide existing empty output directory: %s", _outputSegmentsDir.getAbsolutePath());
+
+    _segmentProcessorConfig = segmentProcessorConfig;
+    _pinotSchema = segmentProcessorConfig.getSchema();
+    _tableConfig = segmentProcessorConfig.getTableConfig();
+
+    _baseDir = new File(FileUtils.getTempDirectory(), "segment_processor_" + System.currentTimeMillis());
+    FileUtils.deleteQuietly(_baseDir);
+    Preconditions.checkState(_baseDir.mkdirs(), "Failed to create base directory: %s for SegmentProcessor", _baseDir);
+    _mapperInputDir = new File(_baseDir, "mapper_input");
+    Preconditions
+        .checkState(_mapperInputDir.mkdirs(), "Failed to create mapper input directory: %s for SegmentProcessor",
+            _mapperInputDir);
+    _mapperOutputDir = new File(_baseDir, "mapper_output");
+    Preconditions
+        .checkState(_mapperOutputDir.mkdirs(), "Failed to create mapper output directory: %s for SegmentProcessor",
+            _mapperOutputDir);
+    _reducerOutputDir = new File(_baseDir, "reducer_output");
+    Preconditions
+        .checkState(_reducerOutputDir.mkdirs(), "Failed to create reducer output directory: %s for SegmentProcessor",
+            _reducerOutputDir);
+  }
+
+  /**
+   * Processes segments from the input directory as per the provided configs, then puts resulting segments into the output directory
+   */
+  public void processSegments()
+      throws Exception {
+
+    // Check for input segments
+    File[] segmentFiles = _inputSegmentsDir.listFiles();
+    if (segmentFiles.length == 0) {
+      throw new IllegalStateException("No segments found in input dir: " + 
_inputSegmentsDir.getAbsolutePath()
+          + ". Exiting SegmentProcessorFramework.");
+    }
+
+    // Mapper phase.
+    LOGGER.info("Beginning mapper phase. Processing segments: {}", 
Arrays.toString(_inputSegmentsDir.list()));
+    for (File segment : segmentFiles) {
+
+      String fileName = segment.getName();
+      File mapperInput = segment;
+
+      // Untar the segments if needed
+      if (!segment.isDirectory()) {
+        if (fileName.endsWith(".tar.gz") || fileName.endsWith(".tgz")) {
+          mapperInput = TarGzCompressionUtils.untar(segment, _mapperInputDir).get(0);
+        } else {
+          throw new IllegalStateException("Unsupported segment format: " + 
segment.getAbsolutePath());
+        }
+      }
+
+      // Set mapperId as the name of the segment
+      SegmentMapperConfig mapperConfig =
+          new SegmentMapperConfig(_pinotSchema, _segmentProcessorConfig.getRecordTransformerConfig(),
+              _segmentProcessorConfig.getRecordFilterConfig(), _segmentProcessorConfig.getPartitioningConfig());
+      SegmentMapper mapper = new SegmentMapper(mapperInput.getName(), mapperInput, mapperConfig, _mapperOutputDir);
+      mapper.map();
+      mapper.cleanup();
+    }
+
+    // Check for mapper output files
+    File[] mapperOutputFiles = _mapperOutputDir.listFiles();
+    if (mapperOutputFiles.length == 0) {
+      throw new IllegalStateException("No files found in mapper output 
directory: " + _mapperOutputDir.getAbsolutePath()
+          + ". Exiting SegmentProcessorFramework.");
+    }
+
+    // Reducer phase.
+    LOGGER.info("Beginning reducer phase. Processing files: {}", 
Arrays.toString(_mapperOutputDir.list()));
+    // Mapper output directory has 1 directory per partition, named after the 
partition. Each directory contains 1 or more avro files.
+    for (File partDir : mapperOutputFiles) {
+
+      // Set partition as reducerId
+      SegmentReducerConfig reducerConfig =
+          new SegmentReducerConfig(_pinotSchema, _segmentProcessorConfig.getCollectorConfig(),
+              _segmentProcessorConfig.getSegmentConfig().getMaxNumRecordsPerSegment());
+      SegmentReducer reducer = new SegmentReducer(partDir.getName(), partDir, reducerConfig, _reducerOutputDir);
+      reducer.reduce();
+      reducer.cleanup();
+    }
+
+    // Check for reducer output files
+    File[] reducerOutputFiles = _reducerOutputDir.listFiles();
+    if (reducerOutputFiles.length == 0) {
+      throw new IllegalStateException(
+          "No files found in reducer output directory: " + _reducerOutputDir.getAbsolutePath()
+              + ". Exiting SegmentProcessorFramework.");
+    }
+
+    // Segment generation phase.
+    LOGGER.info("Beginning segment generation phase. Processing files: {}", 
Arrays.toString(_reducerOutputDir.list()));
+    // Reducer output directory will have 1 or more avro files
+    for (File resultFile : reducerOutputFiles) {

Review comment:
       Did you check the output segment names when the output is more than one file?
   
   It's possible that the final segments may end up with the same segment name (e.g. `<tablename>_<start>_<end>`).
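   
   One way to avoid the collision could be a per-file sequence id (illustrative only; this assumes `SegmentGeneratorConfig` exposes `setSequenceId`, and `sequenceId` is a hypothetical loop counter):

```java
// Illustrative only: give each reducer output file a distinct sequence id so
// the generated segment names don't collide. Assumes SegmentGeneratorConfig
// has setSequenceId(int).
int sequenceId = 0;
for (File resultFile : reducerOutputFiles) {
  SegmentGeneratorConfig generatorConfig = new SegmentGeneratorConfig(_tableConfig, _pinotSchema);
  generatorConfig.setSequenceId(sequenceId++);  // name becomes e.g. <tablename>_<start>_<end>_0
  // ... set the input file and output dir, then run SegmentIndexCreationDriverImpl
}
```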




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org
