[GitHub] [incubator-pinot] snleee commented on a change in pull request #3942: Set processingException when all queried segments cannot be acquired

2019-03-20 Thread GitBox
snleee commented on a change in pull request #3942: Set processingException 
when all queried segments cannot be acquired
URL: https://github.com/apache/incubator-pinot/pull/3942#discussion_r267609303
 
 

 ##
 File path: 
pinot-core/src/main/java/org/apache/pinot/core/data/manager/BaseTableDataManager.java
 ##
 @@ -156,6 +165,35 @@ public void removeSegment(@Nonnull String segmentName) {
 }
   }
 
+  /**
+   * Called when a segment is deleted. The actual handling of the segment delete is outside of this method.
+   * This method provides book-keeping around deleted segments.
+   * @param segmentName name of the segment to track.
+   */
+  public void trackDeletedSegment(@Nonnull String segmentName) {
+    // add segment to the cache
+    _deletedSegmentsCache.put(segmentName, true);
+  }
+
+  /**
+   * Check if a segment was recently deleted.
+   *
+   * @param segmentName name of the segment to check.
+   * @return true if the segment is in the cache, false otherwise
+   */
+  public boolean isRecentlyDeleted(@Nonnull String segmentName) {
+    return _deletedSegmentsCache.getIfPresent(segmentName) != null;
+  }
+
+  /**
+   * Remove a segment from the deleted cache if it is being added back.
+   *
+   * @param segmentName name of the segment that needs to be removed from the cache (if needed)
+   */
+  private void untrackIfDeleted(@Nonnull String segmentName) {
 
 Review comment:
   In my opinion, `untrackDeletedSegment()` seems to be better. 
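
For readers skimming the thread: the book-keeping quoted above can be sketched without the cache dependency (the PR appears to use a Guava Cache, given `getIfPresent`). Everything below — class name, TTL value, and the map-based expiry — is an illustrative assumption, not the PR's actual implementation:

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical, dependency-free sketch of deleted-segment book-keeping.
// The real PR seems to use a Guava Cache; names and TTL are assumptions.
public class DeletedSegmentTracker {
  private static final long TTL_MILLIS = 5 * 60 * 1000L; // assumed retention window

  private final ConcurrentHashMap<String, Long> _deletedAt = new ConcurrentHashMap<>();

  /** Record that a segment was deleted (book-keeping only). */
  public void trackDeletedSegment(String segmentName) {
    _deletedAt.put(segmentName, System.currentTimeMillis());
  }

  /** True if the segment was deleted within the TTL window. */
  public boolean isRecentlyDeleted(String segmentName) {
    Long ts = _deletedAt.get(segmentName);
    return ts != null && System.currentTimeMillis() - ts < TTL_MILLIS;
  }

  /** Reviewer-suggested name: forget a segment that is being added back. */
  public void untrackDeletedSegment(String segmentName) {
    _deletedAt.remove(segmentName);
  }
}
```

A Guava-based version would get the expiry for free via `CacheBuilder.expireAfterWrite`; the map-with-timestamps variant above only illustrates the semantics.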


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[GitHub] [incubator-pinot] ly923976094 edited a comment on issue #3978: The segments are stored in memory

2019-03-20 Thread GitBox
ly923976094 edited a comment on issue #3978: The segments are stored in memory
URL: 
https://github.com/apache/incubator-pinot/issues/3978#issuecomment-475092629
 
 
   My understanding is that when I change the table's loading mode, for
example from HEAP to MMAP, I need to disable the table first and then enable
it; all the segments are then reloaded, the direct-allocated memory is
released, and the segments are loaded in MMAP mode. Newly uploaded segments
will also use MMAP.
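
For context, the loading mode is a table-config setting, so the switch described above is a config edit followed by the disable/enable (or reload). A minimal fragment — field placement per Pinot's table config of this era, values illustrative:

```json
{
  "tableIndexConfig": {
    "loadMode": "MMAP"
  }
}
```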





[GitHub] [incubator-pinot] ly923976094 commented on issue #3978: The segments are stored in memory

2019-03-20 Thread GitBox
ly923976094 commented on issue #3978: The segments are stored in memory
URL: 
https://github.com/apache/incubator-pinot/issues/3978#issuecomment-475092629
 
 
   My understanding is that when I change the table's loading mode, for
example from HEAP to MMAP, I need to disable the table first and then enable
it; all the segments are then reloaded, the direct-allocated memory is
released, and the segments are loaded in MMAP mode.





[GitHub] [incubator-pinot] jihaozh merged pull request #3999: [TE] Aggregation function and double series aggregation mapping

2019-03-20 Thread GitBox
jihaozh merged pull request #3999: [TE] Aggregation function and double series 
aggregation mapping
URL: https://github.com/apache/incubator-pinot/pull/3999
 
 
   





[incubator-pinot] branch master updated: [TE] Aggregation function and double series aggregation mapping (#3999)

2019-03-20 Thread jihao
This is an automated email from the ASF dual-hosted git repository.

jihao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git


The following commit(s) were added to refs/heads/master by this push:
 new d78160d  [TE] Aggregation function and double series aggregation 
mapping (#3999)
d78160d is described below

commit d78160dd855430ab1f03f00ac70029cb960196ae
Author: Jihao Zhang 
AuthorDate: Wed Mar 20 18:48:05 2019 -0700

[TE] Aggregation function and double series aggregation mapping (#3999)
---
 .../pinot/thirdeye/rootcause/timeseries/BaselineAggregateType.java  | 2 ++
 1 file changed, 2 insertions(+)

diff --git 
a/thirdeye/thirdeye-pinot/src/main/java/org/apache/pinot/thirdeye/rootcause/timeseries/BaselineAggregateType.java
 
b/thirdeye/thirdeye-pinot/src/main/java/org/apache/pinot/thirdeye/rootcause/timeseries/BaselineAggregateType.java
index 33eb37e..ce31112 100644
--- 
a/thirdeye/thirdeye-pinot/src/main/java/org/apache/pinot/thirdeye/rootcause/timeseries/BaselineAggregateType.java
+++ 
b/thirdeye/thirdeye-pinot/src/main/java/org/apache/pinot/thirdeye/rootcause/timeseries/BaselineAggregateType.java
@@ -32,6 +32,8 @@ public enum BaselineAggregateType {
   SUM(DoubleSeries.SUM),
   PRODUCT(DoubleSeries.PRODUCT),
   MEAN(DoubleSeries.MEAN),
+  AVG(DoubleSeries.MEAN),
+  COUNT(DoubleSeries.SUM),
   MEDIAN(DoubleSeries.MEDIAN),
   MIN(DoubleSeries.MIN),
   MAX(DoubleSeries.MAX),
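
The two lines added above alias AVG to the MEAN reducer and map COUNT onto SUM. A self-contained sketch of that enum-to-reducer mapping — `DoubleSeries` is ThirdEye's own class, so the stand-in reducers below are assumptions for illustration:

```java
import java.util.Arrays;
import java.util.function.ToDoubleFunction;

// Illustrative stand-in for BaselineAggregateType: each constant carries a
// reducer over a double[]. Not ThirdEye's actual DoubleSeries machinery.
public enum AggregationSketch {
  SUM(v -> Arrays.stream(v).sum()),
  MEAN(v -> Arrays.stream(v).average().orElse(Double.NaN)),
  // AVG uses the same reducer as MEAN, mirroring AVG(DoubleSeries.MEAN)
  AVG(v -> Arrays.stream(v).average().orElse(Double.NaN)),
  // the commit maps COUNT to DoubleSeries.SUM; here we count directly
  COUNT(v -> (double) v.length);

  private final ToDoubleFunction<double[]> _reducer;

  AggregationSketch(ToDoubleFunction<double[]> reducer) {
    _reducer = reducer;
  }

  public double apply(double[] values) {
    return _reducer.applyAsDouble(values);
  }
}
```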





[GitHub] [incubator-pinot] kishoreg commented on issue #3998: Upgrade to use Kafka release 2.1.1

2019-03-20 Thread GitBox
kishoreg commented on issue #3998: Upgrade to use Kafka release 2.1.1
URL: 
https://github.com/apache/incubator-pinot/issues/3998#issuecomment-475082511
 
 
   I was thinking along the same lines. Let's do that. That will allow us to
make sure we are backwards compatible.





[GitHub] [incubator-pinot] mcvsubbu commented on issue #3998: Upgrade to use Kafka release 2.1.1

2019-03-20 Thread GitBox
mcvsubbu commented on issue #3998: Upgrade to use Kafka release 2.1.1
URL: 
https://github.com/apache/incubator-pinot/issues/3998#issuecomment-475081698
 
 
   Sure, we can take all of the current Kafka code to a separate repo that 
depends on kafka=0.9.x and build other repos for newer Kafkas (if they change 
source compat). That should be easy to do since we have pluggable streams.





[GitHub] [incubator-pinot] kishoreg commented on issue #3998: Upgrade to use Kafka release 2.1.1

2019-03-20 Thread GitBox
kishoreg commented on issue #3998: Upgrade to use Kafka release 2.1.1
URL: 
https://github.com/apache/incubator-pinot/issues/3998#issuecomment-475080724
 
 
   The first step would be to decouple Pinot core from Kafka, right? Once we
have that, we can add multiple implementations like Kafka-0.9.x, Kafka-1.x,
and Kafka-2.x.
   
   This will be a good test/use of the pluggable streaming source we built.





[GitHub] [incubator-pinot] mcvsubbu commented on issue #3998: Upgrade to use latest Kafka release

2019-03-20 Thread GitBox
mcvsubbu commented on issue #3998: Upgrade to use latest Kafka release
URL: 
https://github.com/apache/incubator-pinot/issues/3998#issuecomment-475079057
 
 
   Maybe we can consider moving to this
   https://github.com/linkedin/li-apache-kafka-clients if it is compatible





[incubator-pinot] branch master updated: In TableConfig, add checks for mandatory fields (#3993)

2019-03-20 Thread jackie
This is an automated email from the ASF dual-hosted git repository.

jackie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git


The following commit(s) were added to refs/heads/master by this push:
 new eccf573  In TableConfig, add checks for mandatory fields (#3993)
eccf573 is described below

commit eccf573a636de84e60c85cc331fea0afc172c90c
Author: Xiaotian (Jackie) Jiang <1751+jackie-ji...@users.noreply.github.com>
AuthorDate: Wed Mar 20 17:28:41 2019 -0700

In TableConfig, add checks for mandatory fields (#3993)

Add explicit checks for mandatory fields when serialize/deserialize table 
config
Without the explicit checks, it will throw NPE, which is not clear and hard 
to debug

Also change the serialize APIs to be non-static

Add unit test and integration test for the changes
---
 .../queryquota/TableQueryQuotaManagerTest.java |  15 +-
 .../broker/routing/TimeBoundaryServiceTest.java|   3 +-
 .../HighLevelConsumerRoutingTableBuilderTest.java  |   4 +-
 .../LowLevelConsumerRoutingTableBuilderTest.java   |  12 +-
 .../apache/pinot/common/config/TableConfig.java| 235 --
 .../pinot/common/config/TableConfigTest.java   | 336 -
 .../resources/PinotTableConfigRestletResource.java | 140 +
 .../api/resources/PinotTableRestletResource.java   |  12 +-
 .../helix/core/PinotHelixResourceManager.java  |  13 +-
 .../controller/util/AutoAddInvertedIndex.java  |   2 +-
 .../resources/PinotTableRestletResourceTest.java   |  34 +--
 .../resources/PinotTenantRestletResourceTest.java  |   2 +-
 .../helix/ControllerInstanceToggleTest.java|   2 +-
 .../controller/helix/ControllerSentinelTestV2.java |   2 +-
 .../pinot/hadoop/job/DefaultControllerRestApi.java |   2 +-
 .../pinot/hadoop/job/SegmentCreationJob.java   |   2 +-
 .../pinot/integration/tests/ClusterTest.java   |  11 +-
 .../tests/OfflineClusterIntegrationTest.java   |  18 ++
 .../tools/query/comparison/ClusterStarter.java |   2 +-
 19 files changed, 481 insertions(+), 366 deletions(-)

diff --git 
a/pinot-broker/src/test/java/org/apache/pinot/broker/queryquota/TableQueryQuotaManagerTest.java
 
b/pinot-broker/src/test/java/org/apache/pinot/broker/queryquota/TableQueryQuotaManagerTest.java
index 5ccd243..1b5d709 100644
--- 
a/pinot-broker/src/test/java/org/apache/pinot/broker/queryquota/TableQueryQuotaManagerTest.java
+++ 
b/pinot-broker/src/test/java/org/apache/pinot/broker/queryquota/TableQueryQuotaManagerTest.java
@@ -146,7 +146,7 @@ public class TableQueryQuotaManagerTest {
 
.setRetentionTimeUnit("DAYS").setRetentionTimeValue("1").setSegmentPushType("APPEND")
 
.setBrokerTenant("testBroker").setServerTenant("testServer").build();
 ZKMetadataProvider
-    .setRealtimeTableConfig(_testPropertyStore, REALTIME_TABLE_NAME, TableConfig.toZnRecord(realtimeTableConfig));
+    .setRealtimeTableConfig(_testPropertyStore, REALTIME_TABLE_NAME, realtimeTableConfig.toZNRecord());
 
 ExternalView brokerResource = generateBrokerResource(OFFLINE_TABLE_NAME);
 TableConfig tableConfig = generateDefaultTableConfig(OFFLINE_TABLE_NAME);
@@ -169,7 +169,7 @@ public class TableQueryQuotaManagerTest {
 
.setRetentionTimeUnit("DAYS").setRetentionTimeValue("1").setSegmentPushType("APPEND")
 
.setBrokerTenant("testBroker").setServerTenant("testServer").build();
 ZKMetadataProvider
-    .setRealtimeTableConfig(_testPropertyStore, REALTIME_TABLE_NAME, TableConfig.toZnRecord(realtimeTableConfig));
+    .setRealtimeTableConfig(_testPropertyStore, REALTIME_TABLE_NAME, realtimeTableConfig.toZNRecord());
 
 ExternalView brokerResource = generateBrokerResource(REALTIME_TABLE_NAME);
 TableConfig tableConfig = generateDefaultTableConfig(OFFLINE_TABLE_NAME);
@@ -205,9 +205,8 @@ public class TableQueryQuotaManagerTest {
 
.setBrokerTenant("testBroker").setServerTenant("testServer").build();
 
 ZKMetadataProvider
-    .setRealtimeTableConfig(_testPropertyStore, REALTIME_TABLE_NAME, TableConfig.toZnRecord(realtimeTableConfig));
-    ZKMetadataProvider
-    .setOfflineTableConfig(_testPropertyStore, OFFLINE_TABLE_NAME, TableConfig.toZnRecord(offlineTableConfig));
+    .setRealtimeTableConfig(_testPropertyStore, REALTIME_TABLE_NAME, realtimeTableConfig.toZNRecord());
+    ZKMetadataProvider.setOfflineTableConfig(_testPropertyStore, OFFLINE_TABLE_NAME, offlineTableConfig.toZNRecord());
 
     // Since each table has 2 online brokers, per broker rate becomes 100.0 / 2 = 50.0
     _tableQueryQuotaManager.initTableQueryQuota(offlineTableConfig, brokerResource);
@@ -261,8 +260,7 @@ public class TableQueryQuotaManagerTest {
 new 
TableConfig.Builder(TableType.OFFLINE).setTableName(RAW_TABLE_NAME).setQuotaConfig(quotaConfig)
 

[GitHub] [incubator-pinot] Jackie-Jiang merged pull request #3993: In TableConfig, add checks for mandatory fields

2019-03-20 Thread GitBox
Jackie-Jiang merged pull request #3993: In TableConfig, add checks for 
mandatory fields
URL: https://github.com/apache/incubator-pinot/pull/3993
 
 
   





[incubator-pinot] branch table_config_mandatory_fields deleted (was 9bc06ed)

2019-03-20 Thread jackie
This is an automated email from the ASF dual-hosted git repository.

jackie pushed a change to branch table_config_mandatory_fields
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 was 9bc06ed  In TableConfig, add checks for mandatory fields

The revisions that were on this branch are still contained in
other references; therefore, this change does not discard any commits
from the repository.





[incubator-pinot] branch orc updated: fixing unit test

2019-03-20 Thread jenniferdai
This is an automated email from the ASF dual-hosted git repository.

jenniferdai pushed a commit to branch orc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git


The following commit(s) were added to refs/heads/orc by this push:
 new 7f6263b  fixing unit test
7f6263b is described below

commit 7f6263be0737951dbafbe77e50c0fecd3398e2bc
Author: Jennifer Dai 
AuthorDate: Wed Mar 20 17:27:36 2019 -0700

fixing unit test
---
 .../java/org/apache/pinot/orc/data/readers/ORCRecordReader.java | 6 --
 .../java/org/apache/pinot/orc/data/readers/ORCRecordReaderTest.java | 3 ++-
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git 
a/pinot-orc/src/main/java/org/apache/pinot/orc/data/readers/ORCRecordReader.java
 
b/pinot-orc/src/main/java/org/apache/pinot/orc/data/readers/ORCRecordReader.java
index 02dac23..3c4c586 100644
--- 
a/pinot-orc/src/main/java/org/apache/pinot/orc/data/readers/ORCRecordReader.java
+++ 
b/pinot-orc/src/main/java/org/apache/pinot/orc/data/readers/ORCRecordReader.java
@@ -135,9 +135,11 @@ public class ORCRecordReader implements RecordReader {
           continue;
         }
         int currColRowIndex = currColumn.getId();
-        ColumnVector vector = rowBatch.cols[currColRowIndex];
+        // Struct is top level, so the id of the struct is 0. However, the children start from 1+, etc, so we need to
+        // subtract one since the row batch we get has only children column vectors
+        ColumnVector vector = rowBatch.cols[currColRowIndex - 1];
         // Previous value set to null, not used except to save allocation memory in OrcMapredRecordReader
-        WritableComparable writableComparable = OrcMapredRecordReader.nextValue(vector, currColRowIndex, currColumn, null);
+        WritableComparable writableComparable = OrcMapredRecordReader.nextValue(vector, 0, currColumn, null);
         genericRow.putField(currColumnName, getBaseObject(writableComparable));
       }
     } else {
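
The `- 1` in the hunk above exists because in an ORC `TypeDescription` the enclosing struct has id 0 and its children are numbered from 1, while `VectorizedRowBatch.cols` holds only the children and is 0-based. A trivial sketch of that mapping (the helper name is made up, not part of the commit):

```java
// Hypothetical helper illustrating the ORC id-to-column-index fix: child
// TypeDescription ids are 1-based (id 0 is the struct itself), but the
// children-only VectorizedRowBatch.cols array is 0-based.
public final class OrcColumnIndex {
  private OrcColumnIndex() {
  }

  /** Maps a child TypeDescription id (1-based) to its 0-based rowBatch.cols index. */
  public static int batchIndexForChildId(int childId) {
    if (childId < 1) {
      throw new IllegalArgumentException("id 0 is the enclosing struct, not a column");
    }
    return childId - 1;
  }
}
```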
diff --git 
a/pinot-orc/src/test/java/org/apache/pinot/orc/data/readers/ORCRecordReaderTest.java
 
b/pinot-orc/src/test/java/org/apache/pinot/orc/data/readers/ORCRecordReaderTest.java
index 6bea742..c96e55d 100644
--- 
a/pinot-orc/src/test/java/org/apache/pinot/orc/data/readers/ORCRecordReaderTest.java
+++ 
b/pinot-orc/src/test/java/org/apache/pinot/orc/data/readers/ORCRecordReaderTest.java
@@ -54,6 +54,7 @@ public class ORCRecordReaderTest {
 FileUtils.deleteQuietly(TEMP_DIR);
 TypeDescription schema =
 TypeDescription.fromString("struct");
+
 Writer writer = OrcFile.createWriter(new Path(ORC_FILE.getAbsolutePath()),
 OrcFile.writerOptions(new Configuration())
 .setSchema(schema));
@@ -103,7 +104,7 @@ public class ORCRecordReaderTest {
 
     for (int i = 0; i < genericRows.size(); i++) {
       Assert.assertEquals(genericRows.get(i).getValue("x"), i);
-      Assert.assertEquals(genericRows.get(i).getValue("y"), ("Last-" + (i * 3)).getBytes(StandardCharsets.UTF_8));
+      Assert.assertEquals(genericRows.get(i).getValue("y"), "Last-" + (i * 3));
     }
   }
 





[GitHub] [incubator-pinot] jihaozh opened a new pull request #3999: [TE] Aggregation function and double series aggregation mapping

2019-03-20 Thread GitBox
jihaozh opened a new pull request #3999: [TE] Aggregation function and double 
series aggregation mapping
URL: https://github.com/apache/incubator-pinot/pull/3999
 
 
   





[GitHub] [incubator-pinot] mcvsubbu opened a new issue #3998: Upgrade to use latest Kafka release

2019-03-20 Thread GitBox
mcvsubbu opened a new issue #3998: Upgrade to use latest Kafka release
URL: https://github.com/apache/incubator-pinot/issues/3998
 
 
   Kafka release has a stable version of 2.1.1 
(https://kafka.apache.org/downloads)
   
   Pinot should upgrade to a more recent version of Kafka
   
   This release is source incompatible with 0.9 (which Pinot currently uses).
We need to:
   * Evaluate the compatibility of the new Kafka client with older brokers.
If they are not compatible, we should upgrade in steps, getting to 2.1
eventually.
   * Ensure that new Kafka client still provides the functionality we rely on, 
and move Pinot to use new Kafka clients.
   
   Meanwhile, one solution is to use pluggable streams, but we need to make
sure that there is no classpath collision or other undesirable behavior.





[incubator-pinot] branch orc updated: Adding ORC Reader Test

2019-03-20 Thread jenniferdai
This is an automated email from the ASF dual-hosted git repository.

jenniferdai pushed a commit to branch orc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git


The following commit(s) were added to refs/heads/orc by this push:
 new 91767a1  Adding ORC Reader Test
91767a1 is described below

commit 91767a15e7f02bc7277dc8acd144bf490414ece4
Author: Jennifer Dai 
AuthorDate: Wed Mar 20 16:23:51 2019 -0700

Adding ORC Reader Test
---
 .../pinot/orc/data/readers/ORCRecordReader.java|  10 +-
 .../orc/data/readers/ORCRecordReaderTest.java  | 115 +
 2 files changed, 121 insertions(+), 4 deletions(-)

diff --git 
a/pinot-orc/src/main/java/org/apache/pinot/orc/data/readers/ORCRecordReader.java
 
b/pinot-orc/src/main/java/org/apache/pinot/orc/data/readers/ORCRecordReader.java
index b42262a..02dac23 100644
--- 
a/pinot-orc/src/main/java/org/apache/pinot/orc/data/readers/ORCRecordReader.java
+++ 
b/pinot-orc/src/main/java/org/apache/pinot/orc/data/readers/ORCRecordReader.java
@@ -64,13 +64,15 @@ public class ORCRecordReader implements RecordReader {
   org.apache.orc.RecordReader _recordReader;
   VectorizedRowBatch _reusableVectorizedRowBatch;
 
+  public static final String LOCAL_FS_PREFIX = "file://";
+
   private static final Logger LOGGER = 
LoggerFactory.getLogger(ORCRecordReader.class);
 
   private void init(String inputPath, Schema schema) {
 Configuration conf = new Configuration();
 LOGGER.info("Creating segment for {}", inputPath);
 try {
-  Path orcReaderPath = new Path("file://" + inputPath);
+  Path orcReaderPath = new Path(LOCAL_FS_PREFIX + inputPath);
   LOGGER.info("orc reader path is {}", orcReaderPath);
   _reader = OrcFile.createReader(orcReaderPath, 
OrcFile.readerOptions(conf));
   _orcSchema = _reader.getSchema();
@@ -119,7 +121,7 @@ public class ORCRecordReader implements RecordReader {
 return reuse;
   }
 
-  private void fillGenericRow(GenericRow genericRow, VectorizedRowBatch rowBatch) throws IOException {
+  private void fillGenericRow(GenericRow genericRow, VectorizedRowBatch rowBatch) {
     // ORC's TypeDescription is the equivalent of a schema. The way we will support ORC in Pinot
     // will be to get the top level struct that contains all our fields and look through its
     // children to determine the fields in our schemas.
@@ -127,7 +129,7 @@ public class ORCRecordReader implements RecordReader {
       for (int i = 0; i < _orcSchema.getChildren().size(); i++) {
         // Get current column in schema
         TypeDescription currColumn = _orcSchema.getChildren().get(i);
-        String currColumnName = currColumn.getFieldNames().get(0);
+        String currColumnName = _orcSchema.getFieldNames().get(i);
         if (!_pinotSchema.getColumnNames().contains(currColumnName)) {
           LOGGER.warn("Skipping column {} because it is not in pinot schema", currColumnName);
           continue;
@@ -135,7 +137,7 @@ public class ORCRecordReader implements RecordReader {
         int currColRowIndex = currColumn.getId();
         ColumnVector vector = rowBatch.cols[currColRowIndex];
         // Previous value set to null, not used except to save allocation memory in OrcMapredRecordReader
-        WritableComparable writableComparable = OrcMapredRecordReader.nextValue(vector, currColRowIndex, _orcSchema, null);
+        WritableComparable writableComparable = OrcMapredRecordReader.nextValue(vector, currColRowIndex, currColumn, null);
         genericRow.putField(currColumnName, getBaseObject(writableComparable));
       }
     } else {
diff --git 
a/pinot-orc/src/test/java/org/apache/pinot/orc/data/readers/ORCRecordReaderTest.java
 
b/pinot-orc/src/test/java/org/apache/pinot/orc/data/readers/ORCRecordReaderTest.java
new file mode 100644
index 000..6bea742
--- /dev/null
+++ 
b/pinot-orc/src/test/java/org/apache/pinot/orc/data/readers/ORCRecordReaderTest.java
@@ -0,0 +1,115 @@
+package org.apache.pinot.orc.data.readers;
+
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+import java.io.File;
+import java.io.IOException;
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.List;

[GitHub] [incubator-pinot] weiwanning commented on issue #3513: DATETIMECONVERT udf does not work for customized timezone and bucket size > 1 day

2019-03-20 Thread GitBox
weiwanning commented on issue #3513: DATETIMECONVERT udf does not work for 
customized timezone and bucket size > 1 day
URL: 
https://github.com/apache/incubator-pinot/issues/3513#issuecomment-475067265
 
 
   hey @npawar, I ran into the same issue. Option 2 seems ideal for my use
case (it would be better to make Mon/Sun/... an option). Also, could we
support MONTHS and YEARS together with WEEKS?





[GitHub] [incubator-pinot] Jackie-Jiang commented on a change in pull request #3993: In TableConfig, add checks for mandatory fields

2019-03-20 Thread GitBox
Jackie-Jiang commented on a change in pull request #3993: In TableConfig, add 
checks for mandatory fields
URL: https://github.com/apache/incubator-pinot/pull/3993#discussion_r267584320
 
 

 ##
 File path: 
pinot-integration-tests/src/test/java/org/apache/pinot/integration/tests/OfflineClusterIntegrationTest.java
 ##
 @@ -150,6 +152,22 @@ public void testInstancesStarted() {
 }
   }
 
+  @Test
+  public void testInvalidTableConfig() {
+    TableConfig tableConfig =
+        new TableConfig.Builder(CommonConstants.Helix.TableType.OFFLINE).setTableName("badTable").build();
+    ObjectNode jsonConfig = tableConfig.toJsonConfig();
+    // Remove a mandatory field
+    jsonConfig.remove(TableConfig.VALIDATION_CONFIG_KEY);
+    try {
+      sendPostRequest(_controllerRequestURLBuilder.forTableCreate(), jsonConfig.toString());
+      fail();
+    } catch (IOException e) {
+      // Should get response code 400 (BAD_REQUEST)
+      assertTrue(e.getMessage().startsWith("Server returned HTTP response code: 400"));
 
 Review comment:
   I understand your concern, but on this level we don't have access to the 
message. I added checks on the message in the unit test.





[incubator-pinot] branch table_config_mandatory_fields updated (b57dec9 -> 9bc06ed)

2019-03-20 Thread jackie
This is an automated email from the ASF dual-hosted git repository.

jackie pushed a change to branch table_config_mandatory_fields
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard b57dec9  In TableConfig, add checks for mandatory fields
 new 9bc06ed  In TableConfig, add checks for mandatory fields

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (b57dec9)
\
 N -- N -- N   refs/heads/table_config_mandatory_fields (9bc06ed)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 5859 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../apache/pinot/common/config/TableConfig.java| 42 --
 .../pinot/common/config/TableConfigTest.java   | 51 --
 2 files changed, 47 insertions(+), 46 deletions(-)





[GitHub] [incubator-pinot] sunithabeeram commented on a change in pull request #3993: In TableConfig, add checks for mandatory fields

2019-03-20 Thread GitBox
sunithabeeram commented on a change in pull request #3993: In TableConfig, add 
checks for mandatory fields
URL: https://github.com/apache/incubator-pinot/pull/3993#discussion_r267582170
 
 

 ##
 File path: 
pinot-integration-tests/src/test/java/org/apache/pinot/integration/tests/OfflineClusterIntegrationTest.java
 ##
 @@ -150,6 +152,22 @@ public void testInstancesStarted() {
 }
   }
 
+  @Test
+  public void testInvalidTableConfig() {
+    TableConfig tableConfig =
+        new TableConfig.Builder(CommonConstants.Helix.TableType.OFFLINE).setTableName("badTable").build();
+    ObjectNode jsonConfig = tableConfig.toJsonConfig();
+    // Remove a mandatory field
+    jsonConfig.remove(TableConfig.VALIDATION_CONFIG_KEY);
+    try {
+      sendPostRequest(_controllerRequestURLBuilder.forTableCreate(), jsonConfig.toString());
+      fail();
+    } catch (IOException e) {
+      // Should get response code 400 (BAD_REQUEST)
+      assertTrue(e.getMessage().startsWith("Server returned HTTP response code: 400"));
 
 Review comment:
   Can we assert on the rest of the message? (If we don't have access to the 
message at this level, ignore.)
   
   The reason is that the client currently does get a 400, but the message is 
"null", which makes it hard to debug.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch nested-object-indexing updated (c2d822d -> 3140bb9)

2019-03-20 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch nested-object-indexing
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard c2d822d  fixing license header
 discard 635c451  Adding support for bytes type in realtime + nested object 
indexing
 discard 946499e  Wiring up end to end to support indexing nested fields on 
complex objects
 discard 9c09912e Adding support for Object Type
 discard 1f6b4c6  Enhancing PQL to support MATCHES predicate, can be used for 
searching within text, map, json and other complex objects
 discard 9099d30  Adding support for MATCHES Predicate
 new 581464e  Spelling correction (#3977)
 new 636c6c1  [TE] frontend - harleyjj/home - set default date picker to 
yesterday (#3976)
 new 67b729d  [TE] yaml - more validation on max duration (#3982)
 new 9e8e373  [TE] detection - preview a yaml with existing anomalies 
(#3983)
 new 565171f  add config to control kafka fetcher size and increase default 
(#3869)
 new f815e2e  [TE] detection - align metric slices (#3981)
 new df62374  [TE] frontend - harleyjj/preview - default preview to 2 days 
to accomodate daily metrics (#3980)
 new 2c5d42a  [TE] frontend - harleyjj/report-anomaly - adds back 
report-anomaly modal to alert overview (#3985)
 new d2a3d84  [TE] Fix for delayed anomalies due to watermark bug (#3984)
 new fe203b5  Pinot server side change to optimize LLC segment completion 
with direct metadata upload.  (#3941)
 new f26b2f3  [TE] Remove deprecated legacy logic in user dashboard (#3988)
 new 31f4fd0  [TE] frontend - harleyjj/edit-alert - update endpoint for 
preview when editing alert (#3987)
 new 59fd4aa  Add documentation (#3986)
 new 98dcebc  Add experiment section in getting started (#3989)
 new 205ec50  Update managing pinot doc (#3991)
 new d8061f3  [TE] frontend - harleyjj/edit-alert - fix subscription group 
put bug (#3995)
 new 6eb8e79  Fixing type casting issue for BYTES type values during 
realtime segment persistence (#3992)
 new d78a807  [TE] Clean up the yaml editor calls and messages (#3996)
 new e8ac3b3  Adding support for MATCHES Predicate
 new 35eace3  Enhancing PQL to support MATCHES predicate, can be used for 
searching within text, map, json and other complex objects
 new 0b2ddf0  Adding support for Object Type
 new 5e394ca  Wiring up end to end to support indexing nested fields on 
complex objects
 new 2bcdfef  Adding support for bytes type in realtime + nested object 
indexing
 new 48f5b40  fixing license header
 new 3140bb9  Adding simple avro msg decoder which could read avro schema 
from table creation config

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (c2d822d)
\
 N -- N -- N   refs/heads/nested-object-indexing (3140bb9)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 5870 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.
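The history rewrite described above can be reproduced in a throwaway repository: after the old tip is undone and replaced, the discarded commit is no longer reachable from the branch but still exists as an object until garbage collection. The commit subjects B, O1, N1 below mirror the diagram and are purely illustrative:

```shell
set -e
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
git commit -q --allow-empty -m "B"    # common base B
git commit -q --allow-empty -m "O1"   # old tip, later discarded
old_tip="$(git rev-parse HEAD)"
git reset -q --hard HEAD~1            # undo O1 (what a --force push publishes)
git commit -q --allow-empty -m "N1"   # new tip N
git cat-file -t "$old_tip"            # O1 is unreachable from the branch, but the object remains
git log --format=%s                   # history is now N1, B
```

If another ref still pointed at O1, git would report it as "omit" rather than "discard", matching the notice above.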


Summary of changes:
 docs/client_api.rst|   2 +-
 docs/getting_started.rst   | 156 +
 docs/img/generate-segment.png  | Bin 218597 -> 0 bytes
 docs/img/list-schemas.png  | Bin 8952 -> 247946 bytes
 docs/img/pinot-console.png | Bin 0 -> 157310 bytes
 docs/img/query-table.png   | Bin 35914 -> 0 bytes
 docs/img/rebalance-table.png   | Bin 0 -> 164989 bytes
 docs/img/upload-segment.png| Bin 13944 -> 0 bytes
 docs/management_api.rst|  67 +++--
 .../protocols/SegmentCompletionProtocol.java   |   6 +
 .../apache/pinot/common/utils/CommonConstants.java |   1 +
 .../common/utils/FileUploadDownloadClient.java |  24 
 .../manager/config/InstanceDataManagerConfig.java  |   2 +
 .../realtime/LLRealtimeSegmentDataManager.java |  48 ++-
 .../converter/stats/RealtimeColumnStatistics.java  |  20 ++-
 .../impl/kafka/KafkaConnectionHandler.java |  26 ++--
 .../impl/kafka/KafkaLowLevelStreamConfig.java  |  34 -
 .../impl/kafka/KafkaPartitionLevelConsumer.java|  21 ++-
 .../impl/kafka/KafkaStreamConfigProperties.java|   3 +
 

[GitHub] [incubator-pinot] Jackie-Jiang commented on a change in pull request #3993: In TableConfig, add checks for mandatory fields

2019-03-20 Thread GitBox
Jackie-Jiang commented on a change in pull request #3993: In TableConfig, add 
checks for mandatory fields
URL: https://github.com/apache/incubator-pinot/pull/3993#discussion_r267580774
 
 

 ##
 File path: 
pinot-common/src/main/java/org/apache/pinot/common/config/TableConfig.java
 ##
 @@ -118,14 +118,24 @@ public static TableConfig fromJsonString(String jsonString)
   @Nonnull
   public static TableConfig fromJSONConfig(@Nonnull JsonNode jsonConfig)
   throws IOException {
+// Mandatory fields
+Preconditions.checkState(jsonConfig.has(TABLE_TYPE_KEY), "Table type is missing");
 TableType tableType = TableType.valueOf(jsonConfig.get(TABLE_TYPE_KEY).asText().toUpperCase());
+Preconditions.checkState(jsonConfig.has(TABLE_NAME_KEY), "Table name is missing");
 String tableName = TableNameBuilder.forType(tableType).tableNameWithType(jsonConfig.get(TABLE_NAME_KEY).asText());
-
+Preconditions
+.checkState(jsonConfig.has(VALIDATION_CONFIG_KEY), "Mandatory config '%s' is missing", VALIDATION_CONFIG_KEY);
 
 Review comment:
   Done
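The '%s' template in the check above is Guava's Preconditions style: placeholders are substituted left to right and the result becomes the IllegalStateException message. A minimal stand-in (not Guava itself) showing the behavior the diff relies on; the config-key names here are only examples:

```java
public class CheckStateDemo {
  // Simplified version of Guava's Preconditions.checkState: each %s is
  // replaced, left to right, by the corresponding argument.
  static void checkState(boolean condition, String template, Object... args) {
    if (!condition) {
      StringBuilder sb = new StringBuilder();
      int argIndex = 0;
      int from = 0;
      int at;
      while ((at = template.indexOf("%s", from)) >= 0 && argIndex < args.length) {
        sb.append(template, from, at).append(args[argIndex++]);
        from = at + 2;
      }
      sb.append(template.substring(from));
      throw new IllegalStateException(sb.toString());
    }
  }

  public static void main(String[] args) {
    try {
      checkState(false, "Mandatory config '%s' is missing", "segmentsConfig");
    } catch (IllegalStateException e) {
      System.out.println(e.getMessage());  // Mandatory config 'segmentsConfig' is missing
    }
  }
}
```

Keeping the key name in the template (rather than hard-coding it into the message) is what lets the same check be reused for every mandatory field.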





[GitHub] [incubator-pinot] snleee commented on a change in pull request #3942: Set processingException when all queried segments cannot be acquired

2019-03-20 Thread GitBox
snleee commented on a change in pull request #3942: Set processingException 
when all queried segments cannot be acquired
URL: https://github.com/apache/incubator-pinot/pull/3942#discussion_r267578867
 
 

 ##
 File path: 
pinot-core/src/main/java/org/apache/pinot/core/data/manager/TableDataManager.java
 ##
 @@ -79,6 +79,16 @@ void addSegment(@Nonnull String segmentName, @Nonnull 
TableConfig tableConfig,
*/
   void removeSegment(@Nonnull String segmentName);
 
+  /**
+   * Track a deleted segment.
 
 Review comment:
   can we add `@param` here to follow javadoc convention?
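The javadoc convention the reviewer is asking for, sketched on a hypothetical tracker (names illustrative, not the actual TableDataManager interface):

```java
import java.util.HashSet;
import java.util.Set;

public class DeletedSegmentTracker {
  private final Set<String> _deleted = new HashSet<>();

  /**
   * Track a deleted segment. The actual handling of the segment delete is
   * outside of this method; this only provides book-keeping.
   *
   * @param segmentName name of the segment to track
   */
  public void trackDeletedSegment(String segmentName) {
    _deleted.add(segmentName);
  }

  /**
   * Check whether a segment was recently deleted.
   *
   * @param segmentName name of the segment to check
   * @return true if the segment is tracked as deleted, false otherwise
   */
  public boolean isRecentlyDeleted(String segmentName) {
    return _deleted.contains(segmentName);
  }
}
```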





[incubator-pinot] branch table_config_mandatory_fields updated (a409684 -> b57dec9)

2019-03-20 Thread jackie
This is an automated email from the ASF dual-hosted git repository.

jackie pushed a change to branch table_config_mandatory_fields
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard a409684  In TableConfig, add checks for mandatory fields
 new b57dec9  In TableConfig, add checks for mandatory fields

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (a409684)
\
 N -- N -- N   refs/heads/table_config_mandatory_fields (b57dec9)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 5859 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../queryquota/TableQueryQuotaManagerTest.java |  15 +-
 .../broker/routing/TimeBoundaryServiceTest.java|   3 +-
 .../HighLevelConsumerRoutingTableBuilderTest.java  |   4 +-
 .../LowLevelConsumerRoutingTableBuilderTest.java   |  12 +-
 .../apache/pinot/common/config/TableConfig.java| 229 +++--
 .../pinot/common/config/TableConfigTest.java   | 201 +-
 .../resources/PinotTableConfigRestletResource.java | 140 +++--
 .../api/resources/PinotTableRestletResource.java   |  12 +-
 .../helix/core/PinotHelixResourceManager.java  |  13 +-
 .../controller/util/AutoAddInvertedIndex.java  |   2 +-
 .../resources/PinotTableRestletResourceTest.java   |  34 +--
 .../resources/PinotTenantRestletResourceTest.java  |   2 +-
 .../helix/ControllerInstanceToggleTest.java|   2 +-
 .../controller/helix/ControllerSentinelTestV2.java |   2 +-
 .../pinot/hadoop/job/DefaultControllerRestApi.java |   2 +-
 .../pinot/hadoop/job/SegmentCreationJob.java   |   2 +-
 .../pinot/integration/tests/ClusterTest.java   |  11 +-
 .../tests/OfflineClusterIntegrationTest.java   |  18 ++
 .../tools/query/comparison/ClusterStarter.java |   2 +-
 19 files changed, 360 insertions(+), 346 deletions(-)





[incubator-pinot] branch orc updated (d9b0bce -> d6c3a27)

2019-03-20 Thread jenniferdai
This is an automated email from the ASF dual-hosted git repository.

jenniferdai pushed a change to branch orc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard d9b0bce  Addressing comments
 discard e09e5fd  Addressing comments
 new d6c3a27  Addressing comments

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (d9b0bce)
\
 N -- N -- N   refs/heads/orc (d6c3a27)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 5856 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../pinot/orc/data/readers/ORCRecordReader.java|  2 +-
 .../orc/data/readers/ORCRecordReaderTest.java  | 81 --
 2 files changed, 1 insertion(+), 82 deletions(-)
 delete mode 100644 
pinot-orc/src/test/java/org/apache/pinot/orc/data/readers/ORCRecordReaderTest.java





[incubator-pinot] branch orc updated: Addressing comments

2019-03-20 Thread jenniferdai
This is an automated email from the ASF dual-hosted git repository.

jenniferdai pushed a commit to branch orc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git


The following commit(s) were added to refs/heads/orc by this push:
 new d9b0bce  Addressing comments
d9b0bce is described below

commit d9b0bcec4ed373b92ddc6feb11835b5bb0287b31
Author: Jennifer Dai 
AuthorDate: Wed Mar 20 15:41:24 2019 -0700

Addressing comments
---
 .../pinot/orc/data/readers/ORCRecordReader.java| 33 +++--
 .../orc/data/readers/ORCRecordReaderTest.java  | 81 ++
 2 files changed, 90 insertions(+), 24 deletions(-)

diff --git 
a/pinot-orc/src/main/java/org/apache/pinot/orc/data/readers/ORCRecordReader.java
 
b/pinot-orc/src/main/java/org/apache/pinot/orc/data/readers/ORCRecordReader.java
index 5fce029..4e5a8b3 100644
--- 
a/pinot-orc/src/main/java/org/apache/pinot/orc/data/readers/ORCRecordReader.java
+++ 
b/pinot-orc/src/main/java/org/apache/pinot/orc/data/readers/ORCRecordReader.java
@@ -19,7 +19,6 @@ package org.apache.pinot.orc.data.readers;
  * under the License.
  */
 
-import com.google.common.annotations.VisibleForTesting;
 import java.io.IOException;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
@@ -67,9 +66,9 @@ public class ORCRecordReader implements RecordReader {
 
   private static final Logger LOGGER = 
LoggerFactory.getLogger(ORCRecordReader.class);
 
-  @VisibleForTesting
-  public ORCRecordReader(String inputPath) {
+  private void init(String inputPath, Schema schema) {
 Configuration conf = new Configuration();
+LOGGER.info("Creating segment for {}", inputPath);
 try {
   Path orcReaderPath = new Path("file://" + inputPath);
   LOGGER.info("orc reader path is {}", orcReaderPath);
@@ -77,32 +76,13 @@ public class ORCRecordReader implements RecordReader {
   _orcSchema = _reader.getSchema();
   LOGGER.info("ORC schema is {}", _orcSchema.toJson());
 
-  _recordReader = _reader.rows(_reader.options().schema(_orcSchema));
-} catch (Exception e) {
-  throw new RuntimeException(e);
-}
-
-_reusableVectorizedRowBatch = _orcSchema.createRowBatch(1);
-  }
-
-  @Override
-  public void init(SegmentGeneratorConfig segmentGeneratorConfig) {
-Configuration conf = new Configuration();
-LOGGER.info("Creating segment for {}", 
segmentGeneratorConfig.getInputFilePath());
-try {
-  Path orcReaderPath = new Path("file://" + 
segmentGeneratorConfig.getInputFilePath());
-  LOGGER.info("orc reader path is {}", orcReaderPath);
-  _reader = OrcFile.createReader(orcReaderPath, 
OrcFile.readerOptions(conf));
-  _orcSchema = _reader.getSchema();
-  LOGGER.info("ORC schema is {}", _orcSchema.toJson());
-
-  _pinotSchema = segmentGeneratorConfig.getSchema();
+  _pinotSchema = schema;
   if (_pinotSchema == null) {
 LOGGER.warn("Pinot schema is not set in segment generator config");
   }
   _recordReader = _reader.rows(_reader.options().schema(_orcSchema));
 } catch (Exception e) {
-  LOGGER.error("Caught exception initializing record reader at path {}", 
segmentGeneratorConfig.getInputFilePath());
+  LOGGER.error("Caught exception initializing record reader at path {}", 
inputPath);
   throw new RuntimeException(e);
 }
 
@@ -111,6 +91,11 @@ public class ORCRecordReader implements RecordReader {
   }
 
   @Override
+  public void init(SegmentGeneratorConfig segmentGeneratorConfig) {
+init(segmentGeneratorConfig.getInputFilePath(), 
segmentGeneratorConfig.getSchema());
+  }
+
+  @Override
   public boolean hasNext() {
 try {
   return _recordReader.getProgress() != 1;
diff --git 
a/pinot-orc/src/test/java/org/apache/pinot/orc/data/readers/ORCRecordReaderTest.java
 
b/pinot-orc/src/test/java/org/apache/pinot/orc/data/readers/ORCRecordReaderTest.java
new file mode 100644
index 000..6f5965f
--- /dev/null
+++ 
b/pinot-orc/src/test/java/org/apache/pinot/orc/data/readers/ORCRecordReaderTest.java
@@ -0,0 +1,81 @@
+package org.apache.pinot.orc.data.readers;
+
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+import java.io.IOException;
+import 

[GitHub] [incubator-pinot] sunithabeeram commented on a change in pull request #3994: Adding ORC reader

2019-03-20 Thread GitBox
sunithabeeram commented on a change in pull request #3994: Adding ORC reader
URL: https://github.com/apache/incubator-pinot/pull/3994#discussion_r267574579
 
 

 ##
 File path: 
pinot-orc/src/main/java/org/apache/pinot/orc/data/readers/ORCRecordReader.java
 ##
 @@ -0,0 +1,218 @@
+package org.apache.pinot.orc.data.readers;
+
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+import com.google.common.annotations.VisibleForTesting;
+import java.io.IOException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.ql.exec.vector.ColumnVector;
+import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;
+import org.apache.hadoop.io.BooleanWritable;
+import org.apache.hadoop.io.ByteWritable;
+import org.apache.hadoop.io.BytesWritable;
+import org.apache.hadoop.io.DoubleWritable;
+import org.apache.hadoop.io.FloatWritable;
+import org.apache.hadoop.io.IntWritable;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.io.ShortWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.orc.OrcFile;
+import org.apache.orc.Reader;
+import org.apache.orc.TypeDescription;
+import org.apache.orc.mapred.OrcList;
+import org.apache.orc.mapred.OrcMapredRecordReader;
+import org.apache.pinot.common.data.Schema;
+import org.apache.pinot.core.data.GenericRow;
+import org.apache.pinot.core.data.readers.RecordReader;
+import org.apache.pinot.core.indexsegment.generator.SegmentGeneratorConfig;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+/**
+ * The ORCRecordReader uses a VectorizedRowBatch, which we convert to a 
Writable. Then, we convert these
+ * Writable objects to primitives that we can then store in a GenericRow.
+ *
+ * When new data types are added to Pinot, we will need to update them here as 
well.
+ * Note that not all ORC types are supported; we only support the ORC types 
that correspond to either
+ * primitives or multivalue columns in Pinot, which is similar to other record 
readers.
+ */
+public class ORCRecordReader implements RecordReader {
+
+  private Schema _pinotSchema;
+  private TypeDescription _orcSchema;
+  Reader _reader;
+  org.apache.orc.RecordReader _recordReader;
+  VectorizedRowBatch _reusableVectorizedRowBatch;
+
+  private static final Logger LOGGER = 
LoggerFactory.getLogger(ORCRecordReader.class);
+
+  @VisibleForTesting
+  public ORCRecordReader(String inputPath) {
+Configuration conf = new Configuration();
+try {
+  Path orcReaderPath = new Path("file://" + inputPath);
+  LOGGER.info("orc reader path is {}", orcReaderPath);
+  _reader = OrcFile.createReader(orcReaderPath, 
OrcFile.readerOptions(conf));
+  _orcSchema = _reader.getSchema();
+  LOGGER.info("ORC schema is {}", _orcSchema.toJson());
+
+  _recordReader = _reader.rows(_reader.options().schema(_orcSchema));
+} catch (Exception e) {
+  throw new RuntimeException(e);
+}
+
+_reusableVectorizedRowBatch = _orcSchema.createRowBatch(1);
+  }
+
+  @Override
+  public void init(SegmentGeneratorConfig segmentGeneratorConfig) {
+Configuration conf = new Configuration();
+LOGGER.info("Creating segment for {}", 
segmentGeneratorConfig.getInputFilePath());
+try {
+  Path orcReaderPath = new Path("file://" + 
segmentGeneratorConfig.getInputFilePath());
+  LOGGER.info("orc reader path is {}", orcReaderPath);
+  _reader = OrcFile.createReader(orcReaderPath, 
OrcFile.readerOptions(conf));
+  _orcSchema = _reader.getSchema();
+  LOGGER.info("ORC schema is {}", _orcSchema.toJson());
+
+  _pinotSchema = segmentGeneratorConfig.getSchema();
+  if (_pinotSchema == null) {
+LOGGER.warn("Pinot schema is not set in segment generator config");
+  }
+  _recordReader = _reader.rows(_reader.options().schema(_orcSchema));
+} catch (Exception e) {
+  LOGGER.error("Caught exception initializing record reader at path {}", 
segmentGeneratorConfig.getInputFilePath());
+  throw new RuntimeException(e);
+}
+
+// Create a row batch with max size 1
+_reusableVectorizedRowBatch 

[GitHub] [incubator-pinot] sunithabeeram commented on a change in pull request #3994: Adding ORC reader

2019-03-20 Thread GitBox
sunithabeeram commented on a change in pull request #3994: Adding ORC reader
URL: https://github.com/apache/incubator-pinot/pull/3994#discussion_r267574279
 
 

 ##
 File path: 
pinot-orc/src/main/java/org/apache/pinot/orc/data/readers/ORCRecordReader.java
 ##
 @@ -58,6 +67,24 @@
 
   private static final Logger LOGGER = 
LoggerFactory.getLogger(ORCRecordReader.class);
 
+  @VisibleForTesting
+  public ORCRecordReader(String inputPath) {
+Configuration conf = new Configuration();
+try {
 
 Review comment:
   Add an init() that takes a path instead of the segment-generator config (it 
can be a private method). Have this constructor and the public 
init(SegmentGeneratorConfig) use that. 
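The refactor being suggested, in outline: both the testing constructor and the production init() funnel through one private initializer. All type and field names below are illustrative stand-ins, not the real ORCRecordReader:

```java
public class PathBasedReader {
  private String _path;
  private String _schema;

  public PathBasedReader() {
  }

  // Testing-only constructor: just a path, no generator config.
  public PathBasedReader(String inputPath) {
    init(inputPath, null);
  }

  // Shared initialization; both entry points delegate here.
  private void init(String inputPath, String schema) {
    _path = inputPath;
    _schema = schema;
  }

  // Production entry point, reduced to extracting the two arguments.
  public void init(GeneratorConfig config) {
    init(config.getInputFilePath(), config.getSchema());
  }

  public String getPath() {
    return _path;
  }

  // Minimal stand-in for SegmentGeneratorConfig.
  public static class GeneratorConfig {
    private final String _inputFilePath;
    private final String _schema;

    public GeneratorConfig(String inputFilePath, String schema) {
      _inputFilePath = inputFilePath;
      _schema = schema;
    }

    public String getInputFilePath() { return _inputFilePath; }
    public String getSchema() { return _schema; }
  }
}
```

This removes the duplicated setup that the @VisibleForTesting constructor introduced, which is the shape the later commit on the orc branch takes.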





[incubator-pinot] branch orc updated (d620318 -> e09e5fd)

2019-03-20 Thread jenniferdai
This is an automated email from the ASF dual-hosted git repository.

jenniferdai pushed a change to branch orc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


from d620318  Editing reusable vectorized row batch to really be reusable
 new 2e6bf2f  Fixing dependencies and updating javadoc
 new e09e5fd  Addressing comments

The 5856 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 pinot-orc/pom.xml  | 13 ++
 .../pinot/orc/data/readers/ORCRecordReader.java| 47 ++
 2 files changed, 52 insertions(+), 8 deletions(-)





[GitHub] [incubator-pinot] jenniferdai commented on a change in pull request #3994: Adding ORC reader

2019-03-20 Thread GitBox
jenniferdai commented on a change in pull request #3994: Adding ORC reader
URL: https://github.com/apache/incubator-pinot/pull/3994#discussion_r267569198
 
 

 ##
 File path: 
pinot-orc/src/main/java/org/apache/pinot/orc/data/readers/ORCRecordReader.java
 ##
 @@ -0,0 +1,187 @@
+package org.apache.pinot.orc.data.readers;
+
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+import java.io.IOException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.ql.exec.vector.ColumnVector;
+import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;
+import org.apache.hadoop.io.BooleanWritable;
+import org.apache.hadoop.io.ByteWritable;
+import org.apache.hadoop.io.BytesWritable;
+import org.apache.hadoop.io.DoubleWritable;
+import org.apache.hadoop.io.FloatWritable;
+import org.apache.hadoop.io.IntWritable;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.io.ShortWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.orc.OrcFile;
+import org.apache.orc.Reader;
+import org.apache.orc.TypeDescription;
+import org.apache.orc.mapred.OrcList;
+import org.apache.orc.mapred.OrcMapredRecordReader;
+import org.apache.pinot.common.data.Schema;
+import org.apache.pinot.core.data.GenericRow;
+import org.apache.pinot.core.data.readers.RecordReader;
+import org.apache.pinot.core.indexsegment.generator.SegmentGeneratorConfig;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+public class ORCRecordReader implements RecordReader {
+
+  private Schema _pinotSchema;
+  private TypeDescription _orcSchema;
+  Reader _reader;
+  org.apache.orc.RecordReader _recordReader;
+  VectorizedRowBatch _reusableVectorizedRowBatch;
+
+  private static final Logger LOGGER = 
LoggerFactory.getLogger(ORCRecordReader.class);
+
+  @Override
+  public void init(SegmentGeneratorConfig segmentGeneratorConfig) {
+Configuration conf = new Configuration();
+LOGGER.info("Creating segment for {}", 
segmentGeneratorConfig.getInputFilePath());
+try {
+  Path orcReaderPath = new Path("file://" + 
segmentGeneratorConfig.getInputFilePath());
+  LOGGER.info("orc reader path is {}", orcReaderPath);
+  _reader = OrcFile.createReader(orcReaderPath, 
OrcFile.readerOptions(conf));
+  _orcSchema = _reader.getSchema();
+  LOGGER.info("ORC schema is {}", _orcSchema.toJson());
+
+  _pinotSchema = segmentGeneratorConfig.getSchema();
+  if (_pinotSchema == null) {
+throw new IllegalArgumentException("ORCRecordReader requires schema");
+  }
+  _recordReader = _reader.rows(_reader.options().schema(_orcSchema));
+} catch (Exception e) {
+  LOGGER.error("Caught exception initializing record reader at path {}", 
segmentGeneratorConfig.getInputFilePath());
+  throw new RuntimeException(e);
+}
+
+_reusableVectorizedRowBatch = _orcSchema.createRowBatch(1);
+  }
+
+  @Override
+  public boolean hasNext() {
+try {
+  return _recordReader.getProgress() != 1;
+} catch (IOException e) {
+  LOGGER.error("Could not get next record");
+  throw new RuntimeException(e);
+}
+  }
+
+  @Override
+  public GenericRow next()
+  throws IOException {
+return next(new GenericRow());
+  }
+
+  @Override
+  public GenericRow next(GenericRow reuse)
+  throws IOException {
+_recordReader.nextBatch(_reusableVectorizedRowBatch);
+fillGenericRow(reuse, _reusableVectorizedRowBatch);
+return reuse;
+  }
+
+  private void fillGenericRow(GenericRow genericRow, VectorizedRowBatch 
rowBatch) throws IOException {
+// Read the row data
+TypeDescription schema = _reader.getSchema();
+// Create a row batch with max size 1
+
+if (schema.getCategory().equals(TypeDescription.Category.STRUCT)) {
+  for (int i = 0; i < schema.getChildren().size(); i++) {
+// Get current column in schema
+TypeDescription currColumn = schema.getChildren().get(i);
+String currColumnName = currColumn.getFieldNames().get(0);
+int currColRowIndex = currColumn.getId();
+ColumnVector vector = 

[GitHub] [incubator-pinot] mcvsubbu opened a new pull request #3997: Added maxUsableHostMemory argument to realtime provision helper.

2019-03-20 Thread GitBox
mcvsubbu opened a new pull request #3997: Added maxUsableHostMemory argument to 
realtime provision helper.
URL: https://github.com/apache/incubator-pinot/pull/3997
 
 
   
   In installations where hosts have memory limits other than 48G, the admin can
   specify a different memory limit to be used when provisioning realtime hosts.

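A limit like 48G is typically accepted as a human-readable size string. A sketch of the parsing such an argument needs, illustrative only and not the actual RealtimeProvisioningHelper code:

```java
public class MemoryArg {
  /** Parse sizes like "48G", "512M", or "1024" (plain bytes) into a byte count. */
  public static long parseBytes(String value) {
    String v = value.trim().toUpperCase();
    long multiplier = 1L;
    char suffix = v.charAt(v.length() - 1);
    if (suffix == 'K' || suffix == 'M' || suffix == 'G' || suffix == 'T') {
      multiplier = suffix == 'K' ? 1L << 10
          : suffix == 'M' ? 1L << 20
          : suffix == 'G' ? 1L << 30
          : 1L << 40;  // binary units: K=2^10, M=2^20, G=2^30, T=2^40
      v = v.substring(0, v.length() - 1);
    }
    return Long.parseLong(v) * multiplier;
  }

  public static void main(String[] args) {
    System.out.println(parseBytes("48G"));  // 51539607552
  }
}
```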




[GitHub] [incubator-pinot] sunithabeeram commented on a change in pull request #3994: Adding ORC reader

2019-03-20 Thread GitBox
sunithabeeram commented on a change in pull request #3994: Adding ORC reader
URL: https://github.com/apache/incubator-pinot/pull/3994#discussion_r267564576
 
 

 ##
 File path: pinot-orc/src/main/java/org/apache/pinot/orc/data/readers/ORCRecordReader.java
 ##
 @@ -0,0 +1,187 @@
+package org.apache.pinot.orc.data.readers;
+
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+import java.io.IOException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hive.ql.exec.vector.ColumnVector;
+import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;
+import org.apache.hadoop.io.BooleanWritable;
+import org.apache.hadoop.io.ByteWritable;
+import org.apache.hadoop.io.BytesWritable;
+import org.apache.hadoop.io.DoubleWritable;
+import org.apache.hadoop.io.FloatWritable;
+import org.apache.hadoop.io.IntWritable;
+import org.apache.hadoop.io.LongWritable;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.io.ShortWritable;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.io.WritableComparable;
+import org.apache.orc.OrcFile;
+import org.apache.orc.Reader;
+import org.apache.orc.TypeDescription;
+import org.apache.orc.mapred.OrcList;
+import org.apache.orc.mapred.OrcMapredRecordReader;
+import org.apache.pinot.common.data.Schema;
+import org.apache.pinot.core.data.GenericRow;
+import org.apache.pinot.core.data.readers.RecordReader;
+import org.apache.pinot.core.indexsegment.generator.SegmentGeneratorConfig;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+public class ORCRecordReader implements RecordReader {
+
+  private Schema _pinotSchema;
+  private TypeDescription _orcSchema;
+  Reader _reader;
+  org.apache.orc.RecordReader _recordReader;
+  VectorizedRowBatch _reusableVectorizedRowBatch;
+
+  private static final Logger LOGGER = LoggerFactory.getLogger(ORCRecordReader.class);
+
+  @Override
+  public void init(SegmentGeneratorConfig segmentGeneratorConfig) {
+    Configuration conf = new Configuration();
+    LOGGER.info("Creating segment for {}", segmentGeneratorConfig.getInputFilePath());
+    try {
+      Path orcReaderPath = new Path("file://" + segmentGeneratorConfig.getInputFilePath());
+      LOGGER.info("orc reader path is {}", orcReaderPath);
+      _reader = OrcFile.createReader(orcReaderPath, OrcFile.readerOptions(conf));
+      _orcSchema = _reader.getSchema();
+      LOGGER.info("ORC schema is {}", _orcSchema.toJson());
+
+      _pinotSchema = segmentGeneratorConfig.getSchema();
+      if (_pinotSchema == null) {
+        throw new IllegalArgumentException("ORCRecordReader requires schema");
+      }
+      _recordReader = _reader.rows(_reader.options().schema(_orcSchema));
+    } catch (Exception e) {
+      LOGGER.error("Caught exception initializing record reader at path {}", segmentGeneratorConfig.getInputFilePath());
+      throw new RuntimeException(e);
+    }
+
+    _reusableVectorizedRowBatch = _orcSchema.createRowBatch(1);
+  }
+
+  @Override
+  public boolean hasNext() {
+    try {
+      return _recordReader.getProgress() != 1;
+    } catch (IOException e) {
+      LOGGER.error("Could not get next record");
+      throw new RuntimeException(e);
+    }
+  }
+
+  @Override
+  public GenericRow next()
+      throws IOException {
+    return next(new GenericRow());
+  }
+
+  @Override
+  public GenericRow next(GenericRow reuse)
+      throws IOException {
+    _recordReader.nextBatch(_reusableVectorizedRowBatch);
+    fillGenericRow(reuse, _reusableVectorizedRowBatch);
+    return reuse;
+  }
+
+  private void fillGenericRow(GenericRow genericRow, VectorizedRowBatch rowBatch) throws IOException {
+    // Read the row data
+    TypeDescription schema = _reader.getSchema();
+    // Create a row batch with max size 1
 
 Review comment:
   Move this comment to line 81 (where you are allocating the row-batch)



[GitHub] [incubator-pinot] sunithabeeram commented on a change in pull request #3994: Adding ORC reader

2019-03-20 Thread GitBox
sunithabeeram commented on a change in pull request #3994: Adding ORC reader
URL: https://github.com/apache/incubator-pinot/pull/3994#discussion_r267563317
 
 

 ##
 File path: pinot-orc/src/main/java/org/apache/pinot/orc/data/readers/ORCRecordReader.java
 ##
 @@ -0,0 +1,187 @@

[GitHub] [incubator-pinot] sunithabeeram commented on a change in pull request #3994: Adding ORC reader

2019-03-20 Thread GitBox
sunithabeeram commented on a change in pull request #3994: Adding ORC reader
URL: https://github.com/apache/incubator-pinot/pull/3994#discussion_r267561974
 
 

 ##
 File path: pinot-orc/src/main/java/org/apache/pinot/orc/data/readers/ORCRecordReader.java
 ##
 @@ -0,0 +1,187 @@
+  private void fillGenericRow(GenericRow genericRow, VectorizedRowBatch rowBatch) throws IOException {
+    // Read the row data
+    TypeDescription schema = _reader.getSchema();
 
 Review comment:
   We can just use _orcSchema here right?




[GitHub] [incubator-pinot] sunithabeeram commented on a change in pull request #3994: Adding ORC reader

2019-03-20 Thread GitBox
sunithabeeram commented on a change in pull request #3994: Adding ORC reader
URL: https://github.com/apache/incubator-pinot/pull/3994#discussion_r267562665
 
 

 ##
 File path: pinot-orc/src/main/java/org/apache/pinot/orc/data/readers/ORCRecordReader.java
 ##
 @@ -0,0 +1,187 @@
+  private void fillGenericRow(GenericRow genericRow, VectorizedRowBatch rowBatch) throws IOException {
+    // Read the row data
+    TypeDescription schema = _reader.getSchema();
+    // Create a row batch with max size 1
+
+    if (schema.getCategory().equals(TypeDescription.Category.STRUCT)) {
 
 Review comment:
   Might be good to explain that ORC record's TypeDescription is nested with a struct field containing the rest of the "schema".
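For readers following along, the reviewer's point is that an ORC file's root TypeDescription is itself a STRUCT whose field names are the column names and whose children are the per-column types, so a reader must unwrap that root struct before iterating columns. A minimal, hypothetical sketch of that shape in plain Java — `TypeDesc` and `Category` are illustrative stand-ins, not the real `org.apache.orc.TypeDescription` classes:

```java
import java.util.Arrays;
import java.util.List;

// Illustrative model only: TypeDesc/Category are stand-ins for ORC's
// TypeDescription and TypeDescription.Category, not the real classes.
public class OrcSchemaShape {
  enum Category { STRUCT, INT, STRING }

  static final class TypeDesc {
    final Category category;
    final List<String> fieldNames; // populated only on STRUCT nodes
    final List<TypeDesc> children; // populated only on STRUCT nodes

    TypeDesc(Category category, List<String> fieldNames, List<TypeDesc> children) {
      this.category = category;
      this.fieldNames = fieldNames;
      this.children = children;
    }
  }

  // The root schema is a STRUCT; its field names are the column names,
  // and its children are the per-column types.
  static List<String> columnNames(TypeDesc root) {
    if (root.category != Category.STRUCT) {
      throw new IllegalArgumentException("ORC root schema must be a struct");
    }
    return root.fieldNames;
  }

  public static void main(String[] args) {
    // Models struct<id:int,name:string>
    TypeDesc root = new TypeDesc(Category.STRUCT,
        Arrays.asList("id", "name"),
        Arrays.asList(new TypeDesc(Category.INT, null, null),
            new TypeDesc(Category.STRING, null, null)));
    System.out.println(columnNames(root)); // prints [id, name]
  }
}
```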



[GitHub] [incubator-pinot] sunithabeeram commented on a change in pull request #3994: Adding ORC reader

2019-03-20 Thread GitBox
sunithabeeram commented on a change in pull request #3994: Adding ORC reader
URL: https://github.com/apache/incubator-pinot/pull/3994#discussion_r267545282
 
 

 ##
 File path: pinot-orc/src/main/java/org/apache/pinot/orc/data/readers/ORCRecordReader.java
 ##
 @@ -0,0 +1,187 @@
+
+public class ORCRecordReader implements RecordReader {
 
 Review comment:
   Add class level java docs.





[GitHub] [incubator-pinot] akshayrai merged pull request #3996: [TE] Clean up the yaml editor calls and messages

2019-03-20 Thread GitBox
akshayrai merged pull request #3996: [TE] Clean up the yaml editor calls and messages
URL: https://github.com/apache/incubator-pinot/pull/3996
 
 
   





[incubator-pinot] branch master updated: [TE] Clean up the yaml editor calls and messages (#3996)

2019-03-20 Thread akshayrai09
This is an automated email from the ASF dual-hosted git repository.

akshayrai09 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git


The following commit(s) were added to refs/heads/master by this push:
 new d78a807  [TE] Clean up the yaml editor calls and messages (#3996)
d78a807 is described below

commit d78a8075d64801b258d5634bbcb82121e5f3506f
Author: Akshay Rai 
AuthorDate: Wed Mar 20 13:45:21 2019 -0700

[TE] Clean up the yaml editor calls and messages (#3996)
---
 .../app/pods/components/yaml-editor/component.js   | 114 ++---
 .../app/pods/components/yaml-editor/template.hbs   |  10 +-
 .../app/pods/manage/yaml/template.hbs  |   2 +-
 3 files changed, 63 insertions(+), 63 deletions(-)

diff --git a/thirdeye/thirdeye-frontend/app/pods/components/yaml-editor/component.js b/thirdeye/thirdeye-frontend/app/pods/components/yaml-editor/component.js
index 2e8537f..7f39068 100644
--- a/thirdeye/thirdeye-frontend/app/pods/components/yaml-editor/component.js
+++ b/thirdeye/thirdeye-frontend/app/pods/components/yaml-editor/component.js
@@ -6,17 +6,17 @@
  * @property {boolean} isEditMode - to activate the edit mode
  * @property {boolean} showSettings - to show the subscriber groups yaml editor
  * @property {Object} subscriptionGroupNames - the list of subscription groups
- * @property {Object} alertYaml - the alert yaml to display
- * @property {Object} detectionSettingsYaml - the subscription group yaml to display
+ * @property {Object} detectionYaml - the detection yaml to display
+ * @property {Object} subscriptionYaml - the subscription group yaml to display
  * @example
 {{yaml-editor
   alertId=model.alertId
   subscriptionGroupId=model.subscriptionGroupId
   isEditMode=true
   showSettings=true
-  subscriptionGroupNames=model.detectionSettingsYaml
-  alertYaml=model.detectionYaml
-  detectionSettingsYaml=model.detectionSettingsYaml
+  subscriptionGroupNames=model.subscriptionGroupNames
+  detectionYaml=model.detectionYaml
+  subscriptionYaml=model.subscriptionYaml
 }}
  * @author lohuynh
  */
@@ -43,15 +43,15 @@ export default Component.extend({
    */
   currentMetric: null,
   isYamlParseable: true,
-  alertTitle: 'Define anomaly detection in YAML',
-  alertSettingsTitle: 'Define notification settings',
+  alertTitle: 'Define detection configuration',
+  alertSettingsTitle: 'Define subscription configuration',
   isEditMode: false,
   showSettings: true,
   disableYamlSave: true,
   detectionMsg: '',   //General alert failures
   subscriptionMsg: '',//General subscription failures
-  alertYaml: null,// The YAML for the anomaly alert detection
-  detectionSettingsYaml:  null,   // The YAML for the subscription group
+  detectionYaml: null,// The YAML for the anomaly detection
+  subscriptionYaml:  null,// The YAML for the subscription group
   yamlAlertProps: yamlAlertProps,
   yamlAlertSettings: yamlAlertSettings,
   showAnomalyModal: false,
@@ -66,13 +66,13 @@ export default Component.extend({
   init() {
     this._super(...arguments);
     if(get(this, 'isEditMode')) {
-      set(this, 'currentYamlAlertOriginal', get(this, 'alertYaml') || get(this, 'yamlAlertProps'));
-      set(this, 'currentYamlSettingsOriginal', get(this, 'detectionSettingsYaml') || get(this, 'yamlAlertSettings'));
+      set(this, 'currentYamlAlertOriginal', get(this, 'detectionYaml') || get(this, 'yamlAlertProps'));
+      set(this, 'currentYamlSettingsOriginal', get(this, 'subscriptionYaml') || get(this, 'yamlAlertSettings'));
     }
   },
 
   /**
-   * sets Yaml value displayed to contents of alertYaml or yamlAlertProps
+   * sets Yaml value displayed to contents of detectionYaml or yamlAlertProps
    * @method currentYamlAlert
    * @return {String}
    */
@@ -85,28 +85,28 @@ export default Component.extend({
   ),
 
   /**
-   * sets Yaml value displayed to contents of alertYaml or yamlAlertProps
+   * sets Yaml value displayed to contents of detectionYaml or yamlAlertProps
    * @method currentYamlAlert
    * @return {String}
    */
   currentYamlAlert: computed(
-    'alertYaml',
+    'detectionYaml',
    function() {
-      const inputYaml = get(this, 'alertYaml');
+      const inputYaml = get(this, 'detectionYaml');
      return inputYaml || get(this, 'yamlAlertProps');
    }
   ),
 
   /**
-   * sets Yaml value displayed to contents of detectionSettingsYaml or yamlAlertSettings
+   * sets Yaml value displayed to contents of subscriptionYaml or yamlAlertSettings
    * @method currentYamlAlert
    * @return {String}
    */
-  currentYamlSettings: computed(
-    'detectionSettingsYaml',
+  currentSubscriptionYaml: computed(
+    'subscriptionYaml',
    function() {
-      const detectionSettingsYaml = get(this, 'detectionSettingsYaml');
-      return detectionSettingsYaml || get(this, 'yamlAlertSettings');
+ 

[incubator-pinot] branch master updated: Fixing type casting issue for BYTES type values during realtime segment persistence (#3992)

2019-03-20 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git


The following commit(s) were added to refs/heads/master by this push:
 new 6eb8e79  Fixing type casting issue for BYTES type values during realtime segment persistence (#3992)
6eb8e79 is described below

commit 6eb8e7976499b43a5588e26c194e09d48ebca232
Author: Xiang Fu 
AuthorDate: Wed Mar 20 13:28:37 2019 -0700

    Fixing type casting issue for BYTES type values during realtime segment persistence (#3992)
---
 .../converter/stats/RealtimeColumnStatistics.java| 20 ++--
 1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/pinot-core/src/main/java/org/apache/pinot/core/realtime/converter/stats/RealtimeColumnStatistics.java b/pinot-core/src/main/java/org/apache/pinot/core/realtime/converter/stats/RealtimeColumnStatistics.java
index 714a0a9..860d569 100644
--- a/pinot-core/src/main/java/org/apache/pinot/core/realtime/converter/stats/RealtimeColumnStatistics.java
+++ b/pinot-core/src/main/java/org/apache/pinot/core/realtime/converter/stats/RealtimeColumnStatistics.java
@@ -22,6 +22,7 @@ import java.util.HashSet;
 import java.util.Set;
 import org.apache.pinot.common.config.ColumnPartitionConfig;
 import org.apache.pinot.common.data.FieldSpec;
+import org.apache.pinot.common.utils.primitive.ByteArray;
 import org.apache.pinot.core.common.Block;
 import org.apache.pinot.core.common.BlockMultiValIterator;
 import org.apache.pinot.core.data.partition.PartitionFunction;
@@ -150,17 +151,24 @@ public class RealtimeColumnStatistics implements ColumnStatistics {
 
     int docIdIndex = _sortedDocIdIterationOrder != null ? _sortedDocIdIterationOrder[0] : 0;
     int dictionaryId = singleValueReader.getInt(docIdIndex);
-    Comparable previousValue = (Comparable) _dictionaryReader.get(dictionaryId);
+    Object previousValue = _dictionaryReader.get(dictionaryId);
     for (int i = 1; i < blockLength; i++) {
       docIdIndex = _sortedDocIdIterationOrder != null ? _sortedDocIdIterationOrder[i] : i;
       dictionaryId = singleValueReader.getInt(docIdIndex);
-      Comparable currentValue = (Comparable) _dictionaryReader.get(dictionaryId);
+      Object currentValue = _dictionaryReader.get(dictionaryId);
       // If previousValue is greater than currentValue
-      if (0 < previousValue.compareTo(currentValue)) {
-        return false;
-      } else {
-        previousValue = currentValue;
+      switch (_block.getMetadata().getDataType().getStoredType()) {
+        case BYTES:
+          if (0 < ByteArray.compare((byte[]) previousValue, (byte[]) currentValue)) {
+            return false;
+          }
+          break;
+        default:
+          if (0 < ((Comparable) previousValue).compareTo(currentValue)) {
+            return false;
+          }
       }
+      previousValue = currentValue;
     }
 
     return true;
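The root cause the patch above addresses is that `byte[]` does not implement `Comparable`, so the old unconditional cast would fail for BYTES columns; the fix routes byte arrays through an explicit unsigned lexicographic comparison instead. A self-contained sketch of that comparison in plain Java — `compareBytes` is a hypothetical stand-in written against JDK types, not Pinot's actual `ByteArray.compare`:

```java
// Why the Comparable cast breaks for BYTES: byte[] is not Comparable, so
// byte arrays need an explicit comparator. ORC/Pinot-style comparison is
// unsigned and lexicographic, sketched here with plain JDK types.
public class BytesCompare {
  static int compareBytes(byte[] a, byte[] b) {
    int n = Math.min(a.length, b.length);
    for (int i = 0; i < n; i++) {
      // Mask to 0..255 so bytes like 0x80 (negative when signed) sort high.
      int cmp = (a[i] & 0xFF) - (b[i] & 0xFF);
      if (cmp != 0) {
        return cmp;
      }
    }
    return a.length - b.length; // shorter array sorts first on a shared prefix
  }

  public static void main(String[] args) {
    byte[] prev = {0x01, (byte) 0x80};
    byte[] curr = {0x01, 0x7F};
    // 0x80 is negative as a signed byte but larger unsigned, so prev > curr:
    System.out.println(compareBytes(prev, curr) > 0); // prints true
  }
}
```

A signed comparison (or `Arrays.compare` on signed bytes) would order `0x80` before `0x7F`, which is why the unsigned masking matters for stored byte values.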





[GitHub] [incubator-pinot] fx19880617 merged pull request #3992: Fixing type casting issue for BYTES type values during realtime segment persistence

2019-03-20 Thread GitBox
fx19880617 merged pull request #3992: Fixing type casting issue for BYTES type values during realtime segment persistence
URL: https://github.com/apache/incubator-pinot/pull/3992
 
 
   





[incubator-pinot] branch make-delete-table-api-async deleted (was 27663ca)

2019-03-20 Thread jlli
This is an automated email from the ASF dual-hosted git repository.

jlli pushed a change to branch make-delete-table-api-async
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 was 27663ca  Make delete table API async

The revisions that were on this branch are still contained in
other references; therefore, this change does not discard any commits
from the repository.





[GitHub] [incubator-pinot] Jackie-Jiang commented on a change in pull request #3993: In TableConfig, add checks for mandatory fields

2019-03-20 Thread GitBox
Jackie-Jiang commented on a change in pull request #3993: In TableConfig, add checks for mandatory fields
URL: https://github.com/apache/incubator-pinot/pull/3993#discussion_r267524256
 
 

 ##
 File path: pinot-common/src/main/java/org/apache/pinot/common/config/TableConfig.java
 ##
 @@ -118,14 +118,24 @@ public static TableConfig fromJsonString(String jsonString)
   @Nonnull
   public static TableConfig fromJSONConfig(@Nonnull JsonNode jsonConfig)
       throws IOException {
+    // Mandatory fields
+    Preconditions.checkState(jsonConfig.has(TABLE_TYPE_KEY), "Table type is missing");
     TableType tableType = TableType.valueOf(jsonConfig.get(TABLE_TYPE_KEY).asText().toUpperCase());
+    Preconditions.checkState(jsonConfig.has(TABLE_NAME_KEY), "Table name is missing");
     String tableName = TableNameBuilder.forType(tableType).tableNameWithType(jsonConfig.get(TABLE_NAME_KEY).asText());
-
+    Preconditions
+        .checkState(jsonConfig.has(VALIDATION_CONFIG_KEY), "Mandatory config '%s' is missing", VALIDATION_CONFIG_KEY);
     SegmentsValidationAndRetentionConfig validationConfig =
         extractChildConfig(jsonConfig, VALIDATION_CONFIG_KEY, SegmentsValidationAndRetentionConfig.class);
+    Preconditions.checkState(jsonConfig.has(TENANT_CONFIG_KEY), "Mandatory config '%s' is missing", TENANT_CONFIG_KEY);
     TenantConfig tenantConfig = extractChildConfig(jsonConfig, TENANT_CONFIG_KEY, TenantConfig.class);
+    Preconditions
+        .checkState(jsonConfig.has(INDEXING_CONFIG_KEY), "Mandatory config '%s' is missing", INDEXING_CONFIG_KEY);
 
 Review comment:
   Currently they are mandatory, and all the callers assume they are non-null.
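The fail-fast pattern in the diff above (check that a mandatory key exists before dereferencing it, and name the key in the error message) can be sketched independently of Pinot. Everything below is hypothetical: a plain Map stands in for Jackson's JsonNode, and a local checkState stands in for Guava's Preconditions, so the sketch compiles with no external dependencies.

```java
import java.util.HashMap;
import java.util.Map;

public class MandatoryFieldCheck {

  // Stand-in for Guava's Preconditions.checkState(boolean, String, Object).
  static void checkState(boolean condition, String messageTemplate, Object arg) {
    if (!condition) {
      throw new IllegalStateException(String.format(messageTemplate, arg));
    }
  }

  // Fail fast with a descriptive message instead of an NPE deep in parsing.
  static String requireField(Map<String, Object> config, String key) {
    checkState(config.containsKey(key), "Mandatory config '%s' is missing", key);
    return String.valueOf(config.get(key));
  }

  public static void main(String[] args) {
    Map<String, Object> config = new HashMap<>();
    config.put("tableName", "myTable");
    config.put("tableType", "OFFLINE");

    System.out.println(requireField(config, "tableType"));

    try {
      requireField(config, "tenants");  // absent key: triggers the check
    } catch (IllegalStateException e) {
      System.out.println(e.getMessage());
    }
  }
}
```

Without the check, `config.get("tenants")` would return null and the failure would surface later as an opaque NullPointerException; the Preconditions calls in the PR move the error to the point where the contract is violated.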





[incubator-pinot] branch add-doc-experiment-with-pinot deleted (was dd11db8)

2019-03-20 Thread jlli
This is an automated email from the ASF dual-hosted git repository.

jlli pushed a change to branch add-doc-experiment-with-pinot
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 was dd11db8  [TE] frontend - harleyjj/report-anomaly - adds back 
report-anomaly modal to alert overview (#3985)

This change permanently discards the following revisions:

 discard dd11db8  [TE] frontend - harleyjj/report-anomaly - adds back 
report-anomaly modal to alert overview (#3985)





[incubator-pinot] branch getting-started-doc deleted (was 7cc4401)

2019-03-20 Thread jlli
This is an automated email from the ASF dual-hosted git repository.

jlli pushed a change to branch getting-started-doc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 was 7cc4401  Add doc on experimenting with Pinot

This change permanently discards the following revisions:

 discard 7cc4401  Add doc on experimenting with Pinot





[incubator-pinot] branch add-doc-for-experiment deleted (was f9fb9a5)

2019-03-20 Thread jlli
This is an automated email from the ASF dual-hosted git repository.

jlli pushed a change to branch add-doc-for-experiment
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 was f9fb9a5  Add experiment section in getting started

The revisions that were on this branch are still contained in
other references; therefore, this change does not discard any commits
from the repository.





[GitHub] [incubator-pinot] codecov-io commented on issue #3992: Fixing type casting issue for BYTES type values during realtime segment persistence

2019-03-20 Thread GitBox
codecov-io commented on issue #3992: Fixing type casting issue for BYTES type 
values during realtime segment persistence
URL: https://github.com/apache/incubator-pinot/pull/3992#issuecomment-474996093
 
 
   # 
[Codecov](https://codecov.io/gh/apache/incubator-pinot/pull/3992?src=pr=h1) 
Report
   > Merging 
[#3992](https://codecov.io/gh/apache/incubator-pinot/pull/3992?src=pr=desc) 
into 
[master](https://codecov.io/gh/apache/incubator-pinot/commit/205ec5059cdf07dffc44355660642412bdbf3db5?src=pr=desc)
 will **increase** coverage by `<.01%`.
   > The diff coverage is `62.5%`.
   
   [![Impacted file tree 
graph](https://codecov.io/gh/apache/incubator-pinot/pull/3992/graphs/tree.svg?width=650=4ibza2ugkz=150=pr)](https://codecov.io/gh/apache/incubator-pinot/pull/3992?src=pr=tree)
   
   ```diff
   @@             Coverage Diff              @@
   ##             master    #3992      +/-   ##
   ============================================
   + Coverage     67.01%   67.01%   +<.01%
     Complexity       44       44
   ============================================
     Files          1032     1032
     Lines         51124    51127       +3
     Branches       7135     7137       +2
   ============================================
   + Hits          34260    34265       +5
   - Misses        14516    14518       +2
   + Partials       2348     2344       -4
   ```
   
   
   | [Impacted 
Files](https://codecov.io/gh/apache/incubator-pinot/pull/3992?src=pr=tree) | 
Coverage Δ | Complexity Δ | |
   |---|---|---|---|
   | 
[...time/converter/stats/RealtimeColumnStatistics.java](https://codecov.io/gh/apache/incubator-pinot/pull/3992/diff?src=pr=tree#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9yZWFsdGltZS9jb252ZXJ0ZXIvc3RhdHMvUmVhbHRpbWVDb2x1bW5TdGF0aXN0aWNzLmphdmE=)
 | `47.22% <62.5%> (-2.06%)` | `0 <0> (ø)` | |
   | 
[...er/validation/BrokerResourceValidationManager.java](https://codecov.io/gh/apache/incubator-pinot/pull/3992/diff?src=pr=tree#diff-cGlub3QtY29udHJvbGxlci9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29udHJvbGxlci92YWxpZGF0aW9uL0Jyb2tlclJlc291cmNlVmFsaWRhdGlvbk1hbmFnZXIuamF2YQ==)
 | `50% <0%> (-31.25%)` | `0% <0%> (ø)` | |
   | 
[...egation/function/customobject/MinMaxRangePair.java](https://codecov.io/gh/apache/incubator-pinot/pull/3992/diff?src=pr=tree#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9hZ2dyZWdhdGlvbi9mdW5jdGlvbi9jdXN0b21vYmplY3QvTWluTWF4UmFuZ2VQYWlyLmphdmE=)
 | `75.86% <0%> (-24.14%)` | `0% <0%> (ø)` | |
   | 
[...e/impl/dictionary/LongOnHeapMutableDictionary.java](https://codecov.io/gh/apache/incubator-pinot/pull/3992/diff?src=pr=tree#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9yZWFsdGltZS9pbXBsL2RpY3Rpb25hcnkvTG9uZ09uSGVhcE11dGFibGVEaWN0aW9uYXJ5LmphdmE=)
 | `88.88% <0%> (-6.67%)` | `0% <0%> (ø)` | |
   | 
[.../impl/dictionary/LongOffHeapMutableDictionary.java](https://codecov.io/gh/apache/incubator-pinot/pull/3992/diff?src=pr=tree#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9yZWFsdGltZS9pbXBsL2RpY3Rpb25hcnkvTG9uZ09mZkhlYXBNdXRhYmxlRGljdGlvbmFyeS5qYXZh)
 | `87.27% <0%> (-5.46%)` | `0% <0%> (ø)` | |
   | 
[...e/operator/dociditerators/BitmapDocIdIterator.java](https://codecov.io/gh/apache/incubator-pinot/pull/3992/diff?src=pr=tree#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9vcGVyYXRvci9kb2NpZGl0ZXJhdG9ycy9CaXRtYXBEb2NJZEl0ZXJhdG9yLmphdmE=)
 | `60.71% <0%> (-3.58%)` | `0% <0%> (ø)` | |
   | 
[...not/broker/broker/helix/ClusterChangeMediator.java](https://codecov.io/gh/apache/incubator-pinot/pull/3992/diff?src=pr=tree#diff-cGlub3QtYnJva2VyL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9icm9rZXIvYnJva2VyL2hlbGl4L0NsdXN0ZXJDaGFuZ2VNZWRpYXRvci5qYXZh)
 | `66.66% <0%> (-2.57%)` | `0% <0%> (ø)` | |
   | 
[...impl/dictionary/DoubleOnHeapMutableDictionary.java](https://codecov.io/gh/apache/incubator-pinot/pull/3992/diff?src=pr=tree#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9yZWFsdGltZS9pbXBsL2RpY3Rpb25hcnkvRG91YmxlT25IZWFwTXV0YWJsZURpY3Rpb25hcnkuamF2YQ==)
 | `64.44% <0%> (-2.23%)` | `0% <0%> (ø)` | |
   | 
[...a/org/apache/pinot/core/common/DataBlockCache.java](https://codecov.io/gh/apache/incubator-pinot/pull/3992/diff?src=pr=tree#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9jb21tb24vRGF0YUJsb2NrQ2FjaGUuamF2YQ==)
 | `78.62% <0%> (-1.53%)` | `0% <0%> (ø)` | |
   | 
[...g/apache/pinot/common/metrics/AbstractMetrics.java](https://codecov.io/gh/apache/incubator-pinot/pull/3992/diff?src=pr=tree#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9jb21tb24vbWV0cmljcy9BYnN0cmFjdE1ldHJpY3MuamF2YQ==)
 | `75.33% <0%> (-1.34%)` | `0% <0%> (ø)` | |
   | ... and [14 
more](https://codecov.io/gh/apache/incubator-pinot/pull/3992/diff?src=pr=tree-more)
 | |
   
   --
   
   [Continue to review full report at 

[GitHub] [incubator-pinot] harleyjj commented on issue #3996: [TE] Clean up the yaml editor calls and messages

2019-03-20 Thread GitBox
harleyjj commented on issue #3996: [TE] Clean up the yaml editor calls and 
messages
URL: https://github.com/apache/incubator-pinot/pull/3996#issuecomment-474996148
 
 
   These names seem much more clear - thank you!  It seems there are some 
branch conflicts, though.





[GitHub] [incubator-pinot] akshayrai merged pull request #3995: [TE] frontend - harleyjj/edit-alert - fix subscription group put bug

2019-03-20 Thread GitBox
akshayrai merged pull request #3995: [TE] frontend - harleyjj/edit-alert - fix 
subscription group put bug
URL: https://github.com/apache/incubator-pinot/pull/3995
 
 
   





[incubator-pinot] branch master updated: [TE] frontend - harleyjj/edit-alert - fix subscription group put bug (#3995)

2019-03-20 Thread akshayrai09
This is an automated email from the ASF dual-hosted git repository.

akshayrai09 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git


The following commit(s) were added to refs/heads/master by this push:
 new d8061f3  [TE] frontend - harleyjj/edit-alert - fix subscription group 
put bug (#3995)
d8061f3 is described below

commit d8061f3d0871e74c2cd960a3f205e12946c66219
Author: Harley Jackson 
AuthorDate: Wed Mar 20 12:31:38 2019 -0700

[TE] frontend - harleyjj/edit-alert - fix subscription group put bug (#3995)
---
 .../app/pods/components/yaml-editor/component.js | 9 -
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git 
a/thirdeye/thirdeye-frontend/app/pods/components/yaml-editor/component.js 
b/thirdeye/thirdeye-frontend/app/pods/components/yaml-editor/component.js
index 1777b90..2e8537f 100644
--- a/thirdeye/thirdeye-frontend/app/pods/components/yaml-editor/component.js
+++ b/thirdeye/thirdeye-frontend/app/pods/components/yaml-editor/component.js
@@ -303,7 +303,7 @@ export default Component.extend({
  */
 onYAMLGroupSelectionAction(value) {
   if(value.yaml) {
-set(this, 'currentYamlSettings', value.yaml);
+set(this, 'detectionSettingsYaml', value.yaml);
 set(this, 'groupName', value);
 set(this, 'subscriptionGroupId', value.id);
   }
@@ -420,11 +420,11 @@ export default Component.extend({
 async saveEditYamlAction() {
   const {
 alertYaml,
-currentYamlSettings,
+detectionSettingsYaml,
 notifications,
 alertId,
 subscriptionGroupId
-  } = getProperties(this, 'alertYaml', 'currentYamlSettings', 
'notifications', 'alertId', 'subscriptionGroupId');
+  } = getProperties(this, 'alertYaml', 'detectionSettingsYaml', 
'notifications', 'alertId', 'subscriptionGroupId');
   //PUT alert
   const alert_url = `/yaml/${alertId}`;
   const alertPostProps = {
@@ -445,12 +445,11 @@ export default Component.extend({
   } catch (error) {
 notifications.error('Save alert yaml file failed.', error);
   }
-
   //PUT settings
   const setting_url = `/yaml/subscription/${subscriptionGroupId}`;
   const settingsPostProps = {
 method: 'PUT',
-body: currentYamlSettings,
+body: detectionSettingsYaml,
 headers: { 'content-type': 'text/plain' }
   };
   try {





[GitHub] [incubator-pinot] akshayrai opened a new pull request #3996: [TE] Clean up the yaml editor calls and messages

2019-03-20 Thread GitBox
akshayrai opened a new pull request #3996: [TE] Clean up the yaml editor calls 
and messages
URL: https://github.com/apache/incubator-pinot/pull/3996
 
 
   





[GitHub] [incubator-pinot] codecov-io commented on issue #3993: In TableConfig, add checks for mandatory fields

2019-03-20 Thread GitBox
codecov-io commented on issue #3993: In TableConfig, add checks for mandatory 
fields
URL: https://github.com/apache/incubator-pinot/pull/3993#issuecomment-474986968
 
 
   # 
[Codecov](https://codecov.io/gh/apache/incubator-pinot/pull/3993?src=pr=h1) 
Report
   > Merging 
[#3993](https://codecov.io/gh/apache/incubator-pinot/pull/3993?src=pr=desc) 
into 
[master](https://codecov.io/gh/apache/incubator-pinot/commit/59fd4aab4480b6c62ba0bfe61f1a976d0ca31221?src=pr=desc)
 will **decrease** coverage by `6.5%`.
   > The diff coverage is `100%`.
   
   [![Impacted file tree 
graph](https://codecov.io/gh/apache/incubator-pinot/pull/3993/graphs/tree.svg?width=650=4ibza2ugkz=150=pr)](https://codecov.io/gh/apache/incubator-pinot/pull/3993?src=pr=tree)
   
   ```diff
   @@             Coverage Diff             @@
   ##             master    #3993      +/-   ##
   ===========================================
   - Coverage     56.98%   50.48%   -6.51%
     Complexity       44       44
   ===========================================
     Files          1032     1155     +123
     Lines         51124    58088    +6964
     Branches       7135     8046     +911
   ===========================================
   + Hits          29135    29325     +190
   - Misses        19832    26597    +6765
   - Partials       2157     2166       +9
   ```
   
   
   | [Impacted 
Files](https://codecov.io/gh/apache/incubator-pinot/pull/3993?src=pr=tree) | 
Coverage Δ | Complexity Δ | |
   |---|---|---|---|
   | 
[...va/org/apache/pinot/common/config/TableConfig.java](https://codecov.io/gh/apache/incubator-pinot/pull/3993/diff?src=pr=tree#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9jb21tb24vY29uZmlnL1RhYmxlQ29uZmlnLmphdmE=)
 | `69.54% <100%> (+1.94%)` | `0 <0> (ø)` | :arrow_down: |
   | 
[...lix/EmptyBrokerOnlineOfflineStateModelFactory.java](https://codecov.io/gh/apache/incubator-pinot/pull/3993/diff?src=pr=tree#diff-cGlub3QtY29udHJvbGxlci9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29udHJvbGxlci9oZWxpeC9FbXB0eUJyb2tlck9ubGluZU9mZmxpbmVTdGF0ZU1vZGVsRmFjdG9yeS5qYXZh)
 | `86.66% <0%> (-13.34%)` | `0% <0%> (ø)` | |
   | 
[.../startree/v2/builder/OffHeapSingleTreeBuilder.java](https://codecov.io/gh/apache/incubator-pinot/pull/3993/diff?src=pr=tree#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9zdGFydHJlZS92Mi9idWlsZGVyL09mZkhlYXBTaW5nbGVUcmVlQnVpbGRlci5qYXZh)
 | `86.3% <0%> (-4.17%)` | `0% <0%> (ø)` | |
   | 
[...n/java/org/apache/pinot/tools/SegmentDumpTool.java](https://codecov.io/gh/apache/incubator-pinot/pull/3993/diff?src=pr=tree#diff-cGlub3QtdG9vbHMvc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL3Bpbm90L3Rvb2xzL1NlZ21lbnREdW1wVG9vbC5qYXZh)
 | `0% <0%> (ø)` | `0% <0%> (?)` | |
   | 
[...segment/converter/PinotSegmentToJsonConverter.java](https://codecov.io/gh/apache/incubator-pinot/pull/3993/diff?src=pr=tree#diff-cGlub3QtdG9vbHMvc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL3Bpbm90L3Rvb2xzL3NlZ21lbnQvY29udmVydGVyL1Bpbm90U2VnbWVudFRvSnNvbkNvbnZlcnRlci5qYXZh)
 | `0% <0%> (ø)` | `0% <0%> (?)` | |
   | 
[...ot/tools/query/comparison/SegmentInfoProvider.java](https://codecov.io/gh/apache/incubator-pinot/pull/3993/diff?src=pr=tree#diff-cGlub3QtdG9vbHMvc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL3Bpbm90L3Rvb2xzL3F1ZXJ5L2NvbXBhcmlzb24vU2VnbWVudEluZm9Qcm92aWRlci5qYXZh)
 | `0% <0%> (ø)` | `0% <0%> (?)` | |
   | 
[...apache/pinot/tools/scan/query/AggregationFunc.java](https://codecov.io/gh/apache/incubator-pinot/pull/3993/diff?src=pr=tree#diff-cGlub3QtdG9vbHMvc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL3Bpbm90L3Rvb2xzL3NjYW4vcXVlcnkvQWdncmVnYXRpb25GdW5jLmphdmE=)
 | `0% <0%> (ø)` | `0% <0%> (?)` | |
   | 
[...t/tools/config/validator/TableConfigValidator.java](https://codecov.io/gh/apache/incubator-pinot/pull/3993/diff?src=pr=tree#diff-cGlub3QtdG9vbHMvc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL3Bpbm90L3Rvb2xzL2NvbmZpZy92YWxpZGF0b3IvVGFibGVDb25maWdWYWxpZGF0b3IuamF2YQ==)
 | `0% <0%> (ø)` | `0% <0%> (?)` | |
   | 
[...s/admin/command/BackfillDateTimeColumnCommand.java](https://codecov.io/gh/apache/incubator-pinot/pull/3993/diff?src=pr=tree#diff-cGlub3QtdG9vbHMvc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL3Bpbm90L3Rvb2xzL2FkbWluL2NvbW1hbmQvQmFja2ZpbGxEYXRlVGltZUNvbHVtbkNvbW1hbmQuamF2YQ==)
 | `0% <0%> (ø)` | `0% <0%> (?)` | |
   | 
[...g/apache/pinot/tools/perf/PerfBenchmarkRunner.java](https://codecov.io/gh/apache/incubator-pinot/pull/3993/diff?src=pr=tree#diff-cGlub3QtdG9vbHMvc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL3Bpbm90L3Rvb2xzL3BlcmYvUGVyZkJlbmNobWFya1J1bm5lci5qYXZh)
 | `0% <0%> (ø)` | `0% <0%> (?)` | |
   | ... and [121 
more](https://codecov.io/gh/apache/incubator-pinot/pull/3993/diff?src=pr=tree-more)
 | |
   
   --
   
   [Continue to review full report at 
Codecov](https://codecov.io/gh/apache/incubator-pinot/pull/3993?src=pr=continue).
   > **Legend** - [Click here to learn 
more](https://docs.codecov.io/docs/codecov-delta)
   > `Δ = absolute  (impact)`, `ø = not affected`, `? = missing data`
   > Powered by 

[GitHub] [incubator-pinot] harleyjj opened a new pull request #3995: [TE] frontend - harleyjj/edit-alert - fix subscription group put bug

2019-03-20 Thread GitBox
harleyjj opened a new pull request #3995: [TE] frontend - harleyjj/edit-alert - 
fix subscription group put bug
URL: https://github.com/apache/incubator-pinot/pull/3995
 
 
   





[GitHub] [incubator-pinot] kishoreg commented on a change in pull request #3992: Fixing type casting issue for BYTES type values during realtime segment persistence

2019-03-20 Thread GitBox
kishoreg commented on a change in pull request #3992: Fixing type casting issue 
for BYTES type values during realtime segment persistence
URL: https://github.com/apache/incubator-pinot/pull/3992#discussion_r267490711
 
 

 ##
 File path: 
pinot-core/src/main/java/org/apache/pinot/core/realtime/converter/stats/RealtimeColumnStatistics.java
 ##
 @@ -139,6 +140,11 @@ public boolean isSorted() {
   return false;
 }
 
+// Values in BYTES are not comparable
+if(_block.getMetadata().getDataType().getStoredType() == DataType.BYTES) {
 
 Review comment:
   +1 on using this comparable wrapper. Can we have a byte[] column that is 
sortable?





[GitHub] [incubator-pinot] xiangfu1 commented on a change in pull request #3992: Fixing type casting issue for BYTES type values during realtime segment persistence

2019-03-20 Thread GitBox
xiangfu1 commented on a change in pull request #3992: Fixing type casting issue 
for BYTES type values during realtime segment persistence
URL: https://github.com/apache/incubator-pinot/pull/3992#discussion_r267483867
 
 

 ##
 File path: 
pinot-core/src/main/java/org/apache/pinot/core/realtime/converter/stats/RealtimeColumnStatistics.java
 ##
 @@ -139,6 +140,11 @@ public boolean isSorted() {
   return false;
 }
 
+// Values in BYTES are not comparable
+if(_block.getMetadata().getDataType().getStoredType() == DataType.BYTES) {
 
 Review comment:
   Updated with ByteArray comparison.





[GitHub] [incubator-pinot] codecov-io edited a comment on issue #3989: Add experiment section in getting started doc

2019-03-20 Thread GitBox
codecov-io edited a comment on issue #3989: Add experiment section in getting 
started doc
URL: https://github.com/apache/incubator-pinot/pull/3989#issuecomment-474632607
 
 
   # 
[Codecov](https://codecov.io/gh/apache/incubator-pinot/pull/3989?src=pr=h1) 
Report
   > Merging 
[#3989](https://codecov.io/gh/apache/incubator-pinot/pull/3989?src=pr=desc) 
into 
[master](https://codecov.io/gh/apache/incubator-pinot/commit/59fd4aab4480b6c62ba0bfe61f1a976d0ca31221?src=pr=desc)
 will **increase** coverage by `10.19%`.
   > The diff coverage is `n/a`.
   
   [![Impacted file tree 
graph](https://codecov.io/gh/apache/incubator-pinot/pull/3989/graphs/tree.svg?width=650=4ibza2ugkz=150=pr)](https://codecov.io/gh/apache/incubator-pinot/pull/3989?src=pr=tree)
   
   ```diff
   @@              Coverage Diff              @@
   ##             master    #3989       +/-   ##
   =============================================
   + Coverage     56.98%   67.18%   +10.19%
     Complexity       44       44
   =============================================
     Files          1032     1032
     Lines         51124    51124
     Branches       7135     7135
   =============================================
   + Hits          29135    34348     +5213
   + Misses        19832    14410     -5422
   - Partials       2157     2366      +209
   ```
   
   
   | [Impacted 
Files](https://codecov.io/gh/apache/incubator-pinot/pull/3989?src=pr=tree) | 
Coverage Δ | Complexity Δ | |
   |---|---|---|---|
   | 
[...va/org/apache/pinot/common/data/TimeFieldSpec.java](https://codecov.io/gh/apache/incubator-pinot/pull/3989/diff?src=pr=tree#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9jb21tb24vZGF0YS9UaW1lRmllbGRTcGVjLmphdmE=)
 | `92.59% <0%> (-1.24%)` | `0% <0%> (ø)` | |
   | 
[...regation/function/customobject/QuantileDigest.java](https://codecov.io/gh/apache/incubator-pinot/pull/3989/diff?src=pr=tree#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9xdWVyeS9hZ2dyZWdhdGlvbi9mdW5jdGlvbi9jdXN0b21vYmplY3QvUXVhbnRpbGVEaWdlc3QuamF2YQ==)
 | `57.74% <0%> (-0.45%)` | `0% <0%> (ø)` | |
   | 
[...ator/transform/function/BaseTransformFunction.java](https://codecov.io/gh/apache/incubator-pinot/pull/3989/diff?src=pr=tree#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9vcGVyYXRvci90cmFuc2Zvcm0vZnVuY3Rpb24vQmFzZVRyYW5zZm9ybUZ1bmN0aW9uLmphdmE=)
 | `29.95% <0%> (+0.42%)` | `0% <0%> (ø)` | :arrow_down: |
   | 
[...g/apache/pinot/common/utils/helix/HelixHelper.java](https://codecov.io/gh/apache/incubator-pinot/pull/3989/diff?src=pr=tree#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9jb21tb24vdXRpbHMvaGVsaXgvSGVsaXhIZWxwZXIuamF2YQ==)
 | `56.25% <0%> (+0.56%)` | `0% <0%> (ø)` | :arrow_down: |
   | 
[...ment/creator/impl/SegmentColumnarIndexCreator.java](https://codecov.io/gh/apache/incubator-pinot/pull/3989/diff?src=pr=tree#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9zZWdtZW50L2NyZWF0b3IvaW1wbC9TZWdtZW50Q29sdW1uYXJJbmRleENyZWF0b3IuamF2YQ==)
 | `87.45% <0%> (+0.76%)` | `0% <0%> (ø)` | :arrow_down: |
   | 
[...a/org/apache/pinot/common/utils/ServiceStatus.java](https://codecov.io/gh/apache/incubator-pinot/pull/3989/diff?src=pr=tree#diff-cGlub3QtY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9waW5vdC9jb21tb24vdXRpbHMvU2VydmljZVN0YXR1cy5qYXZh)
 | `71.2% <0%> (+0.79%)` | `0% <0%> (ø)` | :arrow_down: |
   | 
[...r/transform/function/ValueInTransformFunction.java](https://codecov.io/gh/apache/incubator-pinot/pull/3989/diff?src=pr=tree#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9vcGVyYXRvci90cmFuc2Zvcm0vZnVuY3Rpb24vVmFsdWVJblRyYW5zZm9ybUZ1bmN0aW9uLmphdmE=)
 | `39.2% <0%> (+0.8%)` | `0% <0%> (ø)` | :arrow_down: |
   | 
[.../pinot/core/segment/index/SegmentMetadataImpl.java](https://codecov.io/gh/apache/incubator-pinot/pull/3989/diff?src=pr=tree#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9zZWdtZW50L2luZGV4L1NlZ21lbnRNZXRhZGF0YUltcGwuamF2YQ==)
 | `82% <0%> (+0.83%)` | `0% <0%> (ø)` | :arrow_down: |
   | 
[...e/io/writer/impl/MutableOffHeapByteArrayStore.java](https://codecov.io/gh/apache/incubator-pinot/pull/3989/diff?src=pr=tree#diff-cGlub3QtY29yZS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29yZS9pby93cml0ZXIvaW1wbC9NdXRhYmxlT2ZmSGVhcEJ5dGVBcnJheVN0b3JlLmphdmE=)
 | `86.45% <0%> (+1.04%)` | `0% <0%> (ø)` | :arrow_down: |
   | 
[.../helix/core/realtime/SegmentCompletionManager.java](https://codecov.io/gh/apache/incubator-pinot/pull/3989/diff?src=pr=tree#diff-cGlub3QtY29udHJvbGxlci9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvcGlub3QvY29udHJvbGxlci9oZWxpeC9jb3JlL3JlYWx0aW1lL1NlZ21lbnRDb21wbGV0aW9uTWFuYWdlci5qYXZh)
 | `69.51% <0%> (+1.09%)` | `0% <0%> (ø)` | :arrow_down: |
   | ... and [330 
more](https://codecov.io/gh/apache/incubator-pinot/pull/3989/diff?src=pr=tree-more)
 | |
   
   --
   
   [Continue to review full report at 

[GitHub] [incubator-pinot] xiangfu1 commented on a change in pull request #3992: Fixing type casting issue for BYTES type values during realtime segment persistence

2019-03-20 Thread GitBox
xiangfu1 commented on a change in pull request #3992: Fixing type casting issue 
for BYTES type values during realtime segment persistence
URL: https://github.com/apache/incubator-pinot/pull/3992#discussion_r267483867
 
 

 ##
 File path: 
pinot-core/src/main/java/org/apache/pinot/core/realtime/converter/stats/RealtimeColumnStatistics.java
 ##
 @@ -139,6 +140,11 @@ public boolean isSorted() {
   return false;
 }
 
+// Values in BYTES are not comparable
+if(_block.getMetadata().getDataType().getStoredType() == DataType.BYTES) {
 
 Review comment:
   Update with ByteArray comparison.





[GitHub] [incubator-pinot] jenniferdai opened a new pull request #3994: Adding ORC reader

2019-03-20 Thread GitBox
jenniferdai opened a new pull request #3994: Adding ORC reader
URL: https://github.com/apache/incubator-pinot/pull/3994
 
 
   * Performance generating the same segment with ORC and Avro
   ORC: 3 mins 41 seconds
   Avro: 3 mins 35 seconds
   * Will check accuracy, blocked on someone else to give me the segment, will 
update
   * Next steps - add multivalue columns and test/add byte arrays
   





[incubator-pinot] branch orc updated (63336cc -> ea75c00)

2019-03-20 Thread jenniferdai
This is an automated email from the ASF dual-hosted git repository.

jenniferdai pushed a change to branch orc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


from 63336cc  Fixing maven enforcer dependency issuse
 new d604449  Revert "Fixing maven enforcer dependency issuse"
 new ea75c00  Revert "Copying orc reader to pinot core for now so I don't 
have to edit the pinot script internally to publish jars" - giving up because 
so many dependency problems"

The 5853 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../java/org/apache/pinot/common/data/Schema.java  |   2 +-
 pinot-core/pom.xml |  24 ---
 .../pinot/orc/data/readers/ORCRecordReader.java| 186 -
 3 files changed, 1 insertion(+), 211 deletions(-)
 delete mode 100644 
pinot-core/src/main/java/org/apache/pinot/orc/data/readers/ORCRecordReader.java





[GitHub] [incubator-pinot] sunithabeeram commented on a change in pull request #3993: In TableConfig, add checks for mandatory fields

2019-03-20 Thread GitBox
sunithabeeram commented on a change in pull request #3993: In TableConfig, add 
checks for mandatory fields
URL: https://github.com/apache/incubator-pinot/pull/3993#discussion_r267479988
 
 

 ##
 File path: 
pinot-common/src/main/java/org/apache/pinot/common/config/TableConfig.java
 ##
 @@ -118,14 +118,24 @@ public static TableConfig fromJsonString(String 
jsonString)
   @Nonnull
   public static TableConfig fromJSONConfig(@Nonnull JsonNode jsonConfig)
   throws IOException {
+// Mandatory fields
+Preconditions.checkState(jsonConfig.has(TABLE_TYPE_KEY), "Table type is 
missing");
 TableType tableType = 
TableType.valueOf(jsonConfig.get(TABLE_TYPE_KEY).asText().toUpperCase());
+Preconditions.checkState(jsonConfig.has(TABLE_NAME_KEY), "Table name is 
missing");
 String tableName = 
TableNameBuilder.forType(tableType).tableNameWithType(jsonConfig.get(TABLE_NAME_KEY).asText());
-
+Preconditions
+.checkState(jsonConfig.has(VALIDATION_CONFIG_KEY), "Mandatory config 
'%s' is missing", VALIDATION_CONFIG_KEY);
 SegmentsValidationAndRetentionConfig validationConfig =
 extractChildConfig(jsonConfig, VALIDATION_CONFIG_KEY, 
SegmentsValidationAndRetentionConfig.class);
+Preconditions.checkState(jsonConfig.has(TENANT_CONFIG_KEY), "Mandatory 
config '%s' is missing", TENANT_CONFIG_KEY);
 TenantConfig tenantConfig = extractChildConfig(jsonConfig, 
TENANT_CONFIG_KEY, TenantConfig.class);
+Preconditions
+.checkState(jsonConfig.has(INDEXING_CONFIG_KEY), "Mandatory config 
'%s' is missing", INDEXING_CONFIG_KEY);
 
 Review comment:
   Are INDEXING and CUSTOM configs really mandatory?





[GitHub] [incubator-pinot] mayankshriv commented on a change in pull request #3992: Fixing type casting issue for BYTES type values during realtime segment persistence

2019-03-20 Thread GitBox
mayankshriv commented on a change in pull request #3992: Fixing type casting 
issue for BYTES type values during realtime segment persistence
URL: https://github.com/apache/incubator-pinot/pull/3992#discussion_r267478728
 
 

 ##
 File path: 
pinot-core/src/main/java/org/apache/pinot/core/realtime/converter/stats/RealtimeColumnStatistics.java
 ##
 @@ -139,6 +140,11 @@ public boolean isSorted() {
   return false;
 }
 
+// Values in BYTES are not comparable
+if(_block.getMetadata().getDataType().getStoredType() == DataType.BYTES) {
 
 Review comment:
   There's a wrapper around byte[] that is comparable: 
org.apache.pinot.common.utils.primitive.ByteArray
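For context on why a wrapper is needed: raw byte[] does not implement Comparable, so sorting or min/max tracking over a BYTES column requires an explicit ordering. Below is a hypothetical minimal wrapper (not Pinot's ByteArray, whose actual implementation may differ) that imposes the usual unsigned lexicographic order:

```java
public class ComparableBytes implements Comparable<ComparableBytes> {
  private final byte[] _bytes;

  public ComparableBytes(byte[] bytes) {
    _bytes = bytes;
  }

  @Override
  public int compareTo(ComparableBytes other) {
    // Compare byte-by-byte as unsigned values; Java bytes are signed,
    // so mask with 0xFF before subtracting.
    int length = Math.min(_bytes.length, other._bytes.length);
    for (int i = 0; i < length; i++) {
      int diff = (_bytes[i] & 0xFF) - (other._bytes[i] & 0xFF);
      if (diff != 0) {
        return diff;
      }
    }
    // Equal prefix: the shorter array sorts first.
    return _bytes.length - other._bytes.length;
  }

  public static void main(String[] args) {
    ComparableBytes a = new ComparableBytes(new byte[]{1, 2});
    ComparableBytes b = new ComparableBytes(new byte[]{1, 3});
    System.out.println(a.compareTo(b) < 0);  // prints true
  }
}
```

Under such an ordering a byte[] column can in principle be treated as sortable, which is the question raised earlier in this review thread.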





[incubator-pinot] branch master updated: Update managing pinot doc (#3991)

2019-03-20 Thread sunithabeeram
This is an automated email from the ASF dual-hosted git repository.

sunithabeeram pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git


The following commit(s) were added to refs/heads/master by this push:
 new 205ec50  Update managing pinot doc (#3991)
205ec50 is described below

commit 205ec5059cdf07dffc44355660642412bdbf3db5
Author: Jialiang Li 
AuthorDate: Wed Mar 20 11:09:24 2019 -0700

Update managing pinot doc (#3991)
---
 docs/management_api.rst | 21 ++---
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/docs/management_api.rst b/docs/management_api.rst
index e3f3c9d..9468bd1 100644
--- a/docs/management_api.rst
+++ b/docs/management_api.rst
@@ -26,7 +26,7 @@ Pinot Management Console
 
 
 There is a REST API which allows management of tables, tenants, segments and 
schemas. It can be accessed by going to
-``http://[controller host]/help`` which offers a web UI to do these tasks, as 
well as document the REST API. The below
+``http://[controller_host]/help`` which offers a web UI to do these tasks, as 
well as document the REST API. The below
 is the screenshot of the console.
 
   .. figure:: img/pinot-console.png
@@ -43,17 +43,16 @@ To rebalance segments of a table across servers:
 pinot-admin.sh
 --
 
-It can be used instead of the ``pinot-admin.sh`` commands to automate the 
creation of tables and tenants. The script
-can be generated by running ``mvn install package -DskipTests -Pbin-dist`` in 
the directory
-in which you checked out Pinot.
+``pinot-admin.sh`` is another way of managing Pinot cluster. This script can 
be generated by running
+``mvn install package -DskipTests -Pbin-dist`` in the directory in which you 
checked out Pinot.
 
 For example, to create a pinot segment:
 
 .. code-block:: none
 
-  $ 
./pinot-distribution/target/apache-pinot-incubating-0.1.0-SNAPSHOT-bin/apache-pinot-incubating-0.1.0-SNAPSHOT-bin/bin/pinot-admin.sh
 CreateSegment -dataDir /Users/jlli/Desktop/test/ -format CSV -outDir 
/Users/jlli/Desktop/test2/ -tableName baseballStats -segmentName 
baseballStats_data -overwrite -schemaFile 
./pinot-distribution/target/apache-pinot-incubating-0.1.0-SNAPSHOT-bin/apache-pinot-incubating-0.1.0-SNAPSHOT-bin/sample_data/baseballStats_schema.json
-  Executing command: CreateSegment  -generatorConfigFile null -dataDir 
/Users/jlli/Desktop/test/ -format CSV -outDir /Users/jlli/Desktop/test2/ 
-overwrite true -tableName baseballStats -segmentName baseballStats_data 
-timeColumnName null -schemaFile 
./pinot-distribution/target/apache-pinot-incubating-0.1.0-SNAPSHOT-bin/apache-pinot-incubating-0.1.0-SNAPSHOT-bin/sample_data/baseballStats_schema.json
 -readerConfigFile null -enableStarTreeIndex false -starTreeIndexSpecFile null 
-hllSize 9 - [...]
-  Accepted files: [/Users/jlli/Desktop/test/baseballStats_data.csv]
+  $ 
./pinot-distribution/target/apache-pinot-incubating-0.1.0-SNAPSHOT-bin/apache-pinot-incubating-0.1.0-SNAPSHOT-bin/bin/pinot-admin.sh
 CreateSegment -dataDir /Users/host1/Desktop/test/ -format CSV -outDir 
/Users/host1/Desktop/test2/ -tableName baseballStats -segmentName 
baseballStats_data -overwrite -schemaFile 
./pinot-distribution/target/apache-pinot-incubating-0.1.0-SNAPSHOT-bin/apache-pinot-incubating-0.1.0-SNAPSHOT-bin/sample_data/baseballStats_schema.json
+  Executing command: CreateSegment  -generatorConfigFile null -dataDir 
/Users/host1/Desktop/test/ -format CSV -outDir /Users/host1/Desktop/test2/ 
-overwrite true -tableName baseballStats -segmentName baseballStats_data 
-timeColumnName null -schemaFile 
./pinot-distribution/target/apache-pinot-incubating-0.1.0-SNAPSHOT-bin/apache-pinot-incubating-0.1.0-SNAPSHOT-bin/sample_data/baseballStats_schema.json
 -readerConfigFile null -enableStarTreeIndex false -starTreeIndexSpecFile null 
-hllSize 9 [...]
+  Accepted files: [/Users/host1/Desktop/test/baseballStats_data.csv]
   Finished building StatsCollector!
   Collected stats for 97889 documents
   Created dictionary for INT column: homeRuns with cardinality: 67, range: 0 
to 73
@@ -84,9 +83,9 @@ For example, to create a pinot segment:
   Start building IndexCreator!
   Finished records indexing in IndexCreator!
   Finished segment seal!
-  Converting segment: /Users/jlli/Desktop/test2/baseballStats_data_0 to v3 
format
-  v3 segment location for segment: baseballStats_data_0 is 
/Users/jlli/Desktop/test2/baseballStats_data_0/v3
-  Deleting files in v1 segment directory: 
/Users/jlli/Desktop/test2/baseballStats_data_0
+  Converting segment: /Users/host1/Desktop/test2/baseballStats_data_0 to v3 
format
+  v3 segment location for segment: baseballStats_data_0 is 
/Users/host1/Desktop/test2/baseballStats_data_0/v3
+  Deleting files in v1 segment directory: 
/Users/host1/Desktop/test2/baseballStats_data_0
   Driver, record read time : 369
   Driver, stats collector time : 0
   Driver, indexing time : 373
@@ -95,5 +94,5 @@ To query a 

[incubator-pinot] branch management-api-doc deleted (was a340bdf)

2019-03-20 Thread sunithabeeram
This is an automated email from the ASF dual-hosted git repository.

sunithabeeram pushed a change to branch management-api-doc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 was a340bdf  Update managing pinot doc

The revisions that were on this branch are still contained in
other references; therefore, this change does not discard any commits
from the repository.





[GitHub] [incubator-pinot] sunithabeeram merged pull request #3991: Update managing pinot documentation

2019-03-20 Thread GitBox
sunithabeeram merged pull request #3991: Update managing pinot documentation
URL: https://github.com/apache/incubator-pinot/pull/3991
 
 
   





[GitHub] [incubator-pinot] sunithabeeram merged pull request #3989: Add experiment section in getting started doc

2019-03-20 Thread GitBox
sunithabeeram merged pull request #3989: Add experiment section in getting 
started doc
URL: https://github.com/apache/incubator-pinot/pull/3989
 
 
   





[GitHub] [incubator-pinot] Jackie-Jiang opened a new pull request #3993: In TableConfig, add checks for mandatory fields

2019-03-20 Thread GitBox
Jackie-Jiang opened a new pull request #3993: In TableConfig, add checks for 
mandatory fields
URL: https://github.com/apache/incubator-pinot/pull/3993
 
 
   Add explicit checks for mandatory fields
   Without the explicit checks, it will throw NPE, which is not clear and hard 
to debug





[incubator-pinot] branch table_config_mandatory_fields created (now a409684)

2019-03-20 Thread jackie
This is an automated email from the ASF dual-hosted git repository.

jackie pushed a change to branch table_config_mandatory_fields
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


  at a409684  In TableConfig, add checks for mandatory fields

This branch includes the following new commits:

 new a409684  In TableConfig, add checks for mandatory fields

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.






[incubator-pinot] 01/01: In TableConfig, add checks for mandatory fields

2019-03-20 Thread jackie
This is an automated email from the ASF dual-hosted git repository.

jackie pushed a commit to branch table_config_mandatory_fields
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git

commit a409684e5eacfe71b7293b13ca85c4d89df58a07
Author: Jackie (Xiaotian) Jiang 
AuthorDate: Wed Mar 20 11:01:20 2019 -0700

In TableConfig, add checks for mandatory fields

Add explicit checks for mandatory fields
Without the explicit checks, it will throw NPE, which is not clear and hard 
to debug
---
 .../apache/pinot/common/config/TableConfig.java|  32 ++-
 .../pinot/common/config/TableConfigTest.java   | 300 +
 2 files changed, 216 insertions(+), 116 deletions(-)

diff --git 
a/pinot-common/src/main/java/org/apache/pinot/common/config/TableConfig.java 
b/pinot-common/src/main/java/org/apache/pinot/common/config/TableConfig.java
index 3779d2c..09856d4 100644
--- a/pinot-common/src/main/java/org/apache/pinot/common/config/TableConfig.java
+++ b/pinot-common/src/main/java/org/apache/pinot/common/config/TableConfig.java
@@ -118,14 +118,24 @@ public class TableConfig {
   @Nonnull
   public static TableConfig fromJSONConfig(@Nonnull JsonNode jsonConfig)
   throws IOException {
+// Mandatory fields
+Preconditions.checkState(jsonConfig.has(TABLE_TYPE_KEY), "Table type is 
missing");
 TableType tableType = 
TableType.valueOf(jsonConfig.get(TABLE_TYPE_KEY).asText().toUpperCase());
+Preconditions.checkState(jsonConfig.has(TABLE_NAME_KEY), "Table name is 
missing");
 String tableName = 
TableNameBuilder.forType(tableType).tableNameWithType(jsonConfig.get(TABLE_NAME_KEY).asText());
-
+Preconditions
+.checkState(jsonConfig.has(VALIDATION_CONFIG_KEY), "Mandatory config 
'%s' is missing", VALIDATION_CONFIG_KEY);
 SegmentsValidationAndRetentionConfig validationConfig =
 extractChildConfig(jsonConfig, VALIDATION_CONFIG_KEY, 
SegmentsValidationAndRetentionConfig.class);
+Preconditions.checkState(jsonConfig.has(TENANT_CONFIG_KEY), "Mandatory 
config '%s' is missing", TENANT_CONFIG_KEY);
 TenantConfig tenantConfig = extractChildConfig(jsonConfig, 
TENANT_CONFIG_KEY, TenantConfig.class);
+Preconditions
+.checkState(jsonConfig.has(INDEXING_CONFIG_KEY), "Mandatory config 
'%s' is missing", INDEXING_CONFIG_KEY);
 IndexingConfig indexingConfig = extractChildConfig(jsonConfig, 
INDEXING_CONFIG_KEY, IndexingConfig.class);
+Preconditions.checkState(jsonConfig.has(CUSTOM_CONFIG_KEY), "Mandatory 
config '%s' is missing", CUSTOM_CONFIG_KEY);
 TableCustomConfig customConfig = extractChildConfig(jsonConfig, 
CUSTOM_CONFIG_KEY, TableCustomConfig.class);
+
+// Optional fields
 QuotaConfig quotaConfig = null;
 if (jsonConfig.has(QUOTA_CONFIG_KEY)) {
   quotaConfig = extractChildConfig(jsonConfig, QUOTA_CONFIG_KEY, 
QuotaConfig.class);
@@ -184,15 +194,29 @@ public class TableConfig {
   public static TableConfig fromZnRecord(@Nonnull ZNRecord znRecord)
   throws IOException {
 Map<String, String> simpleFields = znRecord.getSimpleFields();
+
+// Mandatory fields
+Preconditions.checkState(simpleFields.containsKey(TABLE_TYPE_KEY), "Table 
type is missing");
 TableType tableType = 
TableType.valueOf(simpleFields.get(TABLE_TYPE_KEY).toUpperCase());
+Preconditions.checkState(simpleFields.containsKey(TABLE_NAME_KEY), "Table 
name is missing");
 String tableName = 
TableNameBuilder.forType(tableType).tableNameWithType(simpleFields.get(TABLE_NAME_KEY));
+Preconditions.checkState(simpleFields.containsKey(VALIDATION_CONFIG_KEY), 
"Mandatory config '%s' is missing",
+VALIDATION_CONFIG_KEY);
 SegmentsValidationAndRetentionConfig validationConfig =
 JsonUtils.stringToObject(simpleFields.get(VALIDATION_CONFIG_KEY), 
SegmentsValidationAndRetentionConfig.class);
+Preconditions
+.checkState(simpleFields.containsKey(TENANT_CONFIG_KEY), "Mandatory 
config '%s' is missing", TENANT_CONFIG_KEY);
 TenantConfig tenantConfig = 
JsonUtils.stringToObject(simpleFields.get(TENANT_CONFIG_KEY), 
TenantConfig.class);
+Preconditions.checkState(simpleFields.containsKey(INDEXING_CONFIG_KEY), 
"Mandatory config '%s' is missing",
+INDEXING_CONFIG_KEY);
 IndexingConfig indexingConfig =
 JsonUtils.stringToObject(simpleFields.get(INDEXING_CONFIG_KEY), 
IndexingConfig.class);
+Preconditions
+.checkState(simpleFields.containsKey(CUSTOM_CONFIG_KEY), "Mandatory 
config '%s' is missing", CUSTOM_CONFIG_KEY);
 TableCustomConfig customConfig =
 JsonUtils.stringToObject(simpleFields.get(CUSTOM_CONFIG_KEY), 
TableCustomConfig.class);
+
+// Optional fields
 QuotaConfig quotaConfig = null;
 String quotaConfigString = simpleFields.get(QUOTA_CONFIG_KEY);
 if (quotaConfigString != null) {
@@ -204,9 +228,8 @@ public class TableConfig {
 if (taskConfigString != null) {
   taskConfig = JsonUtils.stringToObject(taskConfigString, 

[incubator-pinot] branch orc updated: Fixing maven enforcer dependency issuse

2019-03-20 Thread jenniferdai
This is an automated email from the ASF dual-hosted git repository.

jenniferdai pushed a commit to branch orc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git


The following commit(s) were added to refs/heads/orc by this push:
 new 63336cc  Fixing maven enforcer dependency issuse
63336cc is described below

commit 63336ccef5dcaa23b6fada57f70d8c2b55f234c4
Author: Jennifer Dai 
AuthorDate: Wed Mar 20 10:50:51 2019 -0700

Fixing maven enforcer dependency issuse
---
 pinot-core/pom.xml | 16 
 1 file changed, 16 insertions(+)

diff --git a/pinot-core/pom.xml b/pinot-core/pom.xml
index 82a1fb3..c3f02cf 100644
--- a/pinot-core/pom.xml
+++ b/pinot-core/pom.xml
@@ -216,10 +216,26 @@
 
     <dependency>
       <groupId>org.apache.orc</groupId>
       <artifactId>orc-core</artifactId>
+      <exclusions>
+        <exclusion>
+          <groupId>org.apache.hadoop</groupId>
+          <artifactId>hadoop-annotations</artifactId>
+        </exclusion>
+      </exclusions>
     </dependency>
     <dependency>
       <groupId>org.apache.orc</groupId>
       <artifactId>orc-mapreduce</artifactId>
+      <exclusions>
+        <exclusion>
+          <groupId>org.apache.hadoop</groupId>
+          <artifactId>hadoop-annotations</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>org.apache.hadoop</groupId>
+          <artifactId>hadoop-yarn-common</artifactId>
+        </exclusion>
+      </exclusions>
     </dependency>
 





[GitHub] [incubator-pinot] fx19880617 opened a new pull request #3992: Fixing type casting issue for BYTES type values during realtime segment persistence

2019-03-20 Thread GitBox
fx19880617 opened a new pull request #3992: Fixing type casting issue for BYTES 
type values during realtime segment persistence
URL: https://github.com/apache/incubator-pinot/pull/3992
 
 
   





[GitHub] [incubator-pinot] jackjlli commented on a change in pull request #3989: Add experiment section in getting started doc

2019-03-20 Thread GitBox
jackjlli commented on a change in pull request #3989: Add experiment section in 
getting started doc
URL: https://github.com/apache/incubator-pinot/pull/3989#discussion_r267461373
 
 

 ##
 File path: docs/getting_started.rst
 ##
 @@ -94,3 +94,159 @@ show up in Pinot.
 To show new events appearing, one can run :sql:`SELECT * FROM meetupRsvp ORDER 
BY mtime DESC LIMIT 50` repeatedly, which shows the
 last events that were ingested by Pinot.
 
+Experimenting with Pinot
+
+
+Now we have a quick start Pinot cluster running locally. The below shows a 
step-by-step instruction on
+how to add a simple table to the Pinot system, how to upload segments, and how 
to query it.
+
+Suppose we have a transcript in CSV format containing students' basic info and 
their scores of each subject.
+
++------------+------------+-----------+-----------+-----------+-----------+
+| studentID  | firstName  | lastName  |   gender  |  subject  |   score   |
++============+============+===========+===========+===========+===========+
+| 200        | Lucy       |   Smith   |   Female  |   Maths   |    3.8    |
++------------+------------+-----------+-----------+-----------+-----------+
+| 200        | Lucy       |   Smith   |   Female  |  English  |    3.5    |
++------------+------------+-----------+-----------+-----------+-----------+
+| 201        | Bob        |    King   |    Male   |   Maths   |    3.2    |
++------------+------------+-----------+-----------+-----------+-----------+
+| 202        | Nick       |   Young   |    Male   |  Physics  |    3.6    |
++------------+------------+-----------+-----------+-----------+-----------+
+
+Firstly in order to set up a table, we need to specify the schema of this 
transcript.
+
+.. code-block:: none
+
+  {
+"schemaName": "transcript",
+"dimensionFieldSpecs": [
+  {
+"name": "studentID",
+"dataType": "STRING"
+  },
+  {
+"name": "firstName",
+"dataType": "STRING"
+  },
+  {
+"name": "lastName",
+"dataType": "STRING"
+  },
+  {
+"name": "gender",
+"dataType": "STRING"
+  },
+  {
+"name": "subject",
+"dataType": "STRING"
+  }
+],
+"metricFieldSpecs": [
+  {
+"name": "score",
+"dataType": "FLOAT"
+  }
+]
+  }
+
+To upload the schema, we can use the command below:
+
+.. code-block:: none
+
+  $ 
./pinot-distribution/target/apache-pinot-incubating-0.1.0-SNAPSHOT-bin/apache-pinot-incubating-0.1.0-SNAPSHOT-bin/bin/pinot-admin.sh
 AddSchema -schemaFile /Users/jlli/transcript-schema.json -exec
+  Executing command: AddSchema -controllerHost 172.25.119.20 -controllerPort 
9000 -schemaFilePath /Users/jlli/transcript-schema.json -exec
 
 Review comment:
   Addressed.





[GitHub] [incubator-pinot] jackjlli commented on a change in pull request #3986: Modify documentation in managing Pinot page

2019-03-20 Thread GitBox
jackjlli commented on a change in pull request #3986: Modify documentation in 
managing Pinot page
URL: https://github.com/apache/incubator-pinot/pull/3986#discussion_r267458317
 
 

 ##
 File path: docs/management_api.rst
 ##
 @@ -26,26 +26,74 @@ Pinot Management Console
 
 
 There is a REST API which allows management of tables, tenants, segments and 
schemas. It can be accessed by going to
-``http://[controller host]/help`` which offers a web UI to do these tasks, as 
well as document the REST API.
+``http://[controller host]/help`` which offers a web UI to do these tasks, as 
well as document the REST API. The below
+is the screenshot of the console.
 
-For example, list all the schema within Pinot cluster:
+  .. figure:: img/pinot-console.png
+
+For example, to list all the schemas within Pinot cluster:
 
   .. figure:: img/list-schemas.png
 
-Upload a pinot segment:
+To rebalance segments of a table across servers:
 
-  .. figure:: img/upload-segment.png
+  .. figure:: img/rebalance-table.png
 
 
-Pinot-admin.sh
+pinot-admin.sh
 --
 
-It can be used instead of the ``pinot-admin.sh`` commands to automate the 
creation of tables and tenants.
+It can be used instead of the ``pinot-admin.sh`` commands to automate the 
creation of tables and tenants. The script
 
 Review comment:
   Addressed in https://github.com/apache/incubator-pinot/pull/3991.





[GitHub] [incubator-pinot] jackjlli opened a new pull request #3991: Update managing pinot documentation

2019-03-20 Thread GitBox
jackjlli opened a new pull request #3991: Update managing pinot documentation
URL: https://github.com/apache/incubator-pinot/pull/3991
 
 
   





[incubator-pinot] branch management-api-doc updated (35adb1f -> a340bdf)

2019-03-20 Thread jlli
This is an automated email from the ASF dual-hosted git repository.

jlli pushed a change to branch management-api-doc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard 35adb1f  Update managing pinot doc
 new a340bdf  Update managing pinot doc

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (35adb1f)
\
 N -- N -- N   refs/heads/management-api-doc (a340bdf)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 5859 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 docs/management_api.rst | 16 
 1 file changed, 8 insertions(+), 8 deletions(-)





[incubator-pinot] 01/01: Update managing pinot doc

2019-03-20 Thread jlli
This is an automated email from the ASF dual-hosted git repository.

jlli pushed a commit to branch management-api-doc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git

commit 35adb1fe511731883a780d8070c9865abf9b6535
Author: jackjlli 
AuthorDate: Wed Mar 20 10:20:19 2019 -0700

Update managing pinot doc
---
 docs/management_api.rst | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/docs/management_api.rst b/docs/management_api.rst
index e3f3c9d..1752b26 100644
--- a/docs/management_api.rst
+++ b/docs/management_api.rst
@@ -43,9 +43,8 @@ To rebalance segments of a table across servers:
 pinot-admin.sh
 --
 
-It can be used instead of the ``pinot-admin.sh`` commands to automate the 
creation of tables and tenants. The script
-can be generated by running ``mvn install package -DskipTests -Pbin-dist`` in 
the directory
-in which you checked out Pinot.
+``pinot-admin.sh`` is another way of managing Pinot cluster. This script can 
be generated by running
+``mvn install package -DskipTests -Pbin-dist`` in the directory in which you 
checked out Pinot.
 
 For example, to create a pinot segment:
 





[incubator-pinot] branch management-api-doc created (now 35adb1f)

2019-03-20 Thread jlli
This is an automated email from the ASF dual-hosted git repository.

jlli pushed a change to branch management-api-doc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


  at 35adb1f  Update managing pinot doc

This branch includes the following new commits:

 new 35adb1f  Update managing pinot doc

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.






[incubator-pinot] branch orc updated (8b60f6f -> e51533f)

2019-03-20 Thread jenniferdai
This is an automated email from the ASF dual-hosted git repository.

jenniferdai pushed a change to branch orc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard 8b60f6f  Adding orc reader
 new e51533f  Adding orc reader

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (8b60f6f)
\
 N -- N -- N   refs/heads/orc (e51533f)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 5849 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 pinot-common/src/main/java/org/apache/pinot/common/data/Schema.java | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)





[GitHub] [incubator-pinot] mcvsubbu commented on issue #3978: The segments are stored in memory

2019-03-20 Thread GitBox
mcvsubbu commented on issue #3978: The segments are stored in memory
URL: 
https://github.com/apache/incubator-pinot/issues/3978#issuecomment-474932084
 
 
   You can read up on MMAP here : 
https://en.wikipedia.org/wiki/Memory-mapped_file
   A segment is MMAPed as long as the server is alive and hosts the segment. If 
the server is restarted, it is mmaped again.
   
   If you want to change the load mode of segments, you can update the table 
config with the new load mode (MMAP), and use the controller console to reload 
the segment (or all segments of a table). If you do this, the direct-allocated 
memory will be released and the segment will now be loaded in MMAP mode.
   
   Hope this helps.
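The mechanism the comment describes can be sketched with plain JDK APIs. This is not Pinot's segment loader, only the underlying idea of an MMAP load mode: the file is mapped into the process's address space and the OS pages its contents in and out on demand, instead of the process holding a heap or direct-buffer copy of the whole segment:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapSketch {
  public static void main(String[] args) throws IOException {
    // Write a tiny stand-in "segment" file.
    Path file = Files.createTempFile("segment", ".bin");
    Files.write(file, new byte[]{10, 20, 30});

    try (FileChannel channel = FileChannel.open(file, StandardOpenOption.READ)) {
      // Map the whole file read-only; reads hit the OS page cache,
      // and pages can be evicted under memory pressure and re-faulted later.
      MappedByteBuffer buffer = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
      System.out.println(buffer.get(1)); // prints 20
    }
    Files.deleteIfExists(file);
  }
}
```

This also illustrates why a restart is cheap in MMAP mode, as noted above: remapping the file is a metadata operation, and no data is copied until pages are actually touched.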





[incubator-pinot] branch orc updated (2f03bf8 -> 8b60f6f)

2019-03-20 Thread jenniferdai
This is an automated email from the ASF dual-hosted git repository.

jenniferdai pushed a change to branch orc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard 2f03bf8  Adding orc reader
 new 8b60f6f  Adding orc reader

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (2f03bf8)
\
 N -- N -- N   refs/heads/orc (8b60f6f)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 5849 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 pinot-distribution/pom.xml | 4 
 pom.xml| 5 +
 2 files changed, 9 insertions(+)





[incubator-pinot] branch orc updated (eb5e528 -> 2f03bf8)

2019-03-20 Thread jenniferdai
This is an automated email from the ASF dual-hosted git repository.

jenniferdai pushed a change to branch orc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard eb5e528  Adding orc reader
 new 2f03bf8  Adding orc reader

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (eb5e528)
\
 N -- N -- N   refs/heads/orc (2f03bf8)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 5849 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../pinot/orc/data/readers/ORCRecordReader.java| 77 +++---
 1 file changed, 55 insertions(+), 22 deletions(-)





[GitHub] [incubator-pinot] sunithabeeram commented on a change in pull request #3989: Add experiment section in getting started doc

2019-03-20 Thread GitBox
sunithabeeram commented on a change in pull request #3989: Add experiment 
section in getting started doc
URL: https://github.com/apache/incubator-pinot/pull/3989#discussion_r267346057
 
 

 ##
 File path: docs/getting_started.rst
 ##
 @@ -94,3 +94,159 @@ show up in Pinot.
 To show new events appearing, one can run :sql:`SELECT * FROM meetupRsvp ORDER 
BY mtime DESC LIMIT 50` repeatedly, which shows the
 last events that were ingested by Pinot.
 
+Experimenting with Pinot
+
+
+Now we have a quick start Pinot cluster running locally. The below shows a 
step-by-step instruction on
+how to add a simple table to the Pinot system, how to upload segments, and how 
to query it.
+
+Suppose we have a transcript in CSV format containing students' basic info and 
their scores of each subject.
+
++------------+------------+-----------+-----------+-----------+-----------+
+| studentID  | firstName  | lastName  |   gender  |  subject  |   score   |
++============+============+===========+===========+===========+===========+
+| 200        | Lucy       |   Smith   |   Female  |   Maths   |    3.8    |
++------------+------------+-----------+-----------+-----------+-----------+
+| 200        | Lucy       |   Smith   |   Female  |  English  |    3.5    |
++------------+------------+-----------+-----------+-----------+-----------+
+| 201        | Bob        |    King   |    Male   |   Maths   |    3.2    |
++------------+------------+-----------+-----------+-----------+-----------+
+| 202        | Nick       |   Young   |    Male   |  Physics  |    3.6    |
++------------+------------+-----------+-----------+-----------+-----------+
+
+First, in order to set up a table, we need to specify the schema of this transcript.
+
+.. code-block:: none
+
+  {
+"schemaName": "transcript",
+"dimensionFieldSpecs": [
+  {
+"name": "studentID",
+"dataType": "STRING"
+  },
+  {
+"name": "firstName",
+"dataType": "STRING"
+  },
+  {
+"name": "lastName",
+"dataType": "STRING"
+  },
+  {
+"name": "gender",
+"dataType": "STRING"
+  },
+  {
+"name": "subject",
+"dataType": "STRING"
+  }
+],
+"metricFieldSpecs": [
+  {
+"name": "score",
+"dataType": "FLOAT"
+  }
+]
+  }
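Before uploading, the schema JSON above can be sanity-checked locally. The following is a minimal sketch (not part of Pinot) that parses the definition and lists its columns:

```python
import json

# The "transcript" schema from the doc above, verbatim.
schema_json = """
{
  "schemaName": "transcript",
  "dimensionFieldSpecs": [
    {"name": "studentID", "dataType": "STRING"},
    {"name": "firstName", "dataType": "STRING"},
    {"name": "lastName", "dataType": "STRING"},
    {"name": "gender", "dataType": "STRING"},
    {"name": "subject", "dataType": "STRING"}
  ],
  "metricFieldSpecs": [
    {"name": "score", "dataType": "FLOAT"}
  ]
}
"""

def summarize_schema(raw):
    """Return (schemaName, [(column, dataType, kind), ...]) for a Pinot schema document."""
    schema = json.loads(raw)
    columns = []
    for kind in ("dimensionFieldSpecs", "metricFieldSpecs"):
        for spec in schema.get(kind, []):
            columns.append((spec["name"], spec["dataType"], kind))
    return schema["schemaName"], columns

name, cols = summarize_schema(schema_json)
print(name, len(cols))  # -> transcript 6
```

A failed `json.loads` here would catch a malformed schema file before it ever reaches the controller.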
+
+To upload the schema, we can use the command below:
+
+.. code-block:: none
+
+  $ ./pinot-distribution/target/apache-pinot-incubating-0.1.0-SNAPSHOT-bin/apache-pinot-incubating-0.1.0-SNAPSHOT-bin/bin/pinot-admin.sh AddSchema -schemaFile /Users/jlli/transcript-schema.json -exec
+  Executing command: AddSchema -controllerHost 172.25.119.20 -controllerPort 9000 -schemaFilePath /Users/jlli/transcript-schema.json -exec
 
 Review comment:
   Please remove specific ip and host name references
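For context on the `AddSchema -exec` command in the diff above: it effectively POSTs the schema file to the controller over HTTP. Below is a hedged sketch of the equivalent request; the `/schemas` endpoint path and the `localhost:9000` address are assumptions here, and no request is actually sent:

```python
import json
import urllib.request

def build_add_schema_request(controller, schema):
    """Build (but do not send) the HTTP request that uploads a schema to the controller.

    The /schemas path is an assumption based on what AddSchema appears to do;
    consult the controller's REST API docs (/help) for the authoritative endpoint.
    """
    body = json.dumps(schema).encode("utf-8")
    return urllib.request.Request(
        url=f"http://{controller}/schemas",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_add_schema_request("localhost:9000", {"schemaName": "transcript"})
print(req.full_url, req.get_method())  # -> http://localhost:9000/schemas POST
```

Sending it would be a one-liner with `urllib.request.urlopen(req)` against a running controller.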


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[incubator-pinot] branch master updated: Add documentation (#3986)

2019-03-20 Thread sunithabeeram
This is an automated email from the ASF dual-hosted git repository.

sunithabeeram pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git


The following commit(s) were added to refs/heads/master by this push:
 new 59fd4aa  Add documentation (#3986)
59fd4aa is described below

commit 59fd4aab4480b6c62ba0bfe61f1a976d0ca31221
Author: Jialiang Li 
AuthorDate: Wed Mar 20 06:47:45 2019 -0700

Add documentation (#3986)
---
 docs/img/generate-segment.png | Bin 218597 -> 0 bytes
 docs/img/list-schemas.png | Bin 8952 -> 247946 bytes
 docs/img/pinot-console.png| Bin 0 -> 157310 bytes
 docs/img/query-table.png  | Bin 35914 -> 0 bytes
 docs/img/rebalance-table.png  | Bin 0 -> 164989 bytes
 docs/img/upload-segment.png   | Bin 13944 -> 0 bytes
 docs/management_api.rst   |  68 +++---
 7 files changed, 58 insertions(+), 10 deletions(-)

diff --git a/docs/img/generate-segment.png b/docs/img/generate-segment.png
deleted file mode 100644
index 5848781..000
Binary files a/docs/img/generate-segment.png and /dev/null differ
diff --git a/docs/img/list-schemas.png b/docs/img/list-schemas.png
index 9b00855..3bd0685 100644
Binary files a/docs/img/list-schemas.png and b/docs/img/list-schemas.png differ
diff --git a/docs/img/pinot-console.png b/docs/img/pinot-console.png
new file mode 100644
index 000..f73405b
Binary files /dev/null and b/docs/img/pinot-console.png differ
diff --git a/docs/img/query-table.png b/docs/img/query-table.png
deleted file mode 100644
index 5859f2b..000
Binary files a/docs/img/query-table.png and /dev/null differ
diff --git a/docs/img/rebalance-table.png b/docs/img/rebalance-table.png
new file mode 100644
index 000..65de953
Binary files /dev/null and b/docs/img/rebalance-table.png differ
diff --git a/docs/img/upload-segment.png b/docs/img/upload-segment.png
deleted file mode 100644
index edd348d..000
Binary files a/docs/img/upload-segment.png and /dev/null differ
diff --git a/docs/management_api.rst b/docs/management_api.rst
index 5a7a0e4..e3f3c9d 100644
--- a/docs/management_api.rst
+++ b/docs/management_api.rst
@@ -26,26 +26,74 @@ Pinot Management Console
 
 
 There is a REST API which allows management of tables, tenants, segments and schemas. It can be accessed by going to
-``http://[controller host]/help`` which offers a web UI to do these tasks, as well as document the REST API.
+``http://[controller host]/help``, which offers a web UI to do these tasks, as well as documenting the REST API. A
+screenshot of the console is shown below.
 
-For example, list all the schema within Pinot cluster:
+  .. figure:: img/pinot-console.png
+
+For example, to list all the schemas within Pinot cluster:
 
   .. figure:: img/list-schemas.png
 
-Upload a pinot segment:
+To rebalance segments of a table across servers:
 
-  .. figure:: img/upload-segment.png
+  .. figure:: img/rebalance-table.png
 
 
-Pinot-admin.sh
+pinot-admin.sh
 --------------
 
-It can be used instead of the ``pinot-admin.sh`` commands to automate the creation of tables and tenants.
+The ``pinot-admin.sh`` script can be used instead of the REST API to automate the creation of tables and tenants.
+The script can be generated by running ``mvn install package -DskipTests -Pbin-dist`` in the directory
+in which you checked out Pinot.
+
+For example, to create a pinot segment:
+
+.. code-block:: none
 
-For example, create a pinot segment:
+  $ ./pinot-distribution/target/apache-pinot-incubating-0.1.0-SNAPSHOT-bin/apache-pinot-incubating-0.1.0-SNAPSHOT-bin/bin/pinot-admin.sh CreateSegment -dataDir /Users/jlli/Desktop/test/ -format CSV -outDir /Users/jlli/Desktop/test2/ -tableName baseballStats -segmentName baseballStats_data -overwrite -schemaFile ./pinot-distribution/target/apache-pinot-incubating-0.1.0-SNAPSHOT-bin/apache-pinot-incubating-0.1.0-SNAPSHOT-bin/sample_data/baseballStats_schema.json
+  Executing command: CreateSegment  -generatorConfigFile null -dataDir /Users/jlli/Desktop/test/ -format CSV -outDir /Users/jlli/Desktop/test2/ -overwrite true -tableName baseballStats -segmentName baseballStats_data -timeColumnName null -schemaFile ./pinot-distribution/target/apache-pinot-incubating-0.1.0-SNAPSHOT-bin/apache-pinot-incubating-0.1.0-SNAPSHOT-bin/sample_data/baseballStats_schema.json -readerConfigFile null -enableStarTreeIndex false -starTreeIndexSpecFile null -hllSize 9 - [...]
+  Accepted files: [/Users/jlli/Desktop/test/baseballStats_data.csv]
+  Finished building StatsCollector!
+  Collected stats for 97889 documents
+  Created dictionary for INT column: homeRuns with cardinality: 67, range: 0 to 73
+  Created dictionary for INT column: playerStint with cardinality: 5, range: 1 to 5
+  Created dictionary for INT column: groundedIntoDoublePlays with cardinality: 35, range: 0 to 36
+  Created dictionary for INT column: numberOfGames with cardinality: 165, range: 1 to 165
+  Created 
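The per-column dictionary statistics in the log output above (cardinality and value range) are straightforward to illustrate. The sketch below shows the idea; the sample values are made up and not taken from the baseballStats data:

```python
def column_stats(values):
    """Cardinality (distinct-value count) and min..max range, as CreateSegment logs per column."""
    distinct = set(values)
    return len(distinct), min(distinct), max(distinct)

# Illustrative INT column values only -- not real baseballStats data.
home_runs = [0, 3, 12, 0, 73, 3, 7]
cardinality, lo, hi = column_stats(home_runs)
print(f"cardinality: {cardinality}, range: {lo} to {hi}")  # -> cardinality: 5, range: 0 to 73
```

Low cardinality relative to row count is what makes dictionary encoding pay off: each row stores a small dictionary id instead of the raw value.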

[incubator-pinot] branch add-docs deleted (was 9b50d89)

2019-03-20 Thread sunithabeeram
This is an automated email from the ASF dual-hosted git repository.

sunithabeeram pushed a change to branch add-docs
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 was 9b50d89  Add documentation

The revisions that were on this branch are still contained in
other references; therefore, this change does not discard any commits
from the repository.





[GitHub] [incubator-pinot] sunithabeeram merged pull request #3986: Modify documentation in managing Pinot page

2019-03-20 Thread GitBox
sunithabeeram merged pull request #3986: Modify documentation in managing Pinot 
page
URL: https://github.com/apache/incubator-pinot/pull/3986
 
 
   





[GitHub] [incubator-pinot] sunithabeeram commented on a change in pull request #3986: Modify documentation in managing Pinot page

2019-03-20 Thread GitBox
sunithabeeram commented on a change in pull request #3986: Modify documentation 
in managing Pinot page
URL: https://github.com/apache/incubator-pinot/pull/3986#discussion_r267344684
 
 

 ##
 File path: docs/management_api.rst
 ##
 @@ -26,26 +26,74 @@ Pinot Management Console
 
 
 There is a REST API which allows management of tables, tenants, segments and 
schemas. It can be accessed by going to
-``http://[controller host]/help`` which offers a web UI to do these tasks, as 
well as document the REST API.
+``http://[controller host]/help`` which offers a web UI to do these tasks, as 
well as document the REST API. The below
+is the screenshot of the console.
 
-For example, list all the schema within Pinot cluster:
+  .. figure:: img/pinot-console.png
+
+For example, to list all the schemas within Pinot cluster:
 
   .. figure:: img/list-schemas.png
 
-Upload a pinot segment:
+To rebalance segments of a table across servers:
 
-  .. figure:: img/upload-segment.png
+  .. figure:: img/rebalance-table.png
 
 
-Pinot-admin.sh
+pinot-admin.sh
 --------------
 
-It can be used instead of the ``pinot-admin.sh`` commands to automate the 
creation of tables and tenants.
+It can be used instead of the ``pinot-admin.sh`` commands to automate the 
creation of tables and tenants. The script
 
 Review comment:
   What can be used instead of pinot-admin.sh? (The section title seems to be 
pinot-admin.sh...)

