[incubator-pinot] branch nested-object-indexing updated (c2d822d -> 3140bb9)

2019-03-20 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch nested-object-indexing
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard c2d822d  fixing license header
 discard 635c451  Adding support for bytes type in realtime + nested object indexing
 discard 946499e  Wiring up end to end to support indexing nested fields on complex objects
 discard 9c09912e Adding support for Object Type
 discard 1f6b4c6  Enhancing PQL to support MATCHES predicate, can be used for searching within text, map, json and other complex objects
 discard 9099d30  Adding support for MATCHES Predicate
 new 581464e  Spelling correction (#3977)
 new 636c6c1  [TE] frontend - harleyjj/home - set default date picker to yesterday (#3976)
 new 67b729d  [TE] yaml - more validation on max duration (#3982)
 new 9e8e373  [TE] detection - preview a yaml with existing anomalies (#3983)
 new 565171f  add config to control kafka fetcher size and increase default (#3869)
 new f815e2e  [TE] detection - align metric slices (#3981)
 new df62374  [TE] frontend - harleyjj/preview - default preview to 2 days to accomodate daily metrics (#3980)
 new 2c5d42a  [TE] frontend - harleyjj/report-anomaly - adds back report-anomaly modal to alert overview (#3985)
 new d2a3d84  [TE] Fix for delayed anomalies due to watermark bug (#3984)
 new fe203b5  Pinot server side change to optimize LLC segment completion with direct metadata upload. (#3941)
 new f26b2f3  [TE] Remove deprecated legacy logic in user dashboard (#3988)
 new 31f4fd0  [TE] frontend - harleyjj/edit-alert - update endpoint for preview when editing alert (#3987)
 new 59fd4aa  Add documentation (#3986)
 new 98dcebc  Add experiment section in getting started (#3989)
 new 205ec50  Update managing pinot doc (#3991)
 new d8061f3  [TE] frontend - harleyjj/edit-alert - fix subscription group put bug (#3995)
 new 6eb8e79  Fixing type casting issue for BYTES type values during realtime segment persistence (#3992)
 new d78a807  [TE] Clean up the yaml editor calls and messages (#3996)
 new e8ac3b3  Adding support for MATCHES Predicate
 new 35eace3  Enhancing PQL to support MATCHES predicate, can be used for searching within text, map, json and other complex objects
 new 0b2ddf0  Adding support for Object Type
 new 5e394ca  Wiring up end to end to support indexing nested fields on complex objects
 new 2bcdfef  Adding support for bytes type in realtime + nested object indexing
 new 48f5b40  fixing license header
 new 3140bb9  Adding simple avro msg decoder which could read avro schema from table creation config

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (c2d822d)
            \
             N -- N -- N   refs/heads/nested-object-indexing (3140bb9)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 5870 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 docs/client_api.rst                                |   2 +-
 docs/getting_started.rst                           | 156 +
 docs/img/generate-segment.png                      | Bin 218597 -> 0 bytes
 docs/img/list-schemas.png                          | Bin 8952 -> 247946 bytes
 docs/img/pinot-console.png                         | Bin 0 -> 157310 bytes
 docs/img/query-table.png                           | Bin 35914 -> 0 bytes
 docs/img/rebalance-table.png                       | Bin 0 -> 164989 bytes
 docs/img/upload-segment.png                        | Bin 13944 -> 0 bytes
 docs/management_api.rst                            |  67 +++--
 .../protocols/SegmentCompletionProtocol.java       |   6 +
 .../apache/pinot/common/utils/CommonConstants.java |   1 +
 .../common/utils/FileUploadDownloadClient.java     |  24 
 .../manager/config/InstanceDataManagerConfig.java  |   2 +
 .../realtime/LLRealtimeSegmentDataManager.java     |  48 ++-
 .../converter/stats/RealtimeColumnStatistics.java  |  20 ++-
 .../impl/kafka/KafkaConnectionHandler.java         |  26 ++--
 .../impl/kafka/KafkaLowLevelStreamConfig.java      |  34 -
 .../impl/kafka/KafkaPartitionLevelConsumer.java    |  21 ++-

[incubator-pinot] branch master updated: Exit 1 when caught exception in Pinot Admin command. (#4065)

2019-04-04 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git


The following commit(s) were added to refs/heads/master by this push:
 new 641e401  Exit 1 when caught exception in Pinot Admin command. (#4065)
641e401 is described below

commit 641e401061b33ede09868e4a2d488515a8f610ce
Author: Xiang Fu 
AuthorDate: Thu Apr 4 16:12:01 2019 -0700

Exit 1 when caught exception in Pinot Admin command. (#4065)

* Exit 1 when caught exception in Pinot Admin command.

* Only exit(1) for failure requests
---
 .../pinot/integration/tests/ChaosMonkeyIntegrationTest.java  | 1 +
 .../java/org/apache/pinot/tools/admin/PinotAdministrator.java| 9 +
 .../org/apache/pinot/tools/admin/command/AddTableCommand.java| 5 +
 3 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/pinot-integration-tests/src/test/java/org/apache/pinot/integration/tests/ChaosMonkeyIntegrationTest.java b/pinot-integration-tests/src/test/java/org/apache/pinot/integration/tests/ChaosMonkeyIntegrationTest.java
index 8dcb9ed..5bcd7d6 100644
--- a/pinot-integration-tests/src/test/java/org/apache/pinot/integration/tests/ChaosMonkeyIntegrationTest.java
+++ b/pinot-integration-tests/src/test/java/org/apache/pinot/integration/tests/ChaosMonkeyIntegrationTest.java
@@ -52,6 +52,7 @@ public class ChaosMonkeyIntegrationTest {
 
   private Process runAdministratorCommand(String[] args) {
 String classpath = System.getProperty("java.class.path");
+System.getProperties().setProperty("pinot.admin.system.exit", "false");
 List<String> completeArgs = new ArrayList<>();
 completeArgs.add("java");
 completeArgs.add("-Xms4G");
diff --git a/pinot-tools/src/main/java/org/apache/pinot/tools/admin/PinotAdministrator.java b/pinot-tools/src/main/java/org/apache/pinot/tools/admin/PinotAdministrator.java
index b2c3335..f490e2c 100644
--- a/pinot-tools/src/main/java/org/apache/pinot/tools/admin/PinotAdministrator.java
+++ b/pinot-tools/src/main/java/org/apache/pinot/tools/admin/PinotAdministrator.java
@@ -81,8 +81,7 @@ public class PinotAdministrator {
 return _status;
   }
 
-  public void execute(String[] args)
-  throws Exception {
+  public void execute(String[] args) {
 try {
   CmdLineParser parser = new CmdLineParser(this);
   parser.parseArgument(args);
@@ -102,10 +101,12 @@ public class PinotAdministrator {
 }
   }
 
-  public static void main(String[] args)
-  throws Exception {
+  public static void main(String[] args) {
 PinotAdministrator pinotAdministrator = new PinotAdministrator();
 pinotAdministrator.execute(args);
+if (!System.getProperties().getProperty("pinot.admin.system.exit", "true").equalsIgnoreCase("false")) {
+  System.exit(pinotAdministrator.getStatus() ? 0 : 1);
+}
   }
 
   public void printUsage() {
diff --git a/pinot-tools/src/main/java/org/apache/pinot/tools/admin/command/AddTableCommand.java b/pinot-tools/src/main/java/org/apache/pinot/tools/admin/command/AddTableCommand.java
index 7079303..bc57050 100644
--- a/pinot-tools/src/main/java/org/apache/pinot/tools/admin/command/AddTableCommand.java
+++ b/pinot-tools/src/main/java/org/apache/pinot/tools/admin/command/AddTableCommand.java
@@ -87,6 +87,11 @@ public class AddTableCommand extends AbstractBaseAdminCommand implements Command
 return this;
   }
 
+  public AddTableCommand setControllerHost(String controllerHost) {
+_controllerHost = controllerHost;
+return this;
+  }
+
   public AddTableCommand setControllerPort(String controllerPort) {
 _controllerPort = controllerPort;
 return this;
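
The change above is a small but useful CLI pattern: the command's boolean status is propagated into the process exit code, and the System.exit call is gated behind a system property so integration tests (like ChaosMonkeyIntegrationTest above) can run commands in-process without the JVM being killed. A minimal, self-contained sketch of the same pattern; class and property names here are illustrative, not the actual Pinot sources:

    // Sketch: exit non-zero on failure, but let tests disable System.exit.
    // "DemoAdmin" and "demo.admin.system.exit" are hypothetical names.
    public class DemoAdmin {
      private boolean _status = false;

      public void execute(String[] args) {
        try {
          // ... parse args and run the requested sub-command ...
          _status = true;  // only reached on success
        } catch (Exception e) {
          System.err.println("Command failed: " + e.getMessage());
        }
      }

      public boolean getStatus() {
        return _status;
      }

      public static void main(String[] args) {
        DemoAdmin admin = new DemoAdmin();
        admin.execute(args);
        // Tests set -Ddemo.admin.system.exit=false to keep the JVM alive.
        if (!System.getProperty("demo.admin.system.exit", "true").equalsIgnoreCase("false")) {
          System.exit(admin.getStatus() ? 0 : 1);
        }
      }
    }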


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch hex_string_for_byte_array created (now 0a7f4bf)

2019-03-29 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch hex_string_for_byte_array
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


  at 0a7f4bf  Use hex string as the representation of byte array for queries

This branch includes the following new commits:

 new 0a7f4bf  Use hex string as the representation of byte array for queries

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.



-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch hex_string_for_byte_array updated: Adding test for indexing hexstring

2019-03-30 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch hex_string_for_byte_array
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git


The following commit(s) were added to refs/heads/hex_string_for_byte_array by this push:
 new 04e6cfa  Adding test for indexing hexstring
04e6cfa is described below

commit 04e6cfa8c4b6991ee8becb3e090ea7c133a0e69e
Author: Xiang Fu 
AuthorDate: Fri Mar 29 23:25:32 2019 -0700

Adding test for indexing hexstring
---
 .../core/realtime/impl/dictionary/BytesOffHeapMutableDictionary.java  | 4 ++--
 .../core/realtime/impl/dictionary/BytesOnHeapMutableDictionary.java   | 4 ++--
 .../pinot/core/realtime/impl/dictionary/MutableDictionaryTest.java| 4 +++-
 3 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/pinot-core/src/main/java/org/apache/pinot/core/realtime/impl/dictionary/BytesOffHeapMutableDictionary.java b/pinot-core/src/main/java/org/apache/pinot/core/realtime/impl/dictionary/BytesOffHeapMutableDictionary.java
index 134b7f0..16ca1ce 100644
--- a/pinot-core/src/main/java/org/apache/pinot/core/realtime/impl/dictionary/BytesOffHeapMutableDictionary.java
+++ b/pinot-core/src/main/java/org/apache/pinot/core/realtime/impl/dictionary/BytesOffHeapMutableDictionary.java
@@ -57,7 +57,6 @@ public class BytesOffHeapMutableDictionary extends BaseOffHeapMutableDictionary
 
   @Override
   public int indexOf(Object rawValue) {
-assert rawValue instanceof byte[];
 byte[] bytes = null;
 // Convert hex string to byte[].
 if (rawValue instanceof String) {
@@ -67,6 +66,7 @@ public class BytesOffHeapMutableDictionary extends BaseOffHeapMutableDictionary
 Utils.rethrowException(e);
   }
 } else {
+  assert rawValue instanceof byte[];
   bytes = (byte[]) rawValue;
 }
 return getDictId(new ByteArray(bytes), bytes);
@@ -95,7 +95,6 @@ public class BytesOffHeapMutableDictionary extends BaseOffHeapMutableDictionary
 
   @Override
   public void index(@Nonnull Object rawValue) {
-assert rawValue instanceof byte[];
 byte[] bytes = null;
 // Convert hex string to byte[].
 if (rawValue instanceof String) {
@@ -105,6 +104,7 @@ public class BytesOffHeapMutableDictionary extends BaseOffHeapMutableDictionary
 Utils.rethrowException(e);
   }
 } else {
+  assert rawValue instanceof byte[];
   bytes = (byte[]) rawValue;
 }
 ByteArray byteArray = new ByteArray(bytes);
diff --git a/pinot-core/src/main/java/org/apache/pinot/core/realtime/impl/dictionary/BytesOnHeapMutableDictionary.java b/pinot-core/src/main/java/org/apache/pinot/core/realtime/impl/dictionary/BytesOnHeapMutableDictionary.java
index f0ebe67..91f51b2 100644
--- a/pinot-core/src/main/java/org/apache/pinot/core/realtime/impl/dictionary/BytesOnHeapMutableDictionary.java
+++ b/pinot-core/src/main/java/org/apache/pinot/core/realtime/impl/dictionary/BytesOnHeapMutableDictionary.java
@@ -36,7 +36,6 @@ public class BytesOnHeapMutableDictionary extends BaseOnHeapMutableDictionary {
 
   @Override
   public int indexOf(Object rawValue) {
-assert rawValue instanceof byte[];
 byte[] bytes = null;
 // Convert hex string to byte[].
 if (rawValue instanceof String) {
@@ -46,6 +45,7 @@ public class BytesOnHeapMutableDictionary extends BaseOnHeapMutableDictionary {
 Utils.rethrowException(e);
   }
 } else {
+  assert rawValue instanceof byte[];
   bytes = (byte[]) rawValue;
 }
 return getDictId(new ByteArray(bytes));
@@ -63,7 +63,6 @@ public class BytesOnHeapMutableDictionary extends BaseOnHeapMutableDictionary {
 
   @Override
   public void index(@Nonnull Object rawValue) {
-assert rawValue instanceof byte[];
 byte[] bytes = null;
 // Convert hex string to byte[].
 if (rawValue instanceof String) {
@@ -73,6 +72,7 @@ public class BytesOnHeapMutableDictionary extends BaseOnHeapMutableDictionary {
 Utils.rethrowException(e);
   }
 } else {
+  assert rawValue instanceof byte[];
   bytes = (byte[]) rawValue;
 }
 ByteArray byteArray = new ByteArray(bytes);
diff --git a/pinot-core/src/test/java/org/apache/pinot/core/realtime/impl/dictionary/MutableDictionaryTest.java b/pinot-core/src/test/java/org/apache/pinot/core/realtime/impl/dictionary/MutableDictionaryTest.java
index ce79b10..99a28d2 100644
--- a/pinot-core/src/test/java/org/apache/pinot/core/realtime/impl/dictionary/MutableDictionaryTest.java
+++ b/pinot-core/src/test/java/org/apache/pinot/core/realtime/impl/dictionary/MutableDictionaryTest.java
@@ -30,6 +30,7 @@ import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import java.util.concurrent.Future;
 import java.util.stream.Collectors;
+import org.apache.commons.codec.binary.Hex;
 import org.apache.commons.lang.RandomStringUtils;
 import org.apache.pinot.common.data.FieldSpec;
 import org.apache.pinot.common.utils.primitive.ByteArray

[incubator-pinot] branch hex_string_for_byte_array updated (807e15c -> 79305e8)

2019-03-29 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch hex_string_for_byte_array
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard 807e15c  Use hex string as the representation of byte array for queries
 new 79305e8  Use hex string as the representation of byte array for queries

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (807e15c)
            \
             N -- N -- N   refs/heads/hex_string_for_byte_array (79305e8)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 5898 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../dictionary/BytesOffHeapMutableDictionary.java  | 29 --
 .../dictionary/BytesOnHeapMutableDictionary.java   | 25 +--
 2 files changed, 39 insertions(+), 15 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] 01/01: Use hex string as the representation of byte array for queries

2019-03-29 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch hex_string_for_byte_array
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git

commit 0a7f4bf17662a7d801c07c88b05b00d31cf04be5
Author: Xiang Fu 
AuthorDate: Fri Mar 29 18:39:17 2019 -0700

Use hex string as the representation of byte array for queries
---
 .../dictionary/BytesOffHeapMutableDictionary.java   | 17 +++--
 .../dictionary/BytesOnHeapMutableDictionary.java| 21 +++--
 2 files changed, 34 insertions(+), 4 deletions(-)

diff --git a/pinot-core/src/main/java/org/apache/pinot/core/realtime/impl/dictionary/BytesOffHeapMutableDictionary.java b/pinot-core/src/main/java/org/apache/pinot/core/realtime/impl/dictionary/BytesOffHeapMutableDictionary.java
index 816e0de..fd0964e 100644
--- a/pinot-core/src/main/java/org/apache/pinot/core/realtime/impl/dictionary/BytesOffHeapMutableDictionary.java
+++ b/pinot-core/src/main/java/org/apache/pinot/core/realtime/impl/dictionary/BytesOffHeapMutableDictionary.java
@@ -21,6 +21,7 @@ package org.apache.pinot.core.realtime.impl.dictionary;
 import java.io.IOException;
 import java.util.Arrays;
 import javax.annotation.Nonnull;
+import javax.xml.bind.DatatypeConverter;
 import org.apache.pinot.common.utils.primitive.ByteArray;
 import org.apache.pinot.core.io.readerwriter.PinotDataBufferMemoryManager;
 import org.apache.pinot.core.io.writer.impl.MutableOffHeapByteArrayStore;
@@ -53,7 +54,13 @@ public class BytesOffHeapMutableDictionary extends BaseOffHeapMutableDictionary
   @Override
   public int indexOf(Object rawValue) {
 assert rawValue instanceof byte[];
-byte[] bytes = (byte[]) rawValue;
+byte[] bytes;
+// Convert hex string to byte[].
+if (rawValue instanceof String) {
+  bytes = DatatypeConverter.parseHexBinary((String) rawValue);
+} else {
+  bytes = (byte[]) rawValue;
+}
 return getDictId(new ByteArray(bytes), bytes);
   }
 
@@ -81,7 +88,13 @@ public class BytesOffHeapMutableDictionary extends BaseOffHeapMutableDictionary
   @Override
   public void index(@Nonnull Object rawValue) {
 assert rawValue instanceof byte[];
-byte[] bytes = (byte[]) rawValue;
+byte[] bytes;
+// Convert hex string to byte[].
+if (rawValue instanceof String) {
+  bytes = DatatypeConverter.parseHexBinary((String) rawValue);
+} else {
+  bytes =  (byte[]) rawValue;
+}
 ByteArray byteArray = new ByteArray(bytes);
 indexValue(byteArray, bytes);
 updateMinMax(byteArray);
diff --git a/pinot-core/src/main/java/org/apache/pinot/core/realtime/impl/dictionary/BytesOnHeapMutableDictionary.java b/pinot-core/src/main/java/org/apache/pinot/core/realtime/impl/dictionary/BytesOnHeapMutableDictionary.java
index df36098..9dbee55 100644
--- a/pinot-core/src/main/java/org/apache/pinot/core/realtime/impl/dictionary/BytesOnHeapMutableDictionary.java
+++ b/pinot-core/src/main/java/org/apache/pinot/core/realtime/impl/dictionary/BytesOnHeapMutableDictionary.java
@@ -18,8 +18,10 @@
  */
 package org.apache.pinot.core.realtime.impl.dictionary;
 
+import java.nio.charset.StandardCharsets;
 import java.util.Arrays;
 import javax.annotation.Nonnull;
+import javax.xml.bind.DatatypeConverter;
 import org.apache.pinot.common.utils.primitive.ByteArray;
 
 
@@ -27,13 +29,21 @@ import org.apache.pinot.common.utils.primitive.ByteArray;
  * OnHeap mutable dictionary of Bytes type.
  */
 public class BytesOnHeapMutableDictionary extends BaseOnHeapMutableDictionary {
+
   private ByteArray _min = null;
   private ByteArray _max = null;
 
   @Override
   public int indexOf(Object rawValue) {
 assert rawValue instanceof byte[];
-return getDictId(new ByteArray((byte[]) rawValue));
+byte[] bytes;
+// Convert hex string to byte[].
+if (rawValue instanceof String) {
+  bytes = DatatypeConverter.parseHexBinary((String) rawValue);
+} else {
+  bytes = (byte[]) rawValue;
+}
+return getDictId(new ByteArray(bytes));
   }
 
   @Override
@@ -49,7 +59,14 @@ public class BytesOnHeapMutableDictionary extends BaseOnHeapMutableDictionary {
   @Override
   public void index(@Nonnull Object rawValue) {
 assert rawValue instanceof byte[];
-ByteArray byteArray = new ByteArray((byte[]) rawValue);
+byte[] bytes;
+// Convert hex string to byte[].
+if (rawValue instanceof String) {
+  bytes = DatatypeConverter.parseHexBinary((String) rawValue);
+} else {
+  bytes =  (byte[]) rawValue;
+}
+ByteArray byteArray = new ByteArray(bytes);
 indexValue(byteArray);
 updateMinMax(byteArray);
   }
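
The essence of this commit is that the mutable dictionary now accepts a query-side hex string wherever it previously required byte[], converting it up front with DatatypeConverter.parseHexBinary. A self-contained sketch of that conversion (hand-rolled here so it runs without the javax.xml.bind or commons-codec dependencies the actual commits use):

    // Hex <-> byte[] round trip, mirroring what parseHexBinary does above.
    public class HexDemo {
      static byte[] hexToBytes(String hex) {
        if (hex.length() % 2 != 0) {
          throw new IllegalArgumentException("Hex string must have even length: " + hex);
        }
        byte[] bytes = new byte[hex.length() / 2];
        for (int i = 0; i < bytes.length; i++) {
          // Each byte is encoded as two hex digits.
          bytes[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
        }
        return bytes;
      }

      static String bytesToHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder(bytes.length * 2);
        for (byte b : bytes) {
          sb.append(String.format("%02x", b));
        }
        return sb.toString();
      }

      public static void main(String[] args) {
        System.out.println(bytesToHex(hexToBytes("deadbeef")));  // prints: deadbeef
      }
    }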


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch hex_string_for_byte_array updated (0a7f4bf -> 807e15c)

2019-03-29 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch hex_string_for_byte_array
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard 0a7f4bf  Use hex string as the representation of byte array for queries
 new 807e15c  Use hex string as the representation of byte array for queries

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (0a7f4bf)
            \
             N -- N -- N   refs/heads/hex_string_for_byte_array (807e15c)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 5898 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../core/realtime/impl/dictionary/BytesOnHeapMutableDictionary.java  | 1 -
 1 file changed, 1 deletion(-)


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch hex_string_for_byte_array updated: Address comments

2019-04-01 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch hex_string_for_byte_array
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git


The following commit(s) were added to refs/heads/hex_string_for_byte_array by this push:
 new eac1351  Address comments
eac1351 is described below

commit eac135187c8ed61bc4df8ba24a8fc18130f5d9b0
Author: Xiang Fu 
AuthorDate: Mon Apr 1 10:32:31 2019 -0700

Address comments
---
 .../impl/dictionary/BytesOffHeapMutableDictionary.java| 11 +--
 .../impl/dictionary/BytesOnHeapMutableDictionary.java |  8 
 2 files changed, 9 insertions(+), 10 deletions(-)

diff --git a/pinot-core/src/main/java/org/apache/pinot/core/realtime/impl/dictionary/BytesOffHeapMutableDictionary.java b/pinot-core/src/main/java/org/apache/pinot/core/realtime/impl/dictionary/BytesOffHeapMutableDictionary.java
index 16ca1ce..5220749 100644
--- a/pinot-core/src/main/java/org/apache/pinot/core/realtime/impl/dictionary/BytesOffHeapMutableDictionary.java
+++ b/pinot-core/src/main/java/org/apache/pinot/core/realtime/impl/dictionary/BytesOffHeapMutableDictionary.java
@@ -51,15 +51,16 @@ public class BytesOffHeapMutableDictionary extends BaseOffHeapMutableDictionary
   public BytesOffHeapMutableDictionary(int estimatedCardinality, int maxOverflowHashSize,
       PinotDataBufferMemoryManager memoryManager, String allocationContext, int avgLength) {
     super(estimatedCardinality, maxOverflowHashSize, memoryManager, allocationContext);
-    _byteStore = new MutableOffHeapByteArrayStore(memoryManager, allocationContext,
-        estimatedCardinality, avgLength);
+    _byteStore = new MutableOffHeapByteArrayStore(memoryManager, allocationContext, estimatedCardinality, avgLength);
   }
 
   @Override
   public int indexOf(Object rawValue) {
 byte[] bytes = null;
 // Convert hex string to byte[].
-if (rawValue instanceof String) {
+if (rawValue instanceof byte[]) {
+  bytes = (byte[]) rawValue;
+} else if (rawValue instanceof String) {
   try {
 bytes = Hex.decodeHex(((String) rawValue).toCharArray());
   } catch (DecoderException e) {
@@ -67,7 +68,6 @@ public class BytesOffHeapMutableDictionary extends BaseOffHeapMutableDictionary
   }
 } else {
   assert rawValue instanceof byte[];
-  bytes = (byte[]) rawValue;
 }
 return getDictId(new ByteArray(bytes), bytes);
   }
@@ -113,8 +113,7 @@ public class BytesOffHeapMutableDictionary extends BaseOffHeapMutableDictionary
   }
 
   @Override
-  public boolean inRange(@Nonnull String lower, @Nonnull String upper, int dictIdToCompare,
-      boolean includeLower,
+  public boolean inRange(@Nonnull String lower, @Nonnull String upper, int dictIdToCompare, boolean includeLower,
       boolean includeUpper) {
     throw new UnsupportedOperationException("In-range not supported for Bytes data type.");
   }
diff --git a/pinot-core/src/main/java/org/apache/pinot/core/realtime/impl/dictionary/BytesOnHeapMutableDictionary.java b/pinot-core/src/main/java/org/apache/pinot/core/realtime/impl/dictionary/BytesOnHeapMutableDictionary.java
index 91f51b2..37236c3 100644
--- a/pinot-core/src/main/java/org/apache/pinot/core/realtime/impl/dictionary/BytesOnHeapMutableDictionary.java
+++ b/pinot-core/src/main/java/org/apache/pinot/core/realtime/impl/dictionary/BytesOnHeapMutableDictionary.java
@@ -38,7 +38,9 @@ public class BytesOnHeapMutableDictionary extends BaseOnHeapMutableDictionary {
   public int indexOf(Object rawValue) {
 byte[] bytes = null;
 // Convert hex string to byte[].
-if (rawValue instanceof String) {
+if (rawValue instanceof byte[]) {
+  bytes = (byte[]) rawValue;
+} else if (rawValue instanceof String) {
   try {
 bytes = Hex.decodeHex(((String) rawValue).toCharArray());
   } catch (DecoderException e) {
@@ -46,7 +48,6 @@ public class BytesOnHeapMutableDictionary extends BaseOnHeapMutableDictionary {
   }
 } else {
   assert rawValue instanceof byte[];
-  bytes = (byte[]) rawValue;
 }
 return getDictId(new ByteArray(bytes));
   }
@@ -81,8 +82,7 @@ public class BytesOnHeapMutableDictionary extends BaseOnHeapMutableDictionary {
   }
 
   @Override
-  public boolean inRange(@Nonnull String lower, @Nonnull String upper, int dictIdToCompare,
-      boolean includeLower,
+  public boolean inRange(@Nonnull String lower, @Nonnull String upper, int dictIdToCompare, boolean includeLower,
       boolean includeUpper) {
     throw new UnsupportedOperationException("In-range not supported for Bytes data type.");
   }
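
The refactor in this commit is a small design point worth noting: the byte[] check moves to the front of the instanceof chain because raw bytes are the common case (every ingested value passes through here), while the hex-String branch only serves literals coming from queries; the assert left in the final else branch merely documents that no third type is expected. Checking the hot-path type first keeps the per-value overhead minimal.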


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch master updated: Use hex string as the representation of byte array for queries (#4041)

2019-04-01 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git


The following commit(s) were added to refs/heads/master by this push:
 new 202009f  Use hex string as the representation of byte array for queries (#4041)
202009f is described below

commit 202009f973d64670f2a480c2546e4a7bb8f06e8a
Author: Xiang Fu 
AuthorDate: Mon Apr 1 14:56:04 2019 -0700

Use hex string as the representation of byte array for queries (#4041)

* Use hex string as the representation of byte array for queries

* Adding test for indexing hexstring

* Address comments
---
 .../dictionary/BytesOffHeapMutableDictionary.java  | 33 +---
 .../dictionary/BytesOnHeapMutableDictionary.java   | 35 +++---
 .../impl/dictionary/MutableDictionaryTest.java |  4 ++-
 3 files changed, 63 insertions(+), 9 deletions(-)

diff --git a/pinot-core/src/main/java/org/apache/pinot/core/realtime/impl/dictionary/BytesOffHeapMutableDictionary.java b/pinot-core/src/main/java/org/apache/pinot/core/realtime/impl/dictionary/BytesOffHeapMutableDictionary.java
index 816e0de..5220749 100644
--- a/pinot-core/src/main/java/org/apache/pinot/core/realtime/impl/dictionary/BytesOffHeapMutableDictionary.java
+++ b/pinot-core/src/main/java/org/apache/pinot/core/realtime/impl/dictionary/BytesOffHeapMutableDictionary.java
@@ -21,6 +21,9 @@ package org.apache.pinot.core.realtime.impl.dictionary;
 import java.io.IOException;
 import java.util.Arrays;
 import javax.annotation.Nonnull;
+import org.apache.commons.codec.DecoderException;
+import org.apache.commons.codec.binary.Hex;
+import org.apache.pinot.common.Utils;
 import org.apache.pinot.common.utils.primitive.ByteArray;
 import org.apache.pinot.core.io.readerwriter.PinotDataBufferMemoryManager;
 import org.apache.pinot.core.io.writer.impl.MutableOffHeapByteArrayStore;
@@ -30,6 +33,7 @@ import org.apache.pinot.core.io.writer.impl.MutableOffHeapByteArrayStore;
  * OffHeap mutable dictionary for Bytes data type.
  */
 public class BytesOffHeapMutableDictionary extends BaseOffHeapMutableDictionary {
+
   private final MutableOffHeapByteArrayStore _byteStore;
 
   private ByteArray _min = null;
@@ -52,8 +56,19 @@ public class BytesOffHeapMutableDictionary extends BaseOffHeapMutableDictionary
 
   @Override
   public int indexOf(Object rawValue) {
-assert rawValue instanceof byte[];
-byte[] bytes = (byte[]) rawValue;
+byte[] bytes = null;
+// Convert hex string to byte[].
+if (rawValue instanceof byte[]) {
+  bytes = (byte[]) rawValue;
+} else if (rawValue instanceof String) {
+  try {
+bytes = Hex.decodeHex(((String) rawValue).toCharArray());
+  } catch (DecoderException e) {
+Utils.rethrowException(e);
+  }
+} else {
+  assert rawValue instanceof byte[];
+}
 return getDictId(new ByteArray(bytes), bytes);
   }
 
@@ -80,8 +95,18 @@ public class BytesOffHeapMutableDictionary extends BaseOffHeapMutableDictionary
 
   @Override
   public void index(@Nonnull Object rawValue) {
-assert rawValue instanceof byte[];
-byte[] bytes = (byte[]) rawValue;
+byte[] bytes = null;
+// Convert hex string to byte[].
+if (rawValue instanceof String) {
+  try {
+bytes = Hex.decodeHex(((String) rawValue).toCharArray());
+  } catch (DecoderException e) {
+Utils.rethrowException(e);
+  }
+} else {
+  assert rawValue instanceof byte[];
+  bytes = (byte[]) rawValue;
+}
 ByteArray byteArray = new ByteArray(bytes);
 indexValue(byteArray, bytes);
 updateMinMax(byteArray);
diff --git a/pinot-core/src/main/java/org/apache/pinot/core/realtime/impl/dictionary/BytesOnHeapMutableDictionary.java b/pinot-core/src/main/java/org/apache/pinot/core/realtime/impl/dictionary/BytesOnHeapMutableDictionary.java
index df36098..37236c3 100644
--- a/pinot-core/src/main/java/org/apache/pinot/core/realtime/impl/dictionary/BytesOnHeapMutableDictionary.java
+++ b/pinot-core/src/main/java/org/apache/pinot/core/realtime/impl/dictionary/BytesOnHeapMutableDictionary.java
@@ -20,6 +20,9 @@ package org.apache.pinot.core.realtime.impl.dictionary;
 
 import java.util.Arrays;
 import javax.annotation.Nonnull;
+import org.apache.commons.codec.DecoderException;
+import org.apache.commons.codec.binary.Hex;
+import org.apache.pinot.common.Utils;
 import org.apache.pinot.common.utils.primitive.ByteArray;
 
 
@@ -27,13 +30,26 @@ import org.apache.pinot.common.utils.primitive.ByteArray;
  * OnHeap mutable dictionary of Bytes type.
  */
 public class BytesOnHeapMutableDictionary extends BaseOnHeapMutableDictionary {
+
   private ByteArray _min = null;
   private ByteArray _max = null;
 
   @Override
   public int indexOf(Object rawValue) {
-assert rawValue instanceof byte[];
-return getDictId(new ByteArray((byte[]) rawValue));
+byte[] bytes
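
Taken together, the merged change means a BYTES column can now be addressed directly in a query with a hex-string literal, since predicate evaluation funnels the literal through the dictionary's indexOf(). A hypothetical example (table and column names invented): SELECT COUNT(*) FROM myTable WHERE byteCol = 'deadbeef'.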

[incubator-pinot] branch hex_string_for_byte_array deleted (was 6f34c39)

2019-04-01 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch hex_string_for_byte_array
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 was 6f34c39  Address comments

The revisions that were on this branch are still contained in
other references; therefore, this change does not discard any commits
from the repository.


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] 04/06: Wiring up end to end to support indexing nested fields on complex objects

2019-03-18 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch nested-object-indexing-1
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git

commit 224299deccf308ddfdab2e59556d413944596329
Author: kishore gopalakrishna 
AuthorDate: Thu Jan 24 19:53:29 2019 -0800

Wiring up end to end to support indexing nested fields on complex objects
---
 .../org/apache/pinot/common/data/FieldSpec.java|   3 +
 .../pinot/common/data/objects/TextObject.java  |   3 +-
 pinot-core/pom.xml |  19 ++-
 .../immutable/ImmutableSegmentLoader.java  |   4 +-
 .../io/reader/impl/v1/SortedIndexReaderImpl.java   |   6 +
 .../core/operator/filter/FilterOperatorUtils.java  |   8 +-
 .../filter/IndexBasedMatchesFilterOperator.java|  86 +++
 .../filter/ScanBasedMatchesFilterOperator.java |  33 ++--
 .../MatchesPredicateEvaluatorFactory.java  |  21 +--
 .../invertedindex/RealtimeInvertedIndexReader.java |   8 +-
 .../core/segment/creator/InvertedIndexCreator.java |   8 +
 .../core/segment/creator/impl/V1Constants.java |   1 +
 .../creator/impl/inv/LuceneIndexCreator.java   | 127 +++
 .../inv/OffHeapBitmapInvertedIndexCreator.java |   6 +
 .../impl/inv/OnHeapBitmapInvertedIndexCreator.java |   6 +
 .../index/column/PhysicalColumnIndexContainer.java |  22 ++-
 .../index/data/source/ColumnDataSource.java|   2 +-
 .../loader/invertedindex/InvertedIndexHandler.java | 106 -
 .../index/readers/BitmapInvertedIndexReader.java   |   7 +
 .../segment/index/readers/InvertedIndexReader.java |   8 +
 .../index/readers/LuceneInvertedIndexReader.java   | 157 +++
 .../virtualcolumn/DocIdVirtualColumnProvider.java  |   7 +-
 .../SingleStringVirtualColumnProvider.java |   7 +-
 .../tests/LuceneIndexClusterIntegrationTest.java   | 172 +
 pinot-perf/pom.xml |  35 -
 .../org/apache/pinot/perf/LuceneBenchmark.java |  80 ++
 .../apache/pinot/tools/perf/ZookeeperLauncher.java |   2 +-
 27 files changed, 887 insertions(+), 57 deletions(-)

diff --git a/pinot-common/src/main/java/org/apache/pinot/common/data/FieldSpec.java b/pinot-common/src/main/java/org/apache/pinot/common/data/FieldSpec.java
index 080f0e7..ad5a5a3 100644
--- a/pinot-common/src/main/java/org/apache/pinot/common/data/FieldSpec.java
+++ b/pinot-common/src/main/java/org/apache/pinot/common/data/FieldSpec.java
@@ -335,6 +335,9 @@ public abstract class FieldSpec implements Comparable<FieldSpec>, ConfigNodeLife
   case STRING:
 jsonSchema.set("type", convertStringsToJsonArray("null", "string"));
 return jsonSchema;
+  case BYTES:
+jsonSchema.set("type", convertStringsToJsonArray("null", "bytes"));
+return jsonSchema;
   default:
 throw new UnsupportedOperationException();
 }
diff --git a/pinot-common/src/main/java/org/apache/pinot/common/data/objects/TextObject.java b/pinot-common/src/main/java/org/apache/pinot/common/data/objects/TextObject.java
index cf5f27e..c171c40 100644
--- a/pinot-common/src/main/java/org/apache/pinot/common/data/objects/TextObject.java
+++ b/pinot-common/src/main/java/org/apache/pinot/common/data/objects/TextObject.java
@@ -27,7 +27,8 @@ import com.google.common.collect.Lists;
 public class TextObject implements PinotObject {
 
   byte[] _bytes;
-  private static List<String> _FIELDS = Lists.newArrayList("Content");
+  public static String DEFAULT_FIELD = "Content";
+  private static List<String> _FIELDS = Lists.newArrayList(DEFAULT_FIELD);
 
   @Override
   public void init(byte[] bytes) {
diff --git a/pinot-core/pom.xml b/pinot-core/pom.xml
index 8f4efe2..4c8215a 100644
--- a/pinot-core/pom.xml
+++ b/pinot-core/pom.xml
@@ -196,7 +196,24 @@
 
   
 
-
+
+
+    <dependency>
+      <groupId>org.apache.lucene</groupId>
+      <artifactId>lucene-core</artifactId>
+      <version>7.6.0</version>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.lucene</groupId>
+      <artifactId>lucene-queryparser</artifactId>
+      <version>7.6.0</version>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.lucene</groupId>
+      <artifactId>lucene-analyzers-common</artifactId>
+      <version>7.6.0</version>
+    </dependency>
+
 
     <dependency>
       <groupId>org.mockito</groupId>
diff --git a/pinot-core/src/main/java/org/apache/pinot/core/indexsegment/immutable/ImmutableSegmentLoader.java b/pinot-core/src/main/java/org/apache/pinot/core/indexsegment/immutable/ImmutableSegmentLoader.java
index 2ddb80c..3d7ac31 100644
--- a/pinot-core/src/main/java/org/apache/pinot/core/indexsegment/immutable/ImmutableSegmentLoader.java
+++ b/pinot-core/src/main/java/org/apache/pinot/core/indexsegment/immutable/ImmutableSegmentLoader.java
@@ -118,8 +118,8 @@ public class ImmutableSegmentLoader {
 SegmentDirectory.Reader segmentReader = segmentDirectory.createReader();
 Map<String, ColumnIndexContainer> indexContainerMap = new HashMap<>();
 for (Map.Entry<String, ColumnMetadata> entry : segmentMetadata.getColumnMetadataMap().entrySet()) {
-  indexContainerMap
-

[incubator-pinot] 06/06: Fixing license header

2019-03-18 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch nested-object-indexing-1
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git

commit bdaa9c12938254819c09c05b7a589a24fd8c860d
Author: Xiang Fu 
AuthorDate: Mon Mar 18 22:27:55 2019 -0700

Fixing license header
---
 ...VarByteSingleColumnSingleValueReaderWriterTest.java | 18 ++
 1 file changed, 18 insertions(+)

diff --git a/pinot-core/src/test/java/org/apache/pinot/core/io/writer/impl/VarByteSingleColumnSingleValueReaderWriterTest.java b/pinot-core/src/test/java/org/apache/pinot/core/io/writer/impl/VarByteSingleColumnSingleValueReaderWriterTest.java
index 0fc33db..2f1f386 100644
--- a/pinot-core/src/test/java/org/apache/pinot/core/io/writer/impl/VarByteSingleColumnSingleValueReaderWriterTest.java
+++ b/pinot-core/src/test/java/org/apache/pinot/core/io/writer/impl/VarByteSingleColumnSingleValueReaderWriterTest.java
@@ -1,3 +1,21 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
 package org.apache.pinot.core.io.writer.impl;
 
 import org.apache.pinot.core.io.readerwriter.PinotDataBufferMemoryManager;


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] 05/06: Adding support for bytes type in realtime + nested object indexing

2019-03-18 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch nested-object-indexing-1
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git

commit a230c647551bf5de33646f58edeb690be2e54186
Author: kishore gopalakrishna 
AuthorDate: Tue Feb 19 09:29:40 2019 -0800

Adding support for bytes type in realtime + nested object indexing
---
 .../org/apache/pinot/common/data/FieldSpec.java|   1 +
 .../org/apache/pinot/common/data/PinotObject.java  |   2 +-
 .../pinot/common/data/PinotObjectFactory.java  |  67 +
 .../org/apache/pinot/core/common/Predicate.java|  28 +---
 .../realtime/LLRealtimeSegmentDataManager.java |   3 +-
 .../indexsegment/mutable/MutableSegmentImpl.java   | 149 -
 ...VarByteSingleColumnSingleValueReaderWriter.java | 143 
 .../core/realtime/impl/RealtimeSegmentConfig.java  |  18 ++-
 .../invertedindex/RealtimeInvertedIndexReader.java |  41 +-
 .../creator/impl/inv/LuceneIndexCreator.java   |  14 +-
 .../loader/invertedindex/InvertedIndexHandler.java |  29 +---
 .../index/readers/LuceneInvertedIndexReader.java   |   9 +-
 ...yteSingleColumnSingleValueReaderWriterTest.java |  31 +
 13 files changed, 433 insertions(+), 102 deletions(-)

diff --git a/pinot-common/src/main/java/org/apache/pinot/common/data/FieldSpec.java b/pinot-common/src/main/java/org/apache/pinot/common/data/FieldSpec.java
index ad5a5a3..cb43a4c 100644
--- a/pinot-common/src/main/java/org/apache/pinot/common/data/FieldSpec.java
+++ b/pinot-common/src/main/java/org/apache/pinot/common/data/FieldSpec.java
@@ -457,6 +457,7 @@ public abstract class FieldSpec implements Comparable<FieldSpec>, ConfigNodeLife
 case DOUBLE:
   return Double.BYTES;
 case BYTES:
+case STRING:
   // TODO: Metric size is only used for Star-tree generation, which is not supported yet.
   return MetricFieldSpec.UNDEFINED_METRIC_SIZE;
 default:
diff --git a/pinot-common/src/main/java/org/apache/pinot/common/data/PinotObject.java b/pinot-common/src/main/java/org/apache/pinot/common/data/PinotObject.java
index 3f1ca33..9b68f73 100644
--- a/pinot-common/src/main/java/org/apache/pinot/common/data/PinotObject.java
+++ b/pinot-common/src/main/java/org/apache/pinot/common/data/PinotObject.java
@@ -50,7 +50,7 @@ public interface PinotObject {
   List<String> getPropertyNames();
 
   /**
-   * @param fieldName
+   * @param propertyName
    * @return the value of the property, it can be a single object or a list of objects.
*/
   Object getProperty(String propertyName);
diff --git a/pinot-common/src/main/java/org/apache/pinot/common/data/PinotObjectFactory.java b/pinot-common/src/main/java/org/apache/pinot/common/data/PinotObjectFactory.java
new file mode 100644
index 0000000..b15dc58
--- /dev/null
+++ b/pinot-common/src/main/java/org/apache/pinot/common/data/PinotObjectFactory.java
@@ -0,0 +1,67 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.common.data;
+
+import java.util.List;
+import org.apache.pinot.common.data.objects.JSONObject;
+import org.apache.pinot.common.data.objects.MapObject;
+import org.apache.pinot.common.data.objects.TextObject;
+import org.apache.pinot.common.segment.fetcher.HdfsSegmentFetcher;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+/**
+ * Factory class that create PinotObject from bytes
+ */
+public class PinotObjectFactory {
+
+  private static final Logger LOGGER = LoggerFactory.getLogger(PinotObjectFactory.class);
+
+  public static PinotObject create(FieldSpec spec, byte[] buf) {
+return create(spec.getObjectType(), buf);
+  }
+
+  public static PinotObject create(String objectType, byte[] buf) {
+
+Class<? extends PinotObject> pinotObjectClazz;
+PinotObject pinotObject = null;
+try {
+  switch (objectType.toUpperCase()) {
+case "MAP":
+  pinotObjectClazz = MapObject.class;
+  break;
+case "JSON":
+  pinotObjectClazz = JSONObject.class;
+  break;
+case "TEXT":
+  pinotObjectClazz = TextObject.class;
+  break;

[incubator-pinot] 03/06: Adding support for Object Type

2019-03-18 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch nested-object-indexing-1
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git

commit 74f8e6eede16c3a8eba97d449feeeb70d486b76b
Author: kishore gopalakrishna 
AuthorDate: Sun Jan 20 17:49:05 2019 -0800

Adding support for Object Type
---
 .../org/apache/pinot/common/data/FieldSpec.java| 23 +-
 .../org/apache/pinot/common/data/PinotObject.java  | 58 +++
 .../pinot/common/data/objects/JSONObject.java  | 83 ++
 .../pinot/common/data/objects/MapObject.java   | 66 +
 .../pinot/common/data/objects/TextObject.java  | 53 ++
 .../creator/impl/SegmentColumnarIndexCreator.java  |  1 +
 .../core/segment/creator/impl/V1Constants.java |  1 +
 .../pinot/core/segment/index/ColumnMetadata.java   | 21 --
 8 files changed, 301 insertions(+), 5 deletions(-)

diff --git a/pinot-common/src/main/java/org/apache/pinot/common/data/FieldSpec.java b/pinot-common/src/main/java/org/apache/pinot/common/data/FieldSpec.java
index 30be748..080f0e7 100644
--- a/pinot-common/src/main/java/org/apache/pinot/common/data/FieldSpec.java
+++ b/pinot-common/src/main/java/org/apache/pinot/common/data/FieldSpec.java
@@ -86,6 +86,10 @@ public abstract class FieldSpec implements Comparable<FieldSpec>, ConfigNodeLife
 
   @ConfigKey("virtualColumnProvider")
   protected String _virtualColumnProvider;
+  
+  // Complex type that can be constructed from raw bytes stored e.g. map, json, text
+  @ConfigKey("objectType")
+  protected String _objectType;
 
   // Default constructor required by JSON de-serializer. DO NOT REMOVE.
   public FieldSpec() {
@@ -98,15 +102,21 @@ public abstract class FieldSpec implements Comparable<FieldSpec>, ConfigNodeLife
   public FieldSpec(String name, DataType dataType, boolean isSingleValueField, @Nullable Object defaultNullValue) {
     this(name, dataType, isSingleValueField, DEFAULT_MAX_LENGTH, defaultNullValue);
   }
-
+  
   public FieldSpec(String name, DataType dataType, boolean isSingleValueField, int maxLength,
       @Nullable Object defaultNullValue) {
+    this(name, dataType, isSingleValueField, maxLength, defaultNullValue, null);
+  }
+  
+  public FieldSpec(String name, DataType dataType, boolean isSingleValueField, int maxLength,
+      @Nullable Object defaultNullValue, @Nullable String objectType) {
 _name = name;
 _dataType = dataType.getStoredType();
 _isSingleValueField = isSingleValueField;
 _maxLength = maxLength;
 setDefaultNullValue(defaultNullValue);
   }
+  
 
   public abstract FieldType getFieldType();
 
@@ -183,6 +193,16 @@ public abstract class FieldSpec implements Comparable<FieldSpec>, ConfigNodeLife
   _defaultNullValue = getDefaultNullValue(getFieldType(), _dataType, _stringDefaultNullValue);
 }
   }
+  
+  
+  
+  public String getObjectType() {
+return _objectType;
+  }
+
+  public void setObjectType(String objectType) {
+_objectType = objectType;
+  }
 
   private static Object getDefaultNullValue(FieldType fieldType, DataType dataType,
   @Nullable String stringDefaultNullValue) {
@@ -353,6 +373,7 @@ public abstract class FieldSpec implements Comparable<FieldSpec>, ConfigNodeLife
 result = EqualityUtils.hashCodeOf(result, _isSingleValueField);
 result = EqualityUtils.hashCodeOf(result, getStringValue(_defaultNullValue));
 result = EqualityUtils.hashCodeOf(result, _maxLength);
+result = EqualityUtils.hashCodeOf(result, _objectType);
 return result;
   }
 
diff --git a/pinot-common/src/main/java/org/apache/pinot/common/data/PinotObject.java b/pinot-common/src/main/java/org/apache/pinot/common/data/PinotObject.java
new file mode 100644
index 0000000..3f1ca33
--- /dev/null
+++ b/pinot-common/src/main/java/org/apache/pinot/common/data/PinotObject.java
@@ -0,0 +1,58 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.common.data;
+
+import java.util.List;
+
+/**
+ * Common interface for complex Object types such as HyperLogLog, Map, JSON 
etc.
+ * Flow to convert byte[] to PinotObject
+ * - compute the objec

[incubator-pinot] 01/06: Adding support for MATCHES Predicate

2019-03-18 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch nested-object-indexing-1
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git

commit a1d2d690221686a8073dfbadd015d5eecea0a6a4
Author: kishore gopalakrishna 
AuthorDate: Fri Jan 18 16:05:45 2019 -0800

Adding support for MATCHES Predicate
---
 .../antlr4/org/apache/pinot/pql/parsers/PQL2.g4|   5 +
 .../apache/pinot/common/config/Deserializer.java   |   5 +-
 .../pinot/common/request/AggregationInfo.java  |  73 +--
 .../apache/pinot/common/request/BrokerRequest.java | 131 +--
 .../pinot/common/request/FilterOperator.java   |  32 ++---
 .../apache/pinot/common/request/FilterQuery.java   |  87 ++---
 .../pinot/common/request/FilterQueryMap.java   |  64 +
 .../org/apache/pinot/common/request/GroupBy.java   | 137 +--
 .../pinot/common/request/HavingFilterQuery.java|  87 ++---
 .../pinot/common/request/HavingFilterQueryMap.java |  62 -
 .../pinot/common/request/InstanceRequest.java  | 111 
 .../apache/pinot/common/request/QuerySource.java   |  60 -
 .../org/apache/pinot/common/request/QueryType.java |  77 +--
 .../org/apache/pinot/common/request/Selection.java | 145 ++---
 .../apache/pinot/common/request/SelectionSort.java |  65 +
 .../pinot/common/response/ProcessingException.java |  66 +-
 .../apache/pinot/pql/parsers/Pql2AstListener.java  |  11 ++
 .../parsers/pql2/ast/MatchesPredicateAstNode.java  |  62 +
 .../org/apache/pinot/core/common/Predicate.java|  35 -
 .../core/common/predicate/MatchesPredicate.java|  50 +++
 .../MatchesPredicateEvaluatorFactory.java  |  69 ++
 .../predicate/PredicateEvaluatorProvider.java  |   6 +-
 22 files changed, 787 insertions(+), 653 deletions(-)

diff --git a/pinot-common/src/main/antlr4/org/apache/pinot/pql/parsers/PQL2.g4 b/pinot-common/src/main/antlr4/org/apache/pinot/pql/parsers/PQL2.g4
index 5086182..17652ca 100644
--- a/pinot-common/src/main/antlr4/org/apache/pinot/pql/parsers/PQL2.g4
+++ b/pinot-common/src/main/antlr4/org/apache/pinot/pql/parsers/PQL2.g4
@@ -77,6 +77,7 @@ predicate:
   | betweenClause # BetweenPredicate
   | isClause  # IsPredicate
   | regexpLikeClause  # RegexpLikePredicate
+  | matchesClause # MatchesPredicate  
   ;
 
 inClause:
@@ -95,6 +96,9 @@ betweenClause:
 regexpLikeClause:
   REGEXP_LIKE '(' expression ',' literal ')';
 
+matchesClause:
+  expression MATCHES '(' literal ',' literal ')';
+
 booleanOperator: OR | AND;
 
 groupByClause: GROUP BY groupByList;
@@ -131,6 +135,7 @@ LIMIT: L I M I T;
 NOT : N O T;
 OR: O R;
 REGEXP_LIKE: R E G E X P '_' L I K E;
+MATCHES: M A T C H E S;
 ORDER: O R D E R;
 SELECT: S E L E C T;
 TOP: T O P;
diff --git a/pinot-common/src/main/java/org/apache/pinot/common/config/Deserializer.java b/pinot-common/src/main/java/org/apache/pinot/common/config/Deserializer.java
index 93c4d84..63646c4 100644
--- a/pinot-common/src/main/java/org/apache/pinot/common/config/Deserializer.java
+++ b/pinot-common/src/main/java/org/apache/pinot/common/config/Deserializer.java
@@ -404,8 +404,9 @@ public class Deserializer {
 config = config.resolve();
 
 try {
-      return deserialize(clazz, io.vavr.collection.HashSet.ofAll(config.entrySet())
-          .toMap(entry -> Tuple.of(entry.getKey(), entry.getValue().unwrapped())), "");
+      Map map = io.vavr.collection.HashSet.ofAll(config.entrySet())
+          .toMap(entry -> Tuple.of(entry.getKey(), entry.getValue().unwrapped()));
+      return deserialize(clazz, map, "");
 } catch (Exception e) {
   Utils.rethrowException(e);
   return null;
diff --git a/pinot-common/src/main/java/org/apache/pinot/common/request/AggregationInfo.java b/pinot-common/src/main/java/org/apache/pinot/common/request/AggregationInfo.java
index aa0b3db..ab129ff 100644
--- a/pinot-common/src/main/java/org/apache/pinot/common/request/AggregationInfo.java
+++ b/pinot-common/src/main/java/org/apache/pinot/common/request/AggregationInfo.java
@@ -1,22 +1,4 @@
 /**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *   http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing,
- * software distributed under the License is distributed on an
- * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITI
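
From the grammar rule added above (matchesClause: expression MATCHES '(' literal ',' literal ')'), a query against an indexed text field would plausibly look like the following; this is a hypothetical example inferred from the grammar and the MatchesPredicateAstNode, not taken from Pinot documentation:

    SELECT * FROM myTable WHERE Content MATCHES ('lucene AND realtime', '')

where the first literal carries the search expression handed to the underlying (Lucene-backed) index and the second is an options literal the evaluator interprets.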

[incubator-pinot] branch nested-object-indexing-1 created (now bdaa9c1)

2019-03-18 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch nested-object-indexing-1
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


  at bdaa9c1  Fixing license header

This branch includes the following new commits:

 new a1d2d69  Adding support for MATCHES Predicate
 new 91257aa  Enhancing PQL to support MATCHES predicate, can be used for searching within text, map, json and other complex objects
 new 74f8e6e  Adding support for Object Type
 new 224299d  Wiring up end to end to support indexing nested fields on complex objects
 new a230c64  Adding support for bytes type in realtime + nested object indexing
 new bdaa9c1  Fixing license header

The 6 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.



-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch nested-object-indexing updated: fixing license header

2019-03-19 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch nested-object-indexing
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git


The following commit(s) were added to refs/heads/nested-object-indexing by this push:
 new c2d822d  fixing license header
c2d822d is described below

commit c2d822d8fd16d0e57d6b0f36c557ea89617f1a60
Author: Xiang Fu 
AuthorDate: Mon Mar 18 23:49:59 2019 -0700

fixing license header
---
 .../pinot/integration/tests/LuceneRealtimeTest.java| 18 ++
 1 file changed, 18 insertions(+)

diff --git a/pinot-integration-tests/src/test/java/org/apache/pinot/integration/tests/LuceneRealtimeTest.java b/pinot-integration-tests/src/test/java/org/apache/pinot/integration/tests/LuceneRealtimeTest.java
index e2418b6..46e53fa 100644
--- a/pinot-integration-tests/src/test/java/org/apache/pinot/integration/tests/LuceneRealtimeTest.java
+++ b/pinot-integration-tests/src/test/java/org/apache/pinot/integration/tests/LuceneRealtimeTest.java
@@ -1,3 +1,21 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
 package org.apache.pinot.integration.tests;
 
 import java.io.File;


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch nested-object-indexing updated: Fixing license header

2019-03-18 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch nested-object-indexing
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git


The following commit(s) were added to refs/heads/nested-object-indexing by this 
push:
 new 14b9397  Fixing license header
14b9397 is described below

commit 14b9397c06e4a56bca503d5902a7fffa087dfb56
Author: Xiang Fu 
AuthorDate: Mon Mar 18 22:27:55 2019 -0700

Fixing license header
---
 ...VarByteSingleColumnSingleValueReaderWriterTest.java | 18 ++
 1 file changed, 18 insertions(+)

diff --git 
a/pinot-core/src/test/java/org/apache/pinot/core/io/writer/impl/VarByteSingleColumnSingleValueReaderWriterTest.java
 
b/pinot-core/src/test/java/org/apache/pinot/core/io/writer/impl/VarByteSingleColumnSingleValueReaderWriterTest.java
index 0fc33db..2f1f386 100644
--- 
a/pinot-core/src/test/java/org/apache/pinot/core/io/writer/impl/VarByteSingleColumnSingleValueReaderWriterTest.java
+++ 
b/pinot-core/src/test/java/org/apache/pinot/core/io/writer/impl/VarByteSingleColumnSingleValueReaderWriterTest.java
@@ -1,3 +1,21 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
 package org.apache.pinot.core.io.writer.impl;
 
 import org.apache.pinot.core.io.readerwriter.PinotDataBufferMemoryManager;


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch master updated: Fixing type casting issue for BYTES type values during realtime segment persistence (#3992)

2019-03-20 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git


The following commit(s) were added to refs/heads/master by this push:
 new 6eb8e79  Fixing type casting issue for BYTES type values during 
realtime segment persistence (#3992)
6eb8e79 is described below

commit 6eb8e7976499b43a5588e26c194e09d48ebca232
Author: Xiang Fu 
AuthorDate: Wed Mar 20 13:28:37 2019 -0700

Fixing type casting issue for BYTES type values during realtime segment 
persistence (#3992)
---
 .../converter/stats/RealtimeColumnStatistics.java| 20 ++--
 1 file changed, 14 insertions(+), 6 deletions(-)

diff --git 
a/pinot-core/src/main/java/org/apache/pinot/core/realtime/converter/stats/RealtimeColumnStatistics.java
 
b/pinot-core/src/main/java/org/apache/pinot/core/realtime/converter/stats/RealtimeColumnStatistics.java
index 714a0a9..860d569 100644
--- 
a/pinot-core/src/main/java/org/apache/pinot/core/realtime/converter/stats/RealtimeColumnStatistics.java
+++ 
b/pinot-core/src/main/java/org/apache/pinot/core/realtime/converter/stats/RealtimeColumnStatistics.java
@@ -22,6 +22,7 @@ import java.util.HashSet;
 import java.util.Set;
 import org.apache.pinot.common.config.ColumnPartitionConfig;
 import org.apache.pinot.common.data.FieldSpec;
+import org.apache.pinot.common.utils.primitive.ByteArray;
 import org.apache.pinot.core.common.Block;
 import org.apache.pinot.core.common.BlockMultiValIterator;
 import org.apache.pinot.core.data.partition.PartitionFunction;
@@ -150,17 +151,24 @@ public class RealtimeColumnStatistics implements 
ColumnStatistics {
 
 int docIdIndex = _sortedDocIdIterationOrder != null ? 
_sortedDocIdIterationOrder[0] : 0;
 int dictionaryId = singleValueReader.getInt(docIdIndex);
-Comparable previousValue = (Comparable) 
_dictionaryReader.get(dictionaryId);
+Object previousValue = _dictionaryReader.get(dictionaryId);
 for (int i = 1; i < blockLength; i++) {
   docIdIndex = _sortedDocIdIterationOrder != null ? 
_sortedDocIdIterationOrder[i] : i;
   dictionaryId = singleValueReader.getInt(docIdIndex);
-  Comparable currentValue = (Comparable) 
_dictionaryReader.get(dictionaryId);
+  Object currentValue = _dictionaryReader.get(dictionaryId);
   // If previousValue is greater than currentValue
-  if (0 < previousValue.compareTo(currentValue)) {
-return false;
-  } else {
-previousValue = currentValue;
+  switch (_block.getMetadata().getDataType().getStoredType()) {
+case BYTES:
+  if (0 < ByteArray.compare((byte[]) previousValue, (byte[]) 
currentValue)) {
+return false;
+  }
+  break;
+default:
+  if (0 < ((Comparable) previousValue).compareTo(currentValue)) {
+return false;
+  }
   }
+  previousValue = currentValue;
 }
 
 return true;
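
The stored-type switch above exists because Pinot keeps BYTES values as raw
byte[] arrays, which do not implement Comparable, so the old unconditional cast
to Comparable threw a ClassCastException while checking whether a BYTES column
is sorted. Below is a minimal, self-contained sketch of the same idea; the class
and helper names are illustrative, not Pinot's, and ByteArray.compare in the
diff provides the byte-array comparison in the real code:

  public final class SortedCheckSketch {
    // Unsigned lexicographic comparison of two byte arrays, a generic stand-in
    // for a ByteArray.compare(byte[], byte[]) style helper.
    static int compareBytes(byte[] a, byte[] b) {
      int len = Math.min(a.length, b.length);
      for (int i = 0; i < len; i++) {
        int cmp = (a[i] & 0xFF) - (b[i] & 0xFF);
        if (cmp != 0) {
          return cmp;
        }
      }
      return a.length - b.length;
    }

    // True if the values are in non-decreasing order. byte[] needs the
    // dedicated comparator; other stored types (Integer, Long, String, ...)
    // implement Comparable directly.
    @SuppressWarnings("unchecked")
    static boolean isSorted(Object[] values) {
      for (int i = 1; i < values.length; i++) {
        Object prev = values[i - 1];
        Object curr = values[i];
        int cmp = (prev instanceof byte[])
            ? compareBytes((byte[]) prev, (byte[]) curr)
            : ((Comparable<Object>) prev).compareTo(curr);
        if (cmp > 0) {
          return false;
        }
      }
      return true;
    }

    public static void main(String[] args) {
      System.out.println(isSorted(new Object[]{new byte[]{1}, new byte[]{1, 2}, new byte[]{2}}));  // true
      System.out.println(isSorted(new Object[]{"a", "c", "b"}));  // false
    }
  }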


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] 01/01: Default PinotAdmin Commands to not use System.exit(...).

2019-04-12 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch fixing_admin_cmd_exit_code
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git

commit ec3e8201157e4dcd44ce154d32a35097c90cd32a
Author: Xiang Fu 
AuthorDate: Fri Apr 12 10:43:06 2019 -0700

Default PinotAdmin Commands to not use System.exit(...).
---
 .../src/main/java/org/apache/pinot/tools/admin/PinotAdministrator.java  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/pinot-tools/src/main/java/org/apache/pinot/tools/admin/PinotAdministrator.java
 
b/pinot-tools/src/main/java/org/apache/pinot/tools/admin/PinotAdministrator.java
index f490e2c..df4ab08 100644
--- 
a/pinot-tools/src/main/java/org/apache/pinot/tools/admin/PinotAdministrator.java
+++ 
b/pinot-tools/src/main/java/org/apache/pinot/tools/admin/PinotAdministrator.java
@@ -104,7 +104,7 @@ public class PinotAdministrator {
   public static void main(String[] args) {
 PinotAdministrator pinotAdministrator = new PinotAdministrator();
 pinotAdministrator.execute(args);
-if (!System.getProperties().getProperty("pinot.admin.system.exit", 
"true").equalsIgnoreCase("false")) {
+if (System.getProperties().getProperty("pinot.admin.system.exit", 
"false").equalsIgnoreCase("true")) {
   System.exit(pinotAdministrator.getStatus() ? 0 : 1);
 }
   }
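
The one-line change flips the default behavior: previously the JVM exited
unless pinot.admin.system.exit was explicitly set to "false", whereas now it
exits only when the property is explicitly "true", which keeps the JVM alive
when the admin commands are embedded in another process. A minimal sketch of
the pattern follows; the class and runCommand below are illustrative
stand-ins, not Pinot code, and Boolean.parseBoolean matches "true"
case-insensitively, like the equalsIgnoreCase check in the diff:

  public final class ExitPolicySketch {
    public static void main(String[] args) {
      boolean succeeded = runCommand(args);  // stand-in for pinotAdministrator.execute(args)
      // Opt-in exit: terminate the JVM only when -Dpinot.admin.system.exit=true is set.
      // With the old default ("true"), embedding this tool in another JVM would
      // kill the host process unless the caller remembered to opt out.
      if (Boolean.parseBoolean(System.getProperty("pinot.admin.system.exit", "false"))) {
        System.exit(succeeded ? 0 : 1);
      }
    }

    private static boolean runCommand(String[] args) {
      return args.length > 0;  // placeholder logic for the sketch
    }
  }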


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch fixing_admin_cmd_exit_code created (now ec3e820)

2019-04-12 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch fixing_admin_cmd_exit_code
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


  at ec3e820  Default PinotAdmin Commands to not use System.exit(...).

This branch includes the following new commits:

 new ec3e820  Default PinotAdmin Commands to not use System.exit(...).

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.



-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch fixing_admin_cmd_exit_code updated: Adding comments for pinot-admin cmd

2019-04-15 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch fixing_admin_cmd_exit_code
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git


The following commit(s) were added to refs/heads/fixing_admin_cmd_exit_code by 
this push:
 new aa3ca47  Adding comments for pinot-admin cmd
aa3ca47 is described below

commit aa3ca47492ff5041c3af29303ba9319cec2133d5
Author: Xiang Fu 
AuthorDate: Mon Apr 15 14:05:23 2019 -0700

Adding comments for pinot-admin cmd
---
 .../java/org/apache/pinot/tools/admin/PinotAdministrator.java  | 10 ++
 1 file changed, 10 insertions(+)

diff --git 
a/pinot-tools/src/main/java/org/apache/pinot/tools/admin/PinotAdministrator.java
 
b/pinot-tools/src/main/java/org/apache/pinot/tools/admin/PinotAdministrator.java
index df4ab08..f56b96b 100644
--- 
a/pinot-tools/src/main/java/org/apache/pinot/tools/admin/PinotAdministrator.java
+++ 
b/pinot-tools/src/main/java/org/apache/pinot/tools/admin/PinotAdministrator.java
@@ -63,6 +63,16 @@ import org.slf4j.LoggerFactory;
 /**
  * Class to implement Pinot Administrator, that provides the following 
commands:
  *
+ * System property: `pinot.admin.system.exit`(default to false) is used to 
decide if System.exit(...) will be called with exit code.
+ *
+ * Sample Usage in Commandline:
+ *  JAVA_OPTS="-Xms4G -Xmx4G -Dpinot.admin.system.exit=true" \
+ *  bin/pinot-admin.sh AddSchema \
+ *-schemaFile /my/path/to/schema/schema.json \
+ *-controllerHost localhost \
+ *-controllerPort 9000 \
+ *-exec
+ *
  */
 public class PinotAdministrator {
   private static final Logger LOGGER = 
LoggerFactory.getLogger(PinotAdministrator.class);


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch fixing_admin_cmd_exit_code deleted (was aa3ca47)

2019-04-15 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch fixing_admin_cmd_exit_code
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 was aa3ca47  Adding comments for pinot-admin cmd

The revisions that were on this branch are still contained in
other references; therefore, this change does not discard any commits
from the repository.


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch master updated: Default PinotAdmin Commands to not use System.exit(...). (#4110)

2019-04-15 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git


The following commit(s) were added to refs/heads/master by this push:
 new 05bd234  Default PinotAdmin Commands to not use System.exit(...). 
(#4110)
05bd234 is described below

commit 05bd2341576ea03d01ba97d55573339097c8
Author: Xiang Fu 
AuthorDate: Mon Apr 15 16:23:43 2019 -0700

Default PinotAdmin Commands to not use System.exit(...). (#4110)

* Default PinotAdmin Commands to not use System.exit(...).

* Adding comments for pinot-admin cmd
---
 .../org/apache/pinot/tools/admin/PinotAdministrator.java | 12 +++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git 
a/pinot-tools/src/main/java/org/apache/pinot/tools/admin/PinotAdministrator.java
 
b/pinot-tools/src/main/java/org/apache/pinot/tools/admin/PinotAdministrator.java
index f490e2c..f56b96b 100644
--- 
a/pinot-tools/src/main/java/org/apache/pinot/tools/admin/PinotAdministrator.java
+++ 
b/pinot-tools/src/main/java/org/apache/pinot/tools/admin/PinotAdministrator.java
@@ -63,6 +63,16 @@ import org.slf4j.LoggerFactory;
 /**
  * Class to implement Pinot Administrator, that provides the following 
commands:
  *
+ * System property: `pinot.admin.system.exit`(default to false) is used to 
decide if System.exit(...) will be called with exit code.
+ *
+ * Sample Usage in Commandline:
+ *  JAVA_OPTS="-Xms4G -Xmx4G -Dpinot.admin.system.exit=true" \
+ *  bin/pinot-admin.sh AddSchema \
+ *-schemaFile /my/path/to/schema/schema.json \
+ *-controllerHost localhost \
+ *-controllerPort 9000 \
+ *-exec
+ *
  */
 public class PinotAdministrator {
   private static final Logger LOGGER = 
LoggerFactory.getLogger(PinotAdministrator.class);
@@ -104,7 +114,7 @@ public class PinotAdministrator {
   public static void main(String[] args) {
 PinotAdministrator pinotAdministrator = new PinotAdministrator();
 pinotAdministrator.execute(args);
-if (!System.getProperties().getProperty("pinot.admin.system.exit", 
"true").equalsIgnoreCase("false")) {
+if (System.getProperties().getProperty("pinot.admin.system.exit", 
"false").equalsIgnoreCase("true")) {
   System.exit(pinotAdministrator.getStatus() ? 0 : 1);
 }
   }


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch cleanup-autogen-files updated (8a9deed -> 7b18740)

2019-05-27 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch cleanup-autogen-files
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


from 8a9deed  Adding trift support in travis
 add 7b18740  Fixing compatibility issue with thrift v0.12.0

No new revisions were added by this update.

Summary of changes:
 pinot-common/pom.xml | 4 
 pom.xml  | 2 +-
 2 files changed, 5 insertions(+), 1 deletion(-)


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch cleanup-autogen-files updated (a21a491 -> 7e214a3)

2019-05-26 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch cleanup-autogen-files
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard a21a491  Adding trift support in travis
 add 7e214a3  Adding trift support in travis

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (a21a491)
\
 N -- N -- N   refs/heads/cleanup-autogen-files (7e214a3)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 .travis.yml | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch cleanup-autogen-files updated (bfc7bde -> 6aba893)

2019-05-26 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch cleanup-autogen-files
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard bfc7bde  Adding trift support in travis
 add 6aba893  Adding trift support in travis

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (bfc7bde)
\
 N -- N -- N   refs/heads/cleanup-autogen-files (6aba893)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 .travis.yml| 2 ++
 .travis_install.sh | 4 ++--
 .travis_test.sh| 4 ++--
 3 files changed, 6 insertions(+), 4 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch cleanup-autogen-files updated (fb0eb08 -> 8726bdc)

2019-05-26 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch cleanup-autogen-files
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


from fb0eb08  Adding trift support in travis
 add 8726bdc  format pom

No new revisions were added by this update.

Summary of changes:
 pinot-common/pom.xml | 44 ++--
 1 file changed, 22 insertions(+), 22 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch cleanup-autogen-files updated (ff26960 -> a21a491)

2019-05-26 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch cleanup-autogen-files
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard ff26960  format pom
 discard fb0eb08  Adding trift support in travis
 add a21a491  Adding trift support in travis

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (ff26960)
\
 N -- N -- N   refs/heads/cleanup-autogen-files (a21a491)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 .travis.yml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch cleanup-autogen-files updated (7e214a3 -> 0ece827)

2019-05-26 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch cleanup-autogen-files
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard 7e214a3  Adding trift support in travis
 add 0ece827  Adding trift support in travis

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (7e214a3)
\
 N -- N -- N   refs/heads/cleanup-autogen-files (0ece827)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 .travis.yml | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch cleanup-autogen-files updated (32f599c -> bfc7bde)

2019-05-26 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch cleanup-autogen-files
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard 32f599c  Adding trift support in travis
 add bfc7bde  Adding trift support in travis

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (32f599c)
\
 N -- N -- N   refs/heads/cleanup-autogen-files (bfc7bde)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 .travis.yml | 7 +--
 1 file changed, 1 insertion(+), 6 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch cleanup-autogen-files updated (0ece827 -> 32f599c)

2019-05-26 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch cleanup-autogen-files
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard 0ece827  Adding trift support in travis
 add 32f599c  Adding trift support in travis

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (0ece827)
\
 N -- N -- N   refs/heads/cleanup-autogen-files (32f599c)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 .travis.yml | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch cleanup-autogen-files updated (6aba893 -> fc64bd7)

2019-05-26 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch cleanup-autogen-files
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard 6aba893  Adding trift support in travis
 add fc64bd7  Adding trift support in travis

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (6aba893)
\
 N -- N -- N   refs/heads/cleanup-autogen-files (fc64bd7)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 .travis.yml | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch cleanup-autogen-files updated (fc64bd7 -> d791dba)

2019-05-26 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch cleanup-autogen-files
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard fc64bd7  Adding trift support in travis
 add d791dba  Adding trift support in travis

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (fc64bd7)
\
 N -- N -- N   refs/heads/cleanup-autogen-files (d791dba)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 .travis.yml | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch cleanup-autogen-files updated (8726bdc -> bd5072b)

2019-05-26 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch cleanup-autogen-files
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard 8726bdc  format pom
 add bd5072b  format pom

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (8726bdc)
\
 N -- N -- N   refs/heads/cleanup-autogen-files (bd5072b)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 .../src/main/java/org/apache/pinot/common/config/Deserializer.java   | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch cleanup-autogen-files updated (bd5072b -> ff26960)

2019-05-26 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch cleanup-autogen-files
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard bd5072b  format pom
 add ff26960  format pom

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (bd5072b)
\
 N -- N -- N   refs/heads/cleanup-autogen-files (ff26960)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 .travis.yml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch cleanup-autogen-files updated (d791dba -> 3ac5948)

2019-05-26 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch cleanup-autogen-files
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard d791dba  Adding trift support in travis
 add 3ac5948  Adding trift support in travis

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (d791dba)
\
 N -- N -- N   refs/heads/cleanup-autogen-files (3ac5948)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 .travis.yml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch cleanup-autogen-files updated (3ac5948 -> 3d5f4f6)

2019-05-26 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch cleanup-autogen-files
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard 3ac5948  Adding trift support in travis
 add 3d5f4f6  Adding trift support in travis

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (3ac5948)
\
 N -- N -- N   refs/heads/cleanup-autogen-files (3d5f4f6)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 .travis.yml | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch cleanup-autogen-files updated (5d8354f -> bbd9208)

2019-05-28 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch cleanup-autogen-files
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


omit 5d8354f  Address comments. Make default no profile for building thrift
 add bbd9208  Address comments. Make default no profile for building thrift

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (5d8354f)
\
 N -- N -- N   refs/heads/cleanup-autogen-files (bbd9208)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 README.md| 2 +-
 docs/getting_started.rst | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch cleanup-autogen-files updated (bbd9208 -> 3522e64)

2019-05-28 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch cleanup-autogen-files
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


omit bbd9208  Address comments. Make default no profile for building thrift
 add 3522e64  Address comments. Make default no profile for building thrift

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (bbd9208)
\
 N -- N -- N   refs/heads/cleanup-autogen-files (3522e64)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 pinot-common/pom.xml | 19 ---
 1 file changed, 19 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch cleanup-autogen-files updated (7b18740 -> 5d8354f)

2019-05-28 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch cleanup-autogen-files
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


from 7b18740  Fixing compatibility issue with thrift v0.12.0
 add 5d8354f  Address comments. Make default no profile for building thrift

No new revisions were added by this update.

Summary of changes:
 .travis.yml  | 6 +++---
 .travis_install.sh   | 4 ++--
 .travis_test.sh  | 4 ++--
 pinot-common/pom.xml | 4 
 4 files changed, 7 insertions(+), 11 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch cleanup-autogen-files updated (3d5f4f6 -> 8a9deed)

2019-05-27 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch cleanup-autogen-files
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard 3d5f4f6  Adding trift support in travis
 add 8a9deed  Adding trift support in travis

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (3d5f4f6)
\
 N -- N -- N   refs/heads/cleanup-autogen-files (8a9deed)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 .travis_install.sh   | 4 ++--
 .travis_test.sh  | 4 ++--
 README.md| 3 ++-
 docs/getting_started.rst | 4 ++--
 pinot-common/pom.xml | 2 +-
 5 files changed, 9 insertions(+), 8 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch cleanup-autogen-files updated (8b9cdaf -> 3ab069b)

2019-05-29 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch cleanup-autogen-files
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


from 8b9cdaf  Update doc for thrift executable
 add 3ab069b  update travis

No new revisions were added by this update.

Summary of changes:
 .travis.yml | 1 +
 1 file changed, 1 insertion(+)


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch cleanup-autogen-files updated (3522e64 -> e0c2aa7)

2019-05-29 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch cleanup-autogen-files
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


from 3522e64  Address comments. Make default no profile for building thrift
 add e0c2aa7  make thrift executable configurable

No new revisions were added by this update.

Summary of changes:
 pinot-common/pom.xml | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch cleanup-autogen-files updated (c50ff42 -> 5bd5f8e)

2019-05-29 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch cleanup-autogen-files
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


omit c50ff42  update travis
 add 5bd5f8e  update travis

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (c50ff42)
\
 N -- N -- N   refs/heads/cleanup-autogen-files (5bd5f8e)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 .travis.yml | 1 +
 1 file changed, 1 insertion(+)


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch cleanup-autogen-files updated (e0c2aa7 -> 8b9cdaf)

2019-05-29 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch cleanup-autogen-files
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


from e0c2aa7  make thrift executable configurable
 add 8b9cdaf  Update doc for thrift executable

No new revisions were added by this update.

Summary of changes:
 docs/getting_started.rst | 2 ++
 1 file changed, 2 insertions(+)


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch cleanup-autogen-files updated (3ab069b -> c50ff42)

2019-05-29 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch cleanup-autogen-files
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


omit 3ab069b  update travis
 add c50ff42  update travis

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (3ab069b)
\
 N -- N -- N   refs/heads/cleanup-autogen-files (c50ff42)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 .travis.yml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch cleanup-autogen-files updated (5bd5f8e -> 18f21d9)

2019-05-29 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch cleanup-autogen-files
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard 5bd5f8e  update travis
 add 18f21d9  update travis

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (5bd5f8e)
\
 N -- N -- N   refs/heads/cleanup-autogen-files (18f21d9)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 .travis.yml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch fx19880617-patch-1 updated (11c3d5d -> 57b1e88)

2019-06-11 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch fx19880617-patch-1
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


from 11c3d5d  Adding example of querying BYTES column
 add 57b1e88  Update pql_examples.rst

No new revisions were added by this update.

Summary of changes:
 docs/pql_examples.rst | 1 +
 1 file changed, 1 insertion(+)


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] 01/01: Adding example of querying BYTES column

2019-06-11 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch fx19880617-patch-1
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git

commit 11c3d5de78909007db46521c8965eb71e4e7f68c
Author: Xiang Fu 
AuthorDate: Tue Jun 11 13:26:37 2019 -0700

Adding example of querying BYTES column
---
 docs/pql_examples.rst | 14 +++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/docs/pql_examples.rst b/docs/pql_examples.rst
index 17f0359..f0599ef 100644
--- a/docs/pql_examples.rst
+++ b/docs/pql_examples.rst
@@ -118,8 +118,17 @@ The examples below demonstrate the use of UDFs
   SELECT count(*) FROM myTable
 GROUP BY timeConvert(timeColumnName, 'SECONDS', 'DAYS')
 
-  SELECT count(*) FROM myTable
-GROUP BY div(tim
+Examples with BYTES column
+--
+
+Pinot supports queries on BYTES column using HEX string. The query response 
also uses hex string to represent bytes value.
+
+E.g. the query below fetches all the rows for a given UID.
+
+.. code-block:: sql
+
+  SELECT * FROM myTable
+WHERE UID = "c8b3bce0b378fc5ce8067fc271a34892"
 
 PQL Specification
 -
@@ -236,7 +245,6 @@ Supported transform functions
The ``VALUEIN`` transform function is especially useful when the same 
multi-valued column is both filtering column and grouping column.
*e.g.* ``VALUEIN(mvColumn, 3, 5, 15)``
 
-
 Differences with SQL
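
Because BYTES predicates are written as hex string literals, a client
typically hex-encodes the raw bytes before assembling the query. A small
sketch follows; the table and column names are the ones from the example
above, and the encoding helper is generic Java with no Pinot client API
assumed:

  public final class HexQuerySketch {
    // Hex-encode raw bytes so they can be embedded in a PQL string literal.
    static String toHex(byte[] bytes) {
      StringBuilder sb = new StringBuilder(bytes.length * 2);
      for (byte b : bytes) {
        sb.append(String.format("%02x", b));
      }
      return sb.toString();
    }

    public static void main(String[] args) {
      byte[] uid = {(byte) 0xc8, (byte) 0xb3, (byte) 0xbc, (byte) 0xe0};
      // Produces: SELECT * FROM myTable WHERE UID = "c8b3bce0"
      String pql = "SELECT * FROM myTable WHERE UID = \"" + toHex(uid) + "\"";
      System.out.println(pql);
    }
  }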
 
 


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch fx19880617-patch-1 created (now 11c3d5d)

2019-06-11 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch fx19880617-patch-1
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


  at 11c3d5d  Adding example of querying BYTES column

This branch includes the following new commits:

 new 11c3d5d  Adding example of querying BYTES column

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.



-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] 01/02: WIP: adding kafka 2 stream provider

2019-07-03 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch kafka_2.0
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git

commit 3f682d0c454263a8ab5361d751f9f5c9b30e715a
Author: Ananth Packkildurai 
AuthorDate: Wed Jun 12 18:31:22 2019 -0700

WIP: adding kafka 2 stream provider
---
 .../pinot-connector-kafka-2.0/README.md|  24 
 pinot-connectors/pinot-connector-kafka-2.0/pom.xml |  67 ++
 .../impl/kafka2/KafkaConnectionHandler.java|  61 +
 .../realtime/impl/kafka2/KafkaConsumerFactory.java |  49 +++
 .../realtime/impl/kafka2/KafkaMessageBatch.java|  65 ++
 .../impl/kafka2/KafkaPartitionConsumer.java|  51 
 .../kafka2/KafkaPartitionLevelStreamConfig.java| 144 +
 .../impl/kafka2/KafkaStreamConfigProperties.java   |  65 ++
 .../impl/kafka2/KafkaStreamMetadataProvider.java   |  81 
 .../realtime/impl/kafka2/MessageAndOffset.java |  49 +++
 pinot-connectors/pom.xml   |   2 +
 11 files changed, 658 insertions(+)

diff --git a/pinot-connectors/pinot-connector-kafka-2.0/README.md 
b/pinot-connectors/pinot-connector-kafka-2.0/README.md
new file mode 100644
index 000..cc1950c
--- /dev/null
+++ b/pinot-connectors/pinot-connector-kafka-2.0/README.md
@@ -0,0 +1,24 @@
+
+# Pinot connector for kafka 2.0.x
+
+This is an implementation of the kafka stream for kafka versions 2.0.x The 
version used in this implementation is kafka 2.0.0. This module compiles with 
version 2.0.0 as well, however we have not tested if it runs with the older 
versions.
+A stream plugin for another version of kafka, or another stream, can be added 
in a similar fashion. Refer to documentation on (Pluggable 
Streams)[https://pinot.readthedocs.io/en/latest/pluggable_streams.html] for the 
specfic interfaces to implement.
diff --git a/pinot-connectors/pinot-connector-kafka-2.0/pom.xml 
b/pinot-connectors/pinot-connector-kafka-2.0/pom.xml
new file mode 100644
index 000..f351219
--- /dev/null
+++ b/pinot-connectors/pinot-connector-kafka-2.0/pom.xml
@@ -0,0 +1,67 @@
+
+
+http://maven.apache.org/POM/4.0.0;
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance;
+ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xsd/maven-4.0.0.xsd;>
+
+pinot-connectors
+org.apache.pinot
+0.2.0-SNAPSHOT
+..
+
+4.0.0
+
+pinot-connector-kafka-2.0
+
+
+${basedir}/../..
+2.0.0
+
+
+
+
+
+
+org.apache.kafka
+kafka-clients
+${kafka.version}
+
+
+org.slf4j
+slf4j-log4j12
+
+
+net.sf.jopt-simple
+jopt-simple
+
+
+org.scala-lang
+scala-library
+
+
+org.apache.zookeeper
+zookeeper
+
+
+
+
+
\ No newline at end of file
diff --git 
a/pinot-connectors/pinot-connector-kafka-2.0/src/main/java/org/apache/pinot/core/realtime/impl/kafka2/KafkaConnectionHandler.java
 
b/pinot-connectors/pinot-connector-kafka-2.0/src/main/java/org/apache/pinot/core/realtime/impl/kafka2/KafkaConnectionHandler.java
new file mode 100644
index 000..802062f
--- /dev/null
+++ 
b/pinot-connectors/pinot-connector-kafka-2.0/src/main/java/org/apache/pinot/core/realtime/impl/kafka2/KafkaConnectionHandler.java
@@ -0,0 +1,61 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.core.realtime.impl.kafka2;
+
+import org.apache.kafka.clients.consumer.Consumer;
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.kafka.clients.consumer.KafkaConsumer;
+import org.apache.kafka.common.TopicPartition;
+import org.apache.kafka.common.serialization.BytesDeserializer;
+import org.apache.kafka.common.serialization.StringDeserializer;
+import org.apache.pinot.core.realti
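
The message is truncated above, but the imports already show the shape of the
connection handler: a kafka-clients 2.x consumer pinned to one topic
partition. A minimal sketch under that assumption follows; the ConsumerConfig
keys, deserializers, and assign() call are standard kafka-clients API, while
the class and method names here are illustrative, not Pinot's:

  import java.util.Collections;
  import java.util.Properties;
  import org.apache.kafka.clients.consumer.Consumer;
  import org.apache.kafka.clients.consumer.ConsumerConfig;
  import org.apache.kafka.clients.consumer.KafkaConsumer;
  import org.apache.kafka.common.TopicPartition;
  import org.apache.kafka.common.serialization.BytesDeserializer;
  import org.apache.kafka.common.serialization.StringDeserializer;
  import org.apache.kafka.common.utils.Bytes;

  public final class PartitionConsumerSketch {
    public static Consumer<String, Bytes> createConsumer(String bootstrapServers,
        String clientId, String topic, int partition) {
      Properties props = new Properties();
      props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
      props.put(ConsumerConfig.CLIENT_ID_CONFIG, clientId);
      props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
      props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, BytesDeserializer.class.getName());
      Consumer<String, Bytes> consumer = new KafkaConsumer<>(props);
      // Partition-level consumption: assign() a specific partition rather than
      // subscribe() to the topic, so Kafka group rebalancing stays out of the way.
      consumer.assign(Collections.singletonList(new TopicPartition(topic, partition)));
      return consumer;
    }
  }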

[incubator-pinot] branch kafka_2.0 created (now d9e0316)

2019-07-03 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch kafka_2.0
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


  at d9e0316  Adding support for Kafka 2.0

This branch includes the following new commits:

 new 3f682d0  WIP: adding kafka 2 stream provider
 new d9e0316  Adding support for Kafka 2.0

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.



-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] 02/02: Adding support for Kafka 2.0

2019-07-03 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch kafka_2.0
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git

commit d9e031618c4c7fa28e64d62858aa7e4a36d6f279
Author: Xiang Fu 
AuthorDate: Mon Jul 1 16:17:25 2019 -0700

Adding support for Kafka 2.0
---
 pinot-connectors/pinot-connector-kafka-0.9/pom.xml |   5 +
 pinot-connectors/pinot-connector-kafka-2.0/pom.xml | 112 +---
 ...umerFactory.java => Kafka2ConsumerFactory.java} |  36 +--
 .../impl/kafka2/Kafka2ConsumerManager.java | 191 ++
 .../impl/kafka2/Kafka2HighLevelStreamConfig.java   | 135 ++
 .../realtime/impl/kafka2/Kafka2MessageBatch.java   |  61 +
 .../Kafka2PartitionLevelConnectionHandler.java |  67 +
 ...Kafka2PartitionLevelPartitionLevelConsumer.java |  65 +
 .../kafka2/Kafka2PartitionLevelStreamConfig.java   | 146 +++
 ...Kafka2PartitionLevelStreamMetadataProvider.java |  67 +
 ...ties.java => Kafka2StreamConfigProperties.java} |  32 +--
 .../impl/kafka2/Kafka2StreamLevelConsumer.java | 166 
 .../impl/kafka2/KafkaAvroMessageDecoder.java   | 290 +
 .../impl/kafka2/KafkaConnectionHandler.java|  61 -
 .../impl/kafka2/KafkaJSONMessageDecoder.java   |  63 +
 .../realtime/impl/kafka2/KafkaMessageBatch.java|  65 -
 .../impl/kafka2/KafkaPartitionConsumer.java|  51 
 .../kafka2/KafkaPartitionLevelStreamConfig.java| 144 --
 .../impl/kafka2/KafkaStreamMetadataProvider.java   |  81 --
 .../realtime/impl/kafka2/MessageAndOffset.java |  42 +--
 .../kafka2/KafkaPartitionLevelConsumerTest.java| 232 +
 .../KafkaPartitionLevelStreamConfigTest.java   | 161 
 .../impl/kafka2/utils/EmbeddedZooKeeper.java   |  60 +
 .../impl/kafka2/utils/MiniKafkaCluster.java| 175 +
 pinot-connectors/pom.xml   |  12 +
 25 files changed, 2024 insertions(+), 496 deletions(-)

diff --git a/pinot-connectors/pinot-connector-kafka-0.9/pom.xml 
b/pinot-connectors/pinot-connector-kafka-0.9/pom.xml
index ae0317e..852c29c 100644
--- a/pinot-connectors/pinot-connector-kafka-0.9/pom.xml
+++ b/pinot-connectors/pinot-connector-kafka-0.9/pom.xml
@@ -63,5 +63,10 @@
 
   
 
+
+  org.scala-lang
+  scala-library
+  2.10.5
+
   
 
diff --git a/pinot-connectors/pinot-connector-kafka-2.0/pom.xml 
b/pinot-connectors/pinot-connector-kafka-2.0/pom.xml
index f351219..2a9c155 100644
--- a/pinot-connectors/pinot-connector-kafka-2.0/pom.xml
+++ b/pinot-connectors/pinot-connector-kafka-2.0/pom.xml
@@ -22,46 +22,82 @@
 http://maven.apache.org/POM/4.0.0;
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance;
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xsd/maven-4.0.0.xsd;>
-
-pinot-connectors
-org.apache.pinot
-0.2.0-SNAPSHOT
-..
-
-4.0.0
+  
+pinot-connectors
+org.apache.pinot
+0.2.0-SNAPSHOT
+..
+  
+  4.0.0
+  pinot-connector-kafka-2.0
+  Pinot Connector Kafka 2.0
+  https://pinot.apache.org/
+  
+${basedir}/../..
+2.0.0
+  
 
-pinot-connector-kafka-2.0
+  
 
-
-${basedir}/../..
-2.0.0
-
+
+
+  org.apache.kafka
+  kafka-clients
+  ${kafka.version}
+  
+
+  org.slf4j
+  slf4j-log4j12
+
+
+  net.sf.jopt-simple
+  jopt-simple
+
+
+  org.scala-lang
+  scala-library
+
+
+  org.apache.zookeeper
+  zookeeper
+
+  
+
 
-
+
+  org.apache.kafka
+  kafka_2.12
+  ${kafka.version}
+  
+
+  org.slf4j
+  slf4j-log4j12
+
+
+  net.sf.jopt-simple
+  jopt-simple
+
+
+  org.scala-lang
+  scala-library
+
+
+  org.scala-lang
+  scala-reflect
+
+
+  org.apache.zookeeper
+  zookeeper
+
+  
+  test
+
 
-
-
-org.apache.kafka
-kafka-clients
-${kafka.version}
-
-
-org.slf4j
-slf4j-log4j12
-
-
-net.sf.jopt-simple
-jopt-simple
-
-
-org.scala-lang
-scala-library
-
-
-org.apache.zookeeper
-zookeeper
-
-
-
-
+
+  org.scala-lang
+  scala-library
+  2.12.8
+  test
+
+  
 
\ No newline at end of file
diff --git 
a/pinot-connectors/pinot-connector-kafka-2.0/src/main/java/org/apache/

[incubator-pinot] branch kafka_2.0 updated (c01cc2e -> a163b4d)

2019-07-14 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch kafka_2.0
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard c01cc2e  fixing the bytes type conversion and adding consumer test
 discard d9e0316  Adding support for Kafka 2.0
 discard 3f682d0  WIP: adding kafka 2 stream provider
 add 732a7b9  Add more key/value pairs into LOG2M_TO_SIZE_IN_BYTES in 
HllSizeUtils (#4398)
 add 49d8fa7  #4317 Feature/variable length bytes offline dictionary for 
indexing bytes and string dicts. (#4321)
 add 02f6181  CompletionConfig for realtime tables (#4367)
 add ae1ecef  [TE] add a thread pool to run preview tasks (#4405)
 add de62d8e  [TE] Make email template and subject pluggable under email 
alert scheme (#4409)
 add 7df1a43  [TE] Add Merger after metric grouper; other minor clean up 
(#4410)
 add e71ac1b  [TE] Do not authorize when auth is disabled (#4414)
 add 0b8af63  Fix codecov link from old url to new (#4412)
 add 07e2c72  Move tests for API resources to a different package (#4415)
 add 52f69d1  [TE] Added support for Vertica as a data source (#4404)
 add a7c419a  Adding Calcite SQL Parser and make another entry point to 
query Pinot (#4387)
 add e041ec5  [TE] frontend - harleyjj/detection-health - UI for model 
performance (#4413)
 add 6bec451  [TE] Inject the port from config into Commons mail SSL Smtp 
port (#4420)
 add dac9ae2  prompt on fail schema (#4395)
 add b6fe4e6  [TE] detection health ui adjustments (#4421)
 add bca6756  make kafka version number controlled by config number (#4396)
 add 0993974  [TE] Fix exception handling - Propagate and display the error 
message/exception on frontind (#4419)
 add aadcd36  In ClusterChangeMediator, stop enqueue/process changes if 
already stopped (#4422)
 add 52f9b08  Add pinot community inviter (#4424)
 add 2f691fc  [TE] detection health - coverage fix (#4428)
 add 079b86d  Config for overriding initial rows threshold in segment size 
auto tuning (#4376)
 add 50aaaf9  [TE] added ThirdEye configuration documents (#4429)
 add f671072  Add URIUtils class to handle URI/URL encoding/decoding (#4426)
 add 626c43a  Misc fix for controller tests (#4431)
 add 220df69  WIP: adding kafka 2 stream provider
 add cad9466  Adding support for Kafka 2.0
 add ee87138  fixing the bytes type conversion and adding consumer test
 add a163b4d  address comments

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (c01cc2e)
\
 N -- N -- N   refs/heads/kafka_2.0 (a163b4d)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 README.md  |   4 +-
 .../java/org/apache/pinot/client/Connection.java   |  44 +-
 .../client/JsonAsyncHttpPinotClientTransport.java  |  30 +-
 .../apache/pinot/client/PinotClientTransport.java  |   6 +
 .../{PinotClientTransport.java => Request.java}|  34 +-
 .../apache/pinot/client/PreparedStatementTest.java |  40 +-
 .../apache/pinot/client/ResultSetGroupTest.java|  38 +-
 .../broker/broker/helix/ClusterChangeMediator.java |  12 +-
 .../requesthandler/BaseBrokerRequestHandler.java   |  28 +-
 ...enNestedPredicatesFilterQueryTreeOptimizer.java |   3 +-
 .../requesthandler/PinotQueryParserFactory.java|  49 +++
 .../broker/requesthandler/PinotQueryRequest.java   |  31 +-
 .../request/PqlAndCalciteSqlCompatibilityTest.java | 123 ++
 pinot-common/pom.xml   |   4 +
 .../pinot/common/config/CompletionConfig.java  |  63 +++
 .../apache/pinot/common/config/IndexingConfig.java |  20 +-
 .../SegmentsValidationAndRetentionConfig.java  |  30 +-
 .../apache/pinot/common/utils/CommonConstants.java |   9 +
 .../common/utils/FileUploadDownloadClient.java |  16 +-
 .../org/apache/pinot/common/utils/URIUtils.java|  88 
 .../pinot/common/utils/request/RequestUtils.java   |  55 ++-
 .../org/apache/pinot/filesystem/LocalPinotFS.java  |  91 ++--
 .../pinot/{pql => }/parsers/AbstractCompiler.java  |   7 +-
 .../utils/BrokerRequestComparisonUtils.java| 235 +++
 .../parsers/PinotQuery2BrokerRequestConverter.java |   5 +-
 .../org/apache/pinot/pql/parsers/Pql2Compiler.java |  63 +--
 .../parsers/pql2/ast/BetweenPredicateAstNode.java  |   2 +-
 .../pql2/ast/ComparisonPredica

[incubator-pinot] branch kafka_2.0 updated (a163b4d -> 27ea0c3)

2019-07-14 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch kafka_2.0
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


omit a163b4d  address comments
 add 27ea0c3  address comments

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (a163b4d)
\
 N -- N -- N   refs/heads/kafka_2.0 (27ea0c3)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 pinot-connectors/pinot-connector-kafka-0.9/pom.xml | 1 +
 pinot-connectors/pinot-connector-kafka-2.0/pom.xml | 9 +
 pom.xml| 1 -
 3 files changed, 6 insertions(+), 5 deletions(-)
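
These force-push notifications all describe the same operation: the branch is
rewound to the common base B and rebuilt with rewritten commits before being
pushed with --force. The effect can be reproduced locally in a throwaway
repository; below is a minimal, self-contained Python sketch (the identity and
commit messages are placeholders, not taken from the commits above):

  import os
  import subprocess
  import tempfile

  def git(*args):
      # Run a git command in the current directory and return its stdout.
      return subprocess.run(["git", *args], check=True,
                            capture_output=True, text=True).stdout

  os.chdir(tempfile.mkdtemp())
  git("init")
  git("config", "user.email", "dev@example.com")  # placeholder identity
  git("config", "user.name", "dev")

  def commit(message):
      git("commit", "--allow-empty", "-m", message)
      return git("rev-parse", "HEAD").strip()

  base = commit("B: common base")
  for i in range(3):
      commit(f"O{i}: old revision")
  print("before rewrite:", git("log", "--oneline"), sep="\n")

  git("reset", "--hard", base)       # rewind the branch to the common base B
  for i in range(3):
      commit(f"N{i}: new revision")  # the N revisions from the diagram above
  print("after rewrite:", git("log", "--oneline"), sep="\n")
  # Publishing this rewritten history is what requires `git push --force`,
  # which in turn generates notifications like the ones above.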





[incubator-pinot] branch kafka_2.0 updated (27ea0c3 -> 6430f26)

2019-07-14 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch kafka_2.0
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard 27ea0c3  address comments
 add 6430f26  address comments

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (27ea0c3)
\
 N -- N -- N   refs/heads/kafka_2.0 (6430f26)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 .../pinot/core/realtime/impl/kafka2/KafkaMessageBatch.java   |  1 +
 .../impl/kafka2/KafkaPartitionLevelConnectionHandler.java|  2 +-
 .../impl/kafka2/KafkaPartitionLevelConsumerTest.java | 12 ++--
 .../pinot/core/realtime/impl/kafka/MessageAndOffset.java |  2 +-
 4 files changed, 9 insertions(+), 8 deletions(-)





[incubator-pinot] branch kafka_2.0 updated (6430f26 -> bc96302)

2019-07-14 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch kafka_2.0
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


omit 6430f26  address comments
 add bc96302  address comments

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (6430f26)
\
 N -- N -- N   refs/heads/kafka_2.0 (bc96302)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 .../impl/kafka/KafkaPartitionLevelConsumerTest.java|  2 +-
 .../realtime/impl/kafka2/KafkaConsumerFactory.java |  4 ++--
 ...aProvider.java => KafkaStreamMetadataProvider.java} |  6 +++---
 .../impl/kafka2/KafkaPartitionLevelConsumerTest.java   | 18 +-
 4 files changed, 15 insertions(+), 15 deletions(-)
 rename 
pinot-connectors/pinot-connector-kafka-2.0/src/main/java/org/apache/pinot/core/realtime/impl/kafka2/{KafkaPartitionLevelStreamMetadataProvider.java
 => KafkaStreamMetadataProvider.java} (87%)





[incubator-pinot] branch kafka_2.0 updated (d9e0316 -> c01cc2e)

2019-07-08 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch kafka_2.0
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


from d9e0316  Adding support for Kafka 2.0
 add c01cc2e  fixing the bytes type conversion and adding consumer test

No new revisions were added by this update.

Summary of changes:
 .../realtime/impl/kafka2/Kafka2MessageBatch.java   |  7 ++-
 .../Kafka2PartitionLevelConnectionHandler.java |  3 +-
 ...Kafka2PartitionLevelPartitionLevelConsumer.java |  9 +--
 .../kafka2/KafkaPartitionLevelConsumerTest.java| 66 +-
 4 files changed, 74 insertions(+), 11 deletions(-)





[incubator-pinot] branch kafka_2.0 updated (bc96302 -> 72d7f59)

2019-07-16 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch kafka_2.0
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard bc96302  address comments
 add 72d7f59  address comments

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (bc96302)
\
 N -- N -- N   refs/heads/kafka_2.0 (72d7f59)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 pinot-connectors/pinot-connector-kafka-0.9/pom.xml| 2 +-
 pinot-connectors/pinot-connector-kafka-2.0/pom.xml| 2 +-
 .../README.md | 2 +-
 .../pom.xml   | 4 ++--
 .../pinot/core/realtime/impl/kafka/KafkaAvroMessageDecoder.java   | 0
 .../pinot/core/realtime/impl/kafka/KafkaJSONMessageDecoder.java   | 0
 .../org/apache/pinot/core/realtime/impl/kafka/MessageAndOffset.java   | 0
 pinot-connectors/pom.xml  | 2 +-
 8 files changed, 6 insertions(+), 6 deletions(-)
 rename pinot-connectors/{pinot-connector-kafka-common => 
pinot-connector-kafka-base}/README.md (88%)
 rename pinot-connectors/{pinot-connector-kafka-common => 
pinot-connector-kafka-base}/pom.xml (93%)
 rename pinot-connectors/{pinot-connector-kafka-common => 
pinot-connector-kafka-base}/src/main/java/org/apache/pinot/core/realtime/impl/kafka/KafkaAvroMessageDecoder.java
 (100%)
 rename pinot-connectors/{pinot-connector-kafka-common => 
pinot-connector-kafka-base}/src/main/java/org/apache/pinot/core/realtime/impl/kafka/KafkaJSONMessageDecoder.java
 (100%)
 rename pinot-connectors/{pinot-connector-kafka-common => 
pinot-connector-kafka-base}/src/main/java/org/apache/pinot/core/realtime/impl/kafka/MessageAndOffset.java
 (100%)





[incubator-pinot] 01/01: Adding kafka 2.0 doc for using simple consumer

2019-08-03 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch kafka2.0_doc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git

commit 48c27aa6990aee37ab290cfabd0a4ebdd6f9d33a
Author: Xiang Fu 
AuthorDate: Sat Aug 3 16:22:18 2019 +0800

Adding kafka 2.0 doc for using simple consumer
---
 docs/pluggable_streams.rst | 39 +++
 1 file changed, 39 insertions(+)

diff --git a/docs/pluggable_streams.rst b/docs/pluggable_streams.rst
index ff27cde..f61ed86 100644
--- a/docs/pluggable_streams.rst
+++ b/docs/pluggable_streams.rst
@@ -177,6 +177,45 @@ Below is a sample `streamConfigs` used to create a 
realtime table with Kafka Str
 "stream.kafka.hlc.bootstrap.server": "localhost:19092"
   }
 
+Below is a sample table config used to create a realtime table with Kafka 
Partition(Low) level consumer:
+
+.. code-block:: none
+
+  {
+"tableName": "meetupRsvp",
+"tableType": "REALTIME",
+"segmentsConfig": {
+  "timeColumnName": "mtime",
+  "timeType": "MILLISECONDS",
+  "segmentPushType": "APPEND",
+  "segmentAssignmentStrategy": "BalanceNumSegmentAssignmentStrategy",
+  "schemaName": "meetupRsvp",
+  "replication": "1",
+  "replicasPerPartition": "1"
+},
+"tenants": {},
+"tableIndexConfig": {
+  "loadMode": "MMAP",
+  "streamConfigs": {
+"streamType": "kafka",
+"stream.kafka.consumer.type": "simple",
+"stream.kafka.topic.name": "meetupRSVPEvents",
+"stream.kafka.decoder.class.name": 
"org.apache.pinot.core.realtime.impl.kafka.KafkaJSONMessageDecoder",
+"stream.kafka.consumer.factory.class.name": 
"org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory",
+"stream.kafka.zk.broker.url": "localhost:2191/kafka",
+"stream.kafka.broker.list": "localhost:19092"
+  }
+},
+"metadata": {
+  "customConfigs": {}
+}
+  }
+
+Please note that,
+1. `replicasPerPartition` under `segmentsConfig` config is required to specify 
table replication.
+2. `stream.kafka.consumer.type` should be specified as `simple` to use 
partition level consumer.
+3. `stream.kafka.zk.broker.url` and `stream.kafka.broker.list` are required 
under `tableIndexConfig.streamConfigs` to provide kafka related information.
+
 Upgrade from Kafka 0.9 connector to Kafka 2.x connector
 ---
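
To try the table config shown in this change end to end, it can be submitted
to a running Pinot controller. Below is a minimal Python sketch; the file name,
the controller address (localhost:9000, a typical local default), and the use
of the controller's POST /tables endpoint are assumptions of this sketch
rather than anything stated in the commit above:

  import json
  import urllib.request

  # Load the sample realtime table config shown in the diff above
  # (the file name is an assumption for this sketch).
  with open("meetupRsvp_realtime_table.json") as f:
      table_config = json.load(f)

  # Submit it to the controller; localhost:9000 is assumed here.
  req = urllib.request.Request(
      "http://localhost:9000/tables",
      data=json.dumps(table_config).encode("utf-8"),
      headers={"Content-Type": "application/json"},
      method="POST",
  )
  with urllib.request.urlopen(req) as resp:
      print(resp.status, resp.read().decode("utf-8"))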
 





[incubator-pinot] 01/01: Adding kafka 2.0 doc for using simple consumer

2019-08-03 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch kafka2.0_doc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git

commit 9dabfd9c73ae470f757a0210f2a84c0f71a90468
Author: Xiang Fu 
AuthorDate: Sat Aug 3 16:22:18 2019 +0800

Adding kafka 2.0 doc for using simple consumer
---
 docs/pluggable_streams.rst | 40 
 1 file changed, 40 insertions(+)

diff --git a/docs/pluggable_streams.rst b/docs/pluggable_streams.rst
index ff27cde..99b619d 100644
--- a/docs/pluggable_streams.rst
+++ b/docs/pluggable_streams.rst
@@ -177,6 +177,46 @@ Below is a sample `streamConfigs` used to create a 
realtime table with Kafka Str
 "stream.kafka.hlc.bootstrap.server": "localhost:19092"
   }
 
+Below is a sample table config used to create a realtime table with Kafka 
Partition(Low) level consumer:
+
+.. code-block:: none
+
+  {
+"tableName": "meetupRsvp",
+"tableType": "REALTIME",
+"segmentsConfig": {
+  "timeColumnName": "mtime",
+  "timeType": "MILLISECONDS",
+  "segmentPushType": "APPEND",
+  "segmentAssignmentStrategy": "BalanceNumSegmentAssignmentStrategy",
+  "schemaName": "meetupRsvp",
+  "replication": "1",
+  "replicasPerPartition": "1"
+},
+"tenants": {},
+"tableIndexConfig": {
+  "loadMode": "MMAP",
+  "streamConfigs": {
+"streamType": "kafka",
+"stream.kafka.consumer.type": "simple",
+"stream.kafka.topic.name": "meetupRSVPEvents",
+"stream.kafka.decoder.class.name": 
"org.apache.pinot.core.realtime.impl.kafka.KafkaJSONMessageDecoder",
+"stream.kafka.consumer.factory.class.name": 
"org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory",
+"stream.kafka.zk.broker.url": "localhost:2191/kafka",
+"stream.kafka.broker.list": "localhost:19092"
+  }
+},
+"metadata": {
+  "customConfigs": {}
+}
+  }
+
+Please note that,
+
+1. `replicasPerPartition` under `segmentsConfig` config is required to specify 
table replication.
+2. `stream.kafka.consumer.type` should be specified as `simple` to use 
partition level consumer.
+3. `stream.kafka.zk.broker.url` and `stream.kafka.broker.list` are required 
under `tableIndexConfig.streamConfigs` to provide kafka related information.
+
 Upgrade from Kafka 0.9 connector to Kafka 2.x connector
 ---
 





[incubator-pinot] branch kafka2.0_doc updated (811649a -> 2195733)

2019-08-03 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch kafka2.0_doc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard 811649a  Adding kafka 2.0 doc for using simple consumer
 new 2195733  Adding kafka 2.0 doc for using simple consumer

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (811649a)
\
 N -- N -- N   refs/heads/kafka2.0_doc (2195733)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 docs/pluggable_streams.rst | 1 +
 1 file changed, 1 insertion(+)





[incubator-pinot] 01/01: Adding kafka 2.0 doc for using simple consumer

2019-08-03 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch kafka2.0_doc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git

commit 2195733915a03da5ba6543310deeb889333ad640
Author: Xiang Fu 
AuthorDate: Sat Aug 3 16:22:18 2019 +0800

Adding kafka 2.0 doc for using simple consumer
---
 docs/pluggable_streams.rst | 40 
 1 file changed, 40 insertions(+)

diff --git a/docs/pluggable_streams.rst b/docs/pluggable_streams.rst
index ff27cde..b15dbbd 100644
--- a/docs/pluggable_streams.rst
+++ b/docs/pluggable_streams.rst
@@ -177,6 +177,46 @@ Below is a sample `streamConfigs` used to create a 
realtime table with Kafka Str
 "stream.kafka.hlc.bootstrap.server": "localhost:19092"
   }
 
+Below is a sample table config used to create a realtime table with Kafka 
Partition(Low) level consumer:
+
+.. code-block:: none
+
+  {
+"tableName": "meetupRsvp",
+"tableType": "REALTIME",
+"segmentsConfig": {
+  "timeColumnName": "mtime",
+  "timeType": "MILLISECONDS",
+  "segmentPushType": "APPEND",
+  "segmentAssignmentStrategy": "BalanceNumSegmentAssignmentStrategy",
+  "schemaName": "meetupRsvp",
+  "replication": "1",
+  "replicasPerPartition": "1"
+},
+"tenants": {},
+"tableIndexConfig": {
+  "loadMode": "MMAP",
+  "streamConfigs": {
+"streamType": "kafka",
+"stream.kafka.consumer.type": "simple",
+"stream.kafka.topic.name": "meetupRSVPEvents",
+"stream.kafka.decoder.class.name": 
"org.apache.pinot.core.realtime.impl.kafka.KafkaJSONMessageDecoder",
+"stream.kafka.consumer.factory.class.name": 
"org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory",
+"stream.kafka.zk.broker.url": "localhost:2191/kafka",
+"stream.kafka.broker.list": "localhost:19092"
+  }
+},
+"metadata": {
+  "customConfigs": {}
+}
+  }
+
+Please note that,
+
+* `replicasPerPartition` under `segmentsConfig` config is required to specify 
table replication.
+* `stream.kafka.consumer.type` should be specified as `simple` to use 
partition level consumer.
+* `stream.kafka.zk.broker.url` and `stream.kafka.broker.list` are required 
under `tableIndexConfig.streamConfigs` to provide kafka related information.
+
 Upgrade from Kafka 0.9 connector to Kafka 2.x connector
 ---
 


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch kafka2.0_doc created (now 15389e4)

2019-08-03 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch kafka2.0_doc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


  at 15389e4  Adding kafka 2.0 doc for using simple consumer

This branch includes the following new commits:

 new 15389e4  Adding kafka 2.0 doc for using simple consumer

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.






[incubator-pinot] 01/01: Adding kafka 2.0 doc for using simple consumer

2019-08-03 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch kafka2.0_doc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git

commit 15389e41371c3f6670e23bab43ad714aaec5936b
Author: Xiang Fu 
AuthorDate: Sat Aug 3 16:22:18 2019 +0800

Adding kafka 2.0 doc for using simple consumer
---
 docs/pluggable_streams.rst | 38 ++
 1 file changed, 38 insertions(+)

diff --git a/docs/pluggable_streams.rst b/docs/pluggable_streams.rst
index ff27cde..fcb5cba 100644
--- a/docs/pluggable_streams.rst
+++ b/docs/pluggable_streams.rst
@@ -177,6 +177,44 @@ Below is a sample `streamConfigs` used to create a 
realtime table with Kafka Str
 "stream.kafka.hlc.bootstrap.server": "localhost:19092"
   }
 
+Below is a sample table config used to create a realtime table with Kafka 
Partition(Low) level consumer:
+
+.. code-block:: none
+{
+  "tableName": "meetupRsvp",
+  "tableType": "REALTIME",
+  "segmentsConfig": {
+"timeColumnName": "mtime",
+"timeType": "MILLISECONDS",
+"segmentPushType": "APPEND",
+"segmentAssignmentStrategy": "BalanceNumSegmentAssignmentStrategy",
+"schemaName": "meetupRsvp",
+"replication": "1",
+"replicasPerPartition": "1"
+  },
+  "tenants": {},
+  "tableIndexConfig": {
+"loadMode": "MMAP",
+"streamConfigs": {
+  "streamType": "kafka",
+  "stream.kafka.consumer.type": "simple",
+  "stream.kafka.topic.name": "meetupRSVPEvents",
+  "stream.kafka.decoder.class.name": 
"org.apache.pinot.core.realtime.impl.kafka.KafkaJSONMessageDecoder",
+  "stream.kafka.consumer.factory.class.name": 
"org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory",
+  "stream.kafka.zk.broker.url": "localhost:2191/kafka",
+  "stream.kafka.broker.list": "localhost:19092"
+}
+  },
+  "metadata": {
+"customConfigs": {}
+  }
+}
+
+Please note that,
+* `replicasPerPartition` under `segmentsConfig` config is required to specify 
table replication.
+* `stream.kafka.consumer.type` should be specified as `simple` to use 
partition level consumer.
+* `stream.kafka.zk.broker.url` and `stream.kafka.broker.list` are required 
under `tableIndexConfig.streamConfigs` to provide kafka related information.
+
 Upgrade from Kafka 0.9 connector to Kafka 2.x connector
 ---
 





[incubator-pinot] branch kafka2.0_doc updated (80cc045 -> 14d281a)

2019-08-03 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch kafka2.0_doc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard 80cc045  Adding kafka 2.0 doc for using simple consumer
 new 14d281a  Adding kafka 2.0 doc for using simple consumer

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (80cc045)
\
 N -- N -- N   refs/heads/kafka2.0_doc (14d281a)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 docs/pluggable_streams.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)





[incubator-pinot] 01/01: Adding kafka 2.0 doc for using simple consumer

2019-08-03 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch kafka2.0_doc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git

commit 14d281a3918b6d13d385b3511a2c79581194fb26
Author: Xiang Fu 
AuthorDate: Sat Aug 3 16:22:18 2019 +0800

Adding kafka 2.0 doc for using simple consumer
---
 docs/pluggable_streams.rst | 46 --
 1 file changed, 44 insertions(+), 2 deletions(-)

diff --git a/docs/pluggable_streams.rst b/docs/pluggable_streams.rst
index ff27cde..7117534 100644
--- a/docs/pluggable_streams.rst
+++ b/docs/pluggable_streams.rst
@@ -162,7 +162,9 @@ How to build and release Pinot package with Kafka 2.x 
connector
 How to use Kafka 2.x connector
 --
 
-Below is a sample `streamConfigs` used to create a realtime table with Kafka 
Stream(High) level consumer:
+Below is a sample ``streamConfigs`` used to create a realtime table with Kafka 
Stream(High) level consumer.
+
+Similar to using Kafka 0.9 hlc consumer, Kafka 2.x hlc consumer uses 
``org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory`` in config 
``stream.kafka.consumer.factory.class.name``.
 
 .. code-block:: none
 
@@ -177,10 +179,50 @@ Below is a sample `streamConfigs` used to create a 
realtime table with Kafka Str
 "stream.kafka.hlc.bootstrap.server": "localhost:19092"
   }
 
+Below is a sample table config used to create a realtime table with Kafka 
Partition(Low) level consumer:
+
+.. code-block:: none
+
+  {
+"tableName": "meetupRsvp",
+"tableType": "REALTIME",
+"segmentsConfig": {
+  "timeColumnName": "mtime",
+  "timeType": "MILLISECONDS",
+  "segmentPushType": "APPEND",
+  "segmentAssignmentStrategy": "BalanceNumSegmentAssignmentStrategy",
+  "schemaName": "meetupRsvp",
+  "replication": "1",
+  "replicasPerPartition": "1"
+},
+"tenants": {},
+"tableIndexConfig": {
+  "loadMode": "MMAP",
+  "streamConfigs": {
+"streamType": "kafka",
+"stream.kafka.consumer.type": "simple",
+"stream.kafka.topic.name": "meetupRSVPEvents",
+"stream.kafka.decoder.class.name": 
"org.apache.pinot.core.realtime.impl.kafka.KafkaJSONMessageDecoder",
+"stream.kafka.consumer.factory.class.name": 
"org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory",
+"stream.kafka.zk.broker.url": "localhost:2191/kafka",
+"stream.kafka.broker.list": "localhost:19092"
+  }
+},
+"metadata": {
+  "customConfigs": {}
+}
+  }
+
+Please note that,
+
+1. Config ``replicasPerPartition`` under ``segmentsConfig`` is required to 
specify table replication.
+2. Config ``stream.kafka.consumer.type`` should be specified as ``simple`` to 
use partition level consumer.
+3. Configs ``stream.kafka.zk.broker.url`` and ``stream.kafka.broker.list`` are 
required under ``tableIndexConfig.streamConfigs`` to provide kafka related 
information.
+
 Upgrade from Kafka 0.9 connector to Kafka 2.x connector
 ---
 
-* Update  table config:
+* Update table config for both high level and low level consumer:
 Update config: ``stream.kafka.consumer.factory.class.name`` from 
``org.apache.pinot.core.realtime.impl.kafka.KafkaConsumerFactory`` to 
``org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory``.
 
 * If using Stream(High) level consumer:
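
The upgrade step described in this change comes down to rewriting one key in
the table config. Below is a minimal Python sketch of that rewrite, assuming
the table config is stored locally as meetupRsvp_realtime_table.json (a
hypothetical file name); the two factory class names are taken verbatim from
the doc text above:

  import json

  OLD_FACTORY = "org.apache.pinot.core.realtime.impl.kafka.KafkaConsumerFactory"
  NEW_FACTORY = "org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory"

  # The file name is a placeholder for wherever the table config is kept.
  with open("meetupRsvp_realtime_table.json") as f:
      config = json.load(f)

  stream_configs = config["tableIndexConfig"]["streamConfigs"]
  if stream_configs.get("stream.kafka.consumer.factory.class.name") == OLD_FACTORY:
      stream_configs["stream.kafka.consumer.factory.class.name"] = NEW_FACTORY

  with open("meetupRsvp_realtime_table.json", "w") as f:
      json.dump(config, f, indent=2)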





[incubator-pinot] branch kafka2.0_doc updated (14d281a -> 973b7c2)

2019-08-03 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch kafka2.0_doc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard 14d281a  Adding kafka 2.0 doc for using simple consumer
 new 973b7c2  Adding kafka 2.0 doc for using simple consumer

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (14d281a)
\
 N -- N -- N   refs/heads/kafka2.0_doc (973b7c2)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 docs/pluggable_streams.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)





[incubator-pinot] 01/01: Adding kafka 2.0 doc for using simple consumer

2019-08-03 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch kafka2.0_doc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git

commit 973b7c240e4a3b26f972e78226bbc28393339bd4
Author: Xiang Fu 
AuthorDate: Sat Aug 3 16:22:18 2019 +0800

Adding kafka 2.0 doc for using simple consumer
---
 docs/pluggable_streams.rst | 46 --
 1 file changed, 44 insertions(+), 2 deletions(-)

diff --git a/docs/pluggable_streams.rst b/docs/pluggable_streams.rst
index ff27cde..e4ca60e 100644
--- a/docs/pluggable_streams.rst
+++ b/docs/pluggable_streams.rst
@@ -162,7 +162,9 @@ How to build and release Pinot package with Kafka 2.x 
connector
 How to use Kafka 2.x connector
 --
 
-Below is a sample `streamConfigs` used to create a realtime table with Kafka 
Stream(High) level consumer:
+Below is a sample ``streamConfigs`` used to create a realtime table with Kafka 
Stream(High) level consumer.
+
+Kafka 2.x HLC consumer uses 
``org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory`` in config 
``stream.kafka.consumer.factory.class.name``.
 
 .. code-block:: none
 
@@ -177,10 +179,50 @@ Below is a sample `streamConfigs` used to create a 
realtime table with Kafka Str
 "stream.kafka.hlc.bootstrap.server": "localhost:19092"
   }
 
+Below is a sample table config used to create a realtime table with Kafka 
Partition(Low) level consumer:
+
+.. code-block:: none
+
+  {
+"tableName": "meetupRsvp",
+"tableType": "REALTIME",
+"segmentsConfig": {
+  "timeColumnName": "mtime",
+  "timeType": "MILLISECONDS",
+  "segmentPushType": "APPEND",
+  "segmentAssignmentStrategy": "BalanceNumSegmentAssignmentStrategy",
+  "schemaName": "meetupRsvp",
+  "replication": "1",
+  "replicasPerPartition": "1"
+},
+"tenants": {},
+"tableIndexConfig": {
+  "loadMode": "MMAP",
+  "streamConfigs": {
+"streamType": "kafka",
+"stream.kafka.consumer.type": "simple",
+"stream.kafka.topic.name": "meetupRSVPEvents",
+"stream.kafka.decoder.class.name": 
"org.apache.pinot.core.realtime.impl.kafka.KafkaJSONMessageDecoder",
+"stream.kafka.consumer.factory.class.name": 
"org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory",
+"stream.kafka.zk.broker.url": "localhost:2191/kafka",
+"stream.kafka.broker.list": "localhost:19092"
+  }
+},
+"metadata": {
+  "customConfigs": {}
+}
+  }
+
+Please note that,
+
+1. Config ``replicasPerPartition`` under ``segmentsConfig`` is required to 
specify table replication.
+2. Config ``stream.kafka.consumer.type`` should be specified as ``simple`` to 
use partition level consumer.
+3. Configs ``stream.kafka.zk.broker.url`` and ``stream.kafka.broker.list`` are 
required under ``tableIndexConfig.streamConfigs`` to provide kafka related 
information.
+
 Upgrade from Kafka 0.9 connector to Kafka 2.x connector
 ---
 
-* Update  table config:
+* Update table config for both high level and low level consumer:
 Update config: ``stream.kafka.consumer.factory.class.name`` from 
``org.apache.pinot.core.realtime.impl.kafka.KafkaConsumerFactory`` to 
``org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory``.
 
 * If using Stream(High) level consumer:





[incubator-pinot] branch kafka2.0_doc updated (c4dceab -> dcc02e3)

2019-08-03 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch kafka2.0_doc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard c4dceab  dding kafka 2.0 doc for using simple consumer
 new dcc02e3  Adding kafka 2.0 doc for using simple consumer

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (c4dceab)
\
 N -- N -- N   refs/heads/kafka2.0_doc (dcc02e3)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 docs/pluggable_streams.rst | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)





[incubator-pinot] 01/01: Adding kafka 2.0 doc for using simple consumer

2019-08-03 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch kafka2.0_doc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git

commit dcc02e38305d2ade4e18a6ba2e4a196779494abb
Author: Xiang Fu 
AuthorDate: Sat Aug 3 16:22:18 2019 +0800

Adding kafka 2.0 doc for using simple consumer
---
 docs/pluggable_streams.rst | 51 --
 1 file changed, 49 insertions(+), 2 deletions(-)

diff --git a/docs/pluggable_streams.rst b/docs/pluggable_streams.rst
index ff27cde..9ffaf54 100644
--- a/docs/pluggable_streams.rst
+++ b/docs/pluggable_streams.rst
@@ -162,7 +162,11 @@ How to build and release Pinot package with Kafka 2.x 
connector
 How to use Kafka 2.x connector
 --
 
-Below is a sample `streamConfigs` used to create a realtime table with Kafka 
Stream(High) level consumer:
+- **Use Kafka Stream(High) Level Consumer**
+
+Below is a sample ``streamConfigs`` used to create a realtime table with Kafka 
Stream(High) level consumer.
+
+Kafka 2.x HLC consumer uses 
``org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory`` in config 
``stream.kafka.consumer.factory.class.name``.
 
 .. code-block:: none
 
@@ -177,10 +181,53 @@ Below is a sample `streamConfigs` used to create a 
realtime table with Kafka Str
 "stream.kafka.hlc.bootstrap.server": "localhost:19092"
   }
 
+
+- **Use Kafka Partition(Low) Level Consumer**
+
+Below is a sample table config used to create a realtime table with Kafka 
Partition(Low) level consumer:
+
+.. code-block:: none
+
+  {
+"tableName": "meetupRsvp",
+"tableType": "REALTIME",
+"segmentsConfig": {
+  "timeColumnName": "mtime",
+  "timeType": "MILLISECONDS",
+  "segmentPushType": "APPEND",
+  "segmentAssignmentStrategy": "BalanceNumSegmentAssignmentStrategy",
+  "schemaName": "meetupRsvp",
+  "replication": "1",
+  "replicasPerPartition": "1"
+},
+"tenants": {},
+"tableIndexConfig": {
+  "loadMode": "MMAP",
+  "streamConfigs": {
+"streamType": "kafka",
+"stream.kafka.consumer.type": "simple",
+"stream.kafka.topic.name": "meetupRSVPEvents",
+"stream.kafka.decoder.class.name": 
"org.apache.pinot.core.realtime.impl.kafka.KafkaJSONMessageDecoder",
+"stream.kafka.consumer.factory.class.name": 
"org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory",
+"stream.kafka.zk.broker.url": "localhost:2191/kafka",
+"stream.kafka.broker.list": "localhost:19092"
+  }
+},
+"metadata": {
+  "customConfigs": {}
+}
+  }
+
+Please note:
+
+1. Config ``replicasPerPartition`` under ``segmentsConfig`` is required to 
specify table replication.
+#. Config ``stream.kafka.consumer.type`` should be specified as ``simple`` to 
use partition level consumer.
+#. Configs ``stream.kafka.zk.broker.url`` and ``stream.kafka.broker.list`` are 
required under ``tableIndexConfig.streamConfigs`` to provide kafka related 
information.
+
 Upgrade from Kafka 0.9 connector to Kafka 2.x connector
 ---
 
-* Update  table config:
+* Update table config for both high level and low level consumer:
 Update config: ``stream.kafka.consumer.factory.class.name`` from 
``org.apache.pinot.core.realtime.impl.kafka.KafkaConsumerFactory`` to 
``org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory``.
 
 * If using Stream(High) level consumer:


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch kafka2.0_doc updated (f7a1bd7 -> c4dceab)

2019-08-03 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch kafka2.0_doc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard f7a1bd7  dding kafka 2.0 doc for using simple consumer
 new c4dceab  dding kafka 2.0 doc for using simple consumer

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (f7a1bd7)
\
 N -- N -- N   refs/heads/kafka2.0_doc (c4dceab)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 docs/pluggable_streams.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)





[incubator-pinot] branch kafka2.0_doc updated (95832d0 -> 80cc045)

2019-08-03 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch kafka2.0_doc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard 95832d0  Adding kafka 2.0 doc for using simple consumer
 new 80cc045  Adding kafka 2.0 doc for using simple consumer

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (95832d0)
\
 N -- N -- N   refs/heads/kafka2.0_doc (80cc045)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 docs/pluggable_streams.rst | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)





[incubator-pinot] 01/01: Adding kafka 2.0 doc for using simple consumer

2019-08-03 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch kafka2.0_doc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git

commit 80cc04544b40c8140654ffef909569f4bd3c27d5
Author: Xiang Fu 
AuthorDate: Sat Aug 3 16:22:18 2019 +0800

Adding kafka 2.0 doc for using simple consumer
---
 docs/pluggable_streams.rst | 46 --
 1 file changed, 44 insertions(+), 2 deletions(-)

diff --git a/docs/pluggable_streams.rst b/docs/pluggable_streams.rst
index ff27cde..4a7da31 100644
--- a/docs/pluggable_streams.rst
+++ b/docs/pluggable_streams.rst
@@ -162,7 +162,9 @@ How to build and release Pinot package with Kafka 2.x 
connector
 How to use Kafka 2.x connector
 --
 
-Below is a sample `streamConfigs` used to create a realtime table with Kafka 
Stream(High) level consumer:
+Below is a sample ``streamConfigs`` used to create a realtime table with Kafka 
Stream(High) level consumer.
+
+Similar to Kafka 0.9 Consumer, the only change is that Kafka 2.x use 
``org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory`` in config 
``stream.kafka.consumer.factory.class.name``.
 
 .. code-block:: none
 
@@ -177,10 +179,50 @@ Below is a sample `streamConfigs` used to create a 
realtime table with Kafka Str
 "stream.kafka.hlc.bootstrap.server": "localhost:19092"
   }
 
+Below is a sample table config used to create a realtime table with Kafka 
Partition(Low) level consumer:
+
+.. code-block:: none
+
+  {
+"tableName": "meetupRsvp",
+"tableType": "REALTIME",
+"segmentsConfig": {
+  "timeColumnName": "mtime",
+  "timeType": "MILLISECONDS",
+  "segmentPushType": "APPEND",
+  "segmentAssignmentStrategy": "BalanceNumSegmentAssignmentStrategy",
+  "schemaName": "meetupRsvp",
+  "replication": "1",
+  "replicasPerPartition": "1"
+},
+"tenants": {},
+"tableIndexConfig": {
+  "loadMode": "MMAP",
+  "streamConfigs": {
+"streamType": "kafka",
+"stream.kafka.consumer.type": "simple",
+"stream.kafka.topic.name": "meetupRSVPEvents",
+"stream.kafka.decoder.class.name": 
"org.apache.pinot.core.realtime.impl.kafka.KafkaJSONMessageDecoder",
+"stream.kafka.consumer.factory.class.name": 
"org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory",
+"stream.kafka.zk.broker.url": "localhost:2191/kafka",
+"stream.kafka.broker.list": "localhost:19092"
+  }
+},
+"metadata": {
+  "customConfigs": {}
+}
+  }
+
+Please note that,
+
+1. Config ``replicasPerPartition`` under ``segmentsConfig`` is required to 
specify table replication.
+2. Config ``stream.kafka.consumer.type`` should be specified as ``simple`` to 
use partition level consumer.
+3. Configs ``stream.kafka.zk.broker.url`` and ``stream.kafka.broker.list`` are 
required under ``tableIndexConfig.streamConfigs`` to provide kafka related 
information.
+
 Upgrade from Kafka 0.9 connector to Kafka 2.x connector
 ---
 
-* Update  table config:
+* Update table config for both high level and low level consumer:
 Update config: ``stream.kafka.consumer.factory.class.name`` from 
``org.apache.pinot.core.realtime.impl.kafka.KafkaConsumerFactory`` to 
``org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory``.
 
 * If using Stream(High) level consumer:





[incubator-pinot] branch kafka2.0_doc updated (24bed39 -> 95832d0)

2019-08-03 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch kafka2.0_doc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard 24bed39  Adding kafka 2.0 doc for using simple consumer
 new 95832d0  Adding kafka 2.0 doc for using simple consumer

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (24bed39)
\
 N -- N -- N   refs/heads/kafka2.0_doc (95832d0)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 docs/pluggable_streams.rst | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)





[incubator-pinot] 01/01: Adding kafka 2.0 doc for using simple consumer

2019-08-03 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch kafka2.0_doc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git

commit 95832d037b42c6b13e06c89fd199afddad533a07
Author: Xiang Fu 
AuthorDate: Sat Aug 3 16:22:18 2019 +0800

Adding kafka 2.0 doc for using simple consumer
---
 docs/pluggable_streams.rst | 45 +++--
 1 file changed, 43 insertions(+), 2 deletions(-)

diff --git a/docs/pluggable_streams.rst b/docs/pluggable_streams.rst
index ff27cde..fb98c9d 100644
--- a/docs/pluggable_streams.rst
+++ b/docs/pluggable_streams.rst
@@ -162,7 +162,8 @@ How to build and release Pinot package with Kafka 2.x 
connector
 How to use Kafka 2.x connector
 --
 
-Below is a sample `streamConfigs` used to create a realtime table with Kafka 
Stream(High) level consumer:
+Similar to Kafka 0.9 Consumer, below is a sample ``streamConfigs`` used to 
create a realtime table with Kafka Stream(High) level consumer.
+The only change is that Kafka 2.x use 
``org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory`` in config 
``stream.kafka.consumer.factory.class.name``.
 
 .. code-block:: none
 
@@ -177,10 +178,50 @@ Below is a sample `streamConfigs` used to create a 
realtime table with Kafka Str
 "stream.kafka.hlc.bootstrap.server": "localhost:19092"
   }
 
+Below is a sample table config used to create a realtime table with Kafka 
Partition(Low) level consumer:
+
+.. code-block:: none
+
+  {
+"tableName": "meetupRsvp",
+"tableType": "REALTIME",
+"segmentsConfig": {
+  "timeColumnName": "mtime",
+  "timeType": "MILLISECONDS",
+  "segmentPushType": "APPEND",
+  "segmentAssignmentStrategy": "BalanceNumSegmentAssignmentStrategy",
+  "schemaName": "meetupRsvp",
+  "replication": "1",
+  "replicasPerPartition": "1"
+},
+"tenants": {},
+"tableIndexConfig": {
+  "loadMode": "MMAP",
+  "streamConfigs": {
+"streamType": "kafka",
+"stream.kafka.consumer.type": "simple",
+"stream.kafka.topic.name": "meetupRSVPEvents",
+"stream.kafka.decoder.class.name": 
"org.apache.pinot.core.realtime.impl.kafka.KafkaJSONMessageDecoder",
+"stream.kafka.consumer.factory.class.name": 
"org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory",
+"stream.kafka.zk.broker.url": "localhost:2191/kafka",
+"stream.kafka.broker.list": "localhost:19092"
+  }
+},
+"metadata": {
+  "customConfigs": {}
+}
+  }
+
+Please note that,
+
+1. Config ``replicasPerPartition`` under ``segmentsConfig`` is required to 
specify table replication.
+2. Config ``stream.kafka.consumer.type`` should be specified as ``simple`` to 
use partition level consumer.
+3. Configs ``stream.kafka.zk.broker.url`` and ``stream.kafka.broker.list`` are 
required under ``tableIndexConfig.streamConfigs`` to provide kafka related 
information.
+
 Upgrade from Kafka 0.9 connector to Kafka 2.x connector
 ---
 
-* Update  table config:
+* Update table config for both high level and low level consumer:
 Update config: ``stream.kafka.consumer.factory.class.name`` from 
``org.apache.pinot.core.realtime.impl.kafka.KafkaConsumerFactory`` to 
``org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory``.
 
 * If using Stream(High) level consumer:





[incubator-pinot] branch kafka_2.0 deleted (was 1ad264d)

2019-08-02 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch kafka_2.0
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 was 1ad264d  Adding more docs

The revisions that were on this branch are still contained in
other references; therefore, this change does not discard any commits
from the repository.





[incubator-pinot] 01/01: Adding kafka 2.0 doc for using simple consumer

2019-08-03 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch kafka2.0_doc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git

commit 24bed398b104492ebe58c402195a4fc067ca995e
Author: Xiang Fu 
AuthorDate: Sat Aug 3 16:22:18 2019 +0800

Adding kafka 2.0 doc for using simple consumer
---
 docs/pluggable_streams.rst | 42 +-
 1 file changed, 41 insertions(+), 1 deletion(-)

diff --git a/docs/pluggable_streams.rst b/docs/pluggable_streams.rst
index ff27cde..8a90f59 100644
--- a/docs/pluggable_streams.rst
+++ b/docs/pluggable_streams.rst
@@ -177,10 +177,50 @@ Below is a sample `streamConfigs` used to create a 
realtime table with Kafka Str
 "stream.kafka.hlc.bootstrap.server": "localhost:19092"
   }
 
+Below is a sample table config used to create a realtime table with Kafka 
Partition(Low) level consumer:
+
+.. code-block:: none
+
+  {
+"tableName": "meetupRsvp",
+"tableType": "REALTIME",
+"segmentsConfig": {
+  "timeColumnName": "mtime",
+  "timeType": "MILLISECONDS",
+  "segmentPushType": "APPEND",
+  "segmentAssignmentStrategy": "BalanceNumSegmentAssignmentStrategy",
+  "schemaName": "meetupRsvp",
+  "replication": "1",
+  "replicasPerPartition": "1"
+},
+"tenants": {},
+"tableIndexConfig": {
+  "loadMode": "MMAP",
+  "streamConfigs": {
+"streamType": "kafka",
+"stream.kafka.consumer.type": "simple",
+"stream.kafka.topic.name": "meetupRSVPEvents",
+"stream.kafka.decoder.class.name": 
"org.apache.pinot.core.realtime.impl.kafka.KafkaJSONMessageDecoder",
+"stream.kafka.consumer.factory.class.name": 
"org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory",
+"stream.kafka.zk.broker.url": "localhost:2191/kafka",
+"stream.kafka.broker.list": "localhost:19092"
+  }
+},
+"metadata": {
+  "customConfigs": {}
+}
+  }
+
+Please note that,
+
+1. Config ``replicasPerPartition`` under ``segmentsConfig`` is required to 
specify table replication.
+2. Config ``stream.kafka.consumer.type`` should be specified as ``simple`` to 
use partition level consumer.
+3. Configs ``stream.kafka.zk.broker.url`` and ``stream.kafka.broker.list`` are 
required under ``tableIndexConfig.streamConfigs`` to provide kafka related 
information.
+
 Upgrade from Kafka 0.9 connector to Kafka 2.x connector
 ---
 
-* Update  table config:
+* Update table config for both high level and low level consumer:
 Update config: ``stream.kafka.consumer.factory.class.name`` from 
``org.apache.pinot.core.realtime.impl.kafka.KafkaConsumerFactory`` to 
``org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory``.
 
 * If using Stream(High) level consumer:





[incubator-pinot] branch kafka2.0_doc updated (59ef0af -> 24bed39)

2019-08-03 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch kafka2.0_doc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard 59ef0af  Adding kafka 2.0 doc for using simple consumer
 new 24bed39  Adding kafka 2.0 doc for using simple consumer

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (59ef0af)
\
 N -- N -- N   refs/heads/kafka2.0_doc (24bed39)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 docs/pluggable_streams.rst | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)





[incubator-pinot] 01/01: Adding kafka 2.0 doc for using simple consumer

2019-08-03 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch kafka2.0_doc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git

commit 82a4d9453bdb96471f4493b8528d6978188cc168
Author: Xiang Fu 
AuthorDate: Sat Aug 3 16:22:18 2019 +0800

Adding kafka 2.0 doc for using simple consumer
---
 docs/pluggable_streams.rst | 51 --
 1 file changed, 49 insertions(+), 2 deletions(-)

diff --git a/docs/pluggable_streams.rst b/docs/pluggable_streams.rst
index ff27cde..c7bf9dc 100644
--- a/docs/pluggable_streams.rst
+++ b/docs/pluggable_streams.rst
@@ -162,7 +162,11 @@ How to build and release Pinot package with Kafka 2.x 
connector
 How to use Kafka 2.x connector
 --
 
-Below is a sample `streamConfigs` used to create a realtime table with Kafka 
Stream(High) level consumer:
+## Use Kafka Stream(High) Level Consumer ##
+
+Below is a sample ``streamConfigs`` used to create a realtime table with Kafka 
Stream(High) level consumer.
+
+Kafka 2.x HLC consumer uses 
``org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory`` in config 
``stream.kafka.consumer.factory.class.name``.
 
 .. code-block:: none
 
@@ -177,10 +181,53 @@ Below is a sample `streamConfigs` used to create a 
realtime table with Kafka Str
 "stream.kafka.hlc.bootstrap.server": "localhost:19092"
   }
 
+
+## Use Kafka Partition(Low) Level Consumer ##
+
+Below is a sample table config used to create a realtime table with Kafka 
Partition(Low) level consumer:
+
+.. code-block:: none
+
+  {
+"tableName": "meetupRsvp",
+"tableType": "REALTIME",
+"segmentsConfig": {
+  "timeColumnName": "mtime",
+  "timeType": "MILLISECONDS",
+  "segmentPushType": "APPEND",
+  "segmentAssignmentStrategy": "BalanceNumSegmentAssignmentStrategy",
+  "schemaName": "meetupRsvp",
+  "replication": "1",
+  "replicasPerPartition": "1"
+},
+"tenants": {},
+"tableIndexConfig": {
+  "loadMode": "MMAP",
+  "streamConfigs": {
+"streamType": "kafka",
+"stream.kafka.consumer.type": "simple",
+"stream.kafka.topic.name": "meetupRSVPEvents",
+"stream.kafka.decoder.class.name": 
"org.apache.pinot.core.realtime.impl.kafka.KafkaJSONMessageDecoder",
+"stream.kafka.consumer.factory.class.name": 
"org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory",
+"stream.kafka.zk.broker.url": "localhost:2191/kafka",
+"stream.kafka.broker.list": "localhost:19092"
+  }
+},
+"metadata": {
+  "customConfigs": {}
+}
+  }
+
+Please note that,
+
+1. Config ``replicasPerPartition`` under ``segmentsConfig`` is required to 
specify table replication.
+2. Config ``stream.kafka.consumer.type`` should be specified as ``simple`` to 
use partition level consumer.
+3. Configs ``stream.kafka.zk.broker.url`` and ``stream.kafka.broker.list`` are 
required under ``tableIndexConfig.streamConfigs`` to provide kafka related 
information.
+
 Upgrade from Kafka 0.9 connector to Kafka 2.x connector
 ---
 
-* Update  table config:
+* Update table config for both high level and low level consumer:
 Update config: ``stream.kafka.consumer.factory.class.name`` from 
``org.apache.pinot.core.realtime.impl.kafka.KafkaConsumerFactory`` to 
``org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory``.
 
 * If using Stream(High) level consumer:





[incubator-pinot] branch kafka2.0_doc updated (973b7c2 -> 82a4d94)

2019-08-03 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch kafka2.0_doc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard 973b7c2  Adding kafka 2.0 doc for using simple consumer
 new 82a4d94  Adding kafka 2.0 doc for using simple consumer

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (973b7c2)
\
 N -- N -- N   refs/heads/kafka2.0_doc (82a4d94)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 docs/pluggable_streams.rst | 5 +
 1 file changed, 5 insertions(+)


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] 01/01: Adding kafka 2.0 doc for using simple consumer

2019-08-03 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch kafka2.0_doc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git

commit 811649a95942ea4e09cfb34ca067350df2d9838e
Author: Xiang Fu 
AuthorDate: Sat Aug 3 16:22:18 2019 +0800

Adding kafka 2.0 doc for using simple consumer
---
 docs/pluggable_streams.rst | 39 +++
 1 file changed, 39 insertions(+)

diff --git a/docs/pluggable_streams.rst b/docs/pluggable_streams.rst
index ff27cde..b9d0292 100644
--- a/docs/pluggable_streams.rst
+++ b/docs/pluggable_streams.rst
@@ -177,6 +177,45 @@ Below is a sample `streamConfigs` used to create a 
realtime table with Kafka Str
 "stream.kafka.hlc.bootstrap.server": "localhost:19092"
   }
 
+Below is a sample table config used to create a realtime table with Kafka 
Partition(Low) level consumer:
+
+.. code-block:: none
+
+  {
+"tableName": "meetupRsvp",
+"tableType": "REALTIME",
+"segmentsConfig": {
+  "timeColumnName": "mtime",
+  "timeType": "MILLISECONDS",
+  "segmentPushType": "APPEND",
+  "segmentAssignmentStrategy": "BalanceNumSegmentAssignmentStrategy",
+  "schemaName": "meetupRsvp",
+  "replication": "1",
+  "replicasPerPartition": "1"
+},
+"tenants": {},
+"tableIndexConfig": {
+  "loadMode": "MMAP",
+  "streamConfigs": {
+"streamType": "kafka",
+"stream.kafka.consumer.type": "simple",
+"stream.kafka.topic.name": "meetupRSVPEvents",
+"stream.kafka.decoder.class.name": 
"org.apache.pinot.core.realtime.impl.kafka.KafkaJSONMessageDecoder",
+"stream.kafka.consumer.factory.class.name": 
"org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory",
+"stream.kafka.zk.broker.url": "localhost:2191/kafka",
+"stream.kafka.broker.list": "localhost:19092"
+  }
+},
+"metadata": {
+  "customConfigs": {}
+}
+  }
+
+Please note that,
+* `replicasPerPartition` under `segmentsConfig` config is required to specify 
table replication.
+* `stream.kafka.consumer.type` should be specified as `simple` to use 
partition level consumer.
+* `stream.kafka.zk.broker.url` and `stream.kafka.broker.list` are required 
under `tableIndexConfig.streamConfigs` to provide kafka related information.
+
 Upgrade from Kafka 0.9 connector to Kafka 2.x connector
 -------------------------------------------------------
 


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] 01/01: Adding kafka 2.0 doc for using simple consumer

2019-08-03 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch kafka2.0_doc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git

commit c4dceab933ac2529ad67cc76ff12a9fb5d161b3c
Author: Xiang Fu 
AuthorDate: Sat Aug 3 16:22:18 2019 +0800

Adding kafka 2.0 doc for using simple consumer
---
 docs/pluggable_streams.rst | 51 --
 1 file changed, 49 insertions(+), 2 deletions(-)

diff --git a/docs/pluggable_streams.rst b/docs/pluggable_streams.rst
index ff27cde..b07c38b 100644
--- a/docs/pluggable_streams.rst
+++ b/docs/pluggable_streams.rst
@@ -162,7 +162,11 @@ How to build and release Pinot package with Kafka 2.x 
connector
 How to use Kafka 2.x connector
 ------------------------------
 
-Below is a sample `streamConfigs` used to create a realtime table with Kafka 
Stream(High) level consumer:
+- **Use Kafka Stream(High) Level Consumer**
+
+Below is a sample ``streamConfigs`` used to create a realtime table with Kafka 
Stream(High) level consumer.
+
+Kafka 2.x HLC consumer uses 
``org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory`` in config 
``stream.kafka.consumer.factory.class.name``.
 
 .. code-block:: none
 
@@ -177,10 +181,53 @@ Below is a sample `streamConfigs` used to create a 
realtime table with Kafka Str
 "stream.kafka.hlc.bootstrap.server": "localhost:19092"
   }
 
+
+- **Use Kafka Partition(Low) Level Consumer**
+
+Below is a sample table config used to create a realtime table with Kafka 
Partition(Low) level consumer:
+
+.. code-block:: none
+
+  {
+"tableName": "meetupRsvp",
+"tableType": "REALTIME",
+"segmentsConfig": {
+  "timeColumnName": "mtime",
+  "timeType": "MILLISECONDS",
+  "segmentPushType": "APPEND",
+  "segmentAssignmentStrategy": "BalanceNumSegmentAssignmentStrategy",
+  "schemaName": "meetupRsvp",
+  "replication": "1",
+  "replicasPerPartition": "1"
+},
+"tenants": {},
+"tableIndexConfig": {
+  "loadMode": "MMAP",
+  "streamConfigs": {
+"streamType": "kafka",
+"stream.kafka.consumer.type": "simple",
+"stream.kafka.topic.name": "meetupRSVPEvents",
+"stream.kafka.decoder.class.name": 
"org.apache.pinot.core.realtime.impl.kafka.KafkaJSONMessageDecoder",
+"stream.kafka.consumer.factory.class.name": 
"org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory",
+"stream.kafka.zk.broker.url": "localhost:2191/kafka",
+"stream.kafka.broker.list": "localhost:19092"
+  }
+},
+"metadata": {
+  "customConfigs": {}
+}
+  }
+
+Please note that,
+
+1. Config ``replicasPerPartition`` under ``segmentsConfig`` is required to 
specify table replication.
+2. Config ``stream.kafka.consumer.type`` should be specified as ``simple`` to 
use partition level consumer.
+3. Configs ``stream.kafka.zk.broker.url`` and ``stream.kafka.broker.list`` are 
required under ``tableIndexConfig.streamConfigs`` to provide kafka related 
information.
+
 Upgrade from Kafka 0.9 connector to Kafka 2.x connector
 -------------------------------------------------------
 
-* Update  table config:
+* Update table config for both high level and low level consumer:
 Update config: ``stream.kafka.consumer.factory.class.name`` from 
``org.apache.pinot.core.realtime.impl.kafka.KafkaConsumerFactory`` to 
``org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory``.
 
 * If using Stream(High) level consumer:


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch kafka2.0_doc updated (82a4d94 -> f7a1bd7)

2019-08-03 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch kafka2.0_doc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard 82a4d94  Adding kafka 2.0 doc for using simple consumer
 new f7a1bd7  Adding kafka 2.0 doc for using simple consumer

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (82a4d94)
\
 N -- N -- N   refs/heads/kafka2.0_doc (f7a1bd7)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 docs/pluggable_streams.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] 01/01: Adding kafka 2.0 doc for using simple consumer

2019-08-03 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch kafka2.0_doc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git

commit f7a1bd7bdbe2dd1c9843e3287e657d24c67f5506
Author: Xiang Fu 
AuthorDate: Sat Aug 3 16:22:18 2019 +0800

Adding kafka 2.0 doc for using simple consumer
---
 docs/pluggable_streams.rst | 51 --
 1 file changed, 49 insertions(+), 2 deletions(-)

diff --git a/docs/pluggable_streams.rst b/docs/pluggable_streams.rst
index ff27cde..2088dfa 100644
--- a/docs/pluggable_streams.rst
+++ b/docs/pluggable_streams.rst
@@ -162,7 +162,11 @@ How to build and release Pinot package with Kafka 2.x 
connector
 How to use Kafka 2.x connector
 ------------------------------
 
-Below is a sample `streamConfigs` used to create a realtime table with Kafka 
Stream(High) level consumer:
+- Use Kafka Stream(High) Level Consumer
+
+Below is a sample ``streamConfigs`` used to create a realtime table with Kafka 
Stream(High) level consumer.
+
+Kafka 2.x HLC consumer uses 
``org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory`` in config 
``stream.kafka.consumer.factory.class.name``.
 
 .. code-block:: none
 
@@ -177,10 +181,53 @@ Below is a sample `streamConfigs` used to create a 
realtime table with Kafka Str
 "stream.kafka.hlc.bootstrap.server": "localhost:19092"
   }
 
+
+- Use Kafka Partition(Low) Level Consumer
+
+Below is a sample table config used to create a realtime table with Kafka 
Partition(Low) level consumer:
+
+.. code-block:: none
+
+  {
+"tableName": "meetupRsvp",
+"tableType": "REALTIME",
+"segmentsConfig": {
+  "timeColumnName": "mtime",
+  "timeType": "MILLISECONDS",
+  "segmentPushType": "APPEND",
+  "segmentAssignmentStrategy": "BalanceNumSegmentAssignmentStrategy",
+  "schemaName": "meetupRsvp",
+  "replication": "1",
+  "replicasPerPartition": "1"
+},
+"tenants": {},
+"tableIndexConfig": {
+  "loadMode": "MMAP",
+  "streamConfigs": {
+"streamType": "kafka",
+"stream.kafka.consumer.type": "simple",
+"stream.kafka.topic.name": "meetupRSVPEvents",
+"stream.kafka.decoder.class.name": 
"org.apache.pinot.core.realtime.impl.kafka.KafkaJSONMessageDecoder",
+"stream.kafka.consumer.factory.class.name": 
"org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory",
+"stream.kafka.zk.broker.url": "localhost:2191/kafka",
+"stream.kafka.broker.list": "localhost:19092"
+  }
+},
+"metadata": {
+  "customConfigs": {}
+}
+  }
+
+Please note that,
+
+1. Config ``replicasPerPartition`` under ``segmentsConfig`` is required to 
specify table replication.
+2. Config ``stream.kafka.consumer.type`` should be specified as ``simple`` to 
use partition level consumer.
+3. Configs ``stream.kafka.zk.broker.url`` and ``stream.kafka.broker.list`` are 
required under ``tableIndexConfig.streamConfigs`` to provide kafka related 
information.
+
 Upgrade from Kafka 0.9 connector to Kafka 2.x connector
 -------------------------------------------------------
 
-* Update  table config:
+* Update table config for both high level and low level consumer:
 Update config: ``stream.kafka.consumer.factory.class.name`` from 
``org.apache.pinot.core.realtime.impl.kafka.KafkaConsumerFactory`` to 
``org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory``.
 
 * If using Stream(High) level consumer:


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot-site] branch adding_weibo_logo created (now 39e41cd)

2019-08-11 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch adding_weibo_logo
in repository https://gitbox.apache.org/repos/asf/incubator-pinot-site.git.


  at 39e41cd  adding Weibo logo

This branch includes the following new commits:

 new 39e41cd  adding Weibo logo

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.



-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot-site] 01/01: adding Weibo logo

2019-08-11 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch adding_weibo_logo
in repository https://gitbox.apache.org/repos/asf/incubator-pinot-site.git

commit 39e41cdf8b0fa5f9f1147062f96407a764ce69f9
Author: Xiang Fu 
AuthorDate: Sun Aug 11 21:32:24 2019 -0700

adding Weibo logo
---
 assets/images/Weibo_Logo.png | Bin 0 -> 9792 bytes
 index.html   |   7 ---
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/assets/images/Weibo_Logo.png b/assets/images/Weibo_Logo.png
new file mode 100644
index 000..bc0e6f3
Binary files /dev/null and b/assets/images/Weibo_Logo.png differ
diff --git a/index.html b/index.html
index fdb2724..95ac1ee 100644
--- a/index.html
+++ b/index.html
@@ -174,9 +174,10 @@
 Powered By Pinot
 
 <a href="http://www.linkedin.com/" style="margin-left: 100px;" target="_blank">
-<a href="http://www.slack.com/" style="margin-left: 150px;" target="_blank">
-<a href="http://www.uber.com/" style="margin-left: 150px;" target="_blank">
-<a href="https://products.office.com/en-IN/microsoft-teams/group-chat-software/" style="margin-left: 150px;" target="_blank">
+<a href="http://www.slack.com/" style="margin-left: 100px;" target="_blank">
+<a href="http://www.uber.com/" style="margin-left: 100px;" target="_blank">
+<a href="https://products.office.com/en-IN/microsoft-teams/group-chat-software/" style="margin-left: 100px;" target="_blank">
+<a href="https://weibo.com/" style="margin-left: 100px;" target="_blank">
 
 
 


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot-site] branch asf-site updated (c8cc73a -> d5a32ae)

2019-08-11 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-pinot-site.git.


from c8cc73a  Merge pull request #12 from apache/fix-url
 add 39e41cd  adding Weibo logo
 new d5a32ae  Merge pull request #13 from apache/adding_weibo_logo

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 assets/images/Weibo_Logo.png | Bin 0 -> 9792 bytes
 index.html   |   7 ---
 2 files changed, 4 insertions(+), 3 deletions(-)
 create mode 100644 assets/images/Weibo_Logo.png


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot-site] branch adding_weibo_logo deleted (was 39e41cd)

2019-08-11 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch adding_weibo_logo
in repository https://gitbox.apache.org/repos/asf/incubator-pinot-site.git.


 was 39e41cd  adding Weibo logo

The revisions that were on this branch are still contained in
other references; therefore, this change does not discard any commits
from the repository.


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch update_selection_query_1 updated (ecac2e0 -> 905ada5)

2019-08-16 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch update_selection_query_1
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard ecac2e0  update column data type cast during inter merge
 add 905ada5  update column data type cast during inter merge

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (ecac2e0)
\
 N -- N -- N   refs/heads/update_selection_query_1 (905ada5)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 .../query/selection/SelectionOperatorUtils.java|  4 ++
 .../selection/SelectionOperatorServiceTest.java| 46 +++---
 2 files changed, 26 insertions(+), 24 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch update_selection_query_1 created (now ecac2e0)

2019-08-16 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch update_selection_query_1
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


  at ecac2e0  update column data type cast during inter merge

This branch includes the following new commits:

 new ecac2e0  update column data type cast during inter merge

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.



-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] 01/01: update column data type cast during inter merge

2019-08-16 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch update_selection_query_1
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git

commit ecac2e0b57e3b583ee7827adc5a432562217a2c6
Author: Xiang Fu 
AuthorDate: Thu Aug 15 22:19:08 2019 -0700

update column data type cast during inter merge
---
 .../pinot/core/operator/transform/TransformBlockDataFetcher.java | 5 +
 .../apache/pinot/core/query/selection/SelectionOperatorUtils.java| 5 -
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git 
a/pinot-core/src/main/java/org/apache/pinot/core/operator/transform/TransformBlockDataFetcher.java
 
b/pinot-core/src/main/java/org/apache/pinot/core/operator/transform/TransformBlockDataFetcher.java
index 54e83be..b5c8d99 100644
--- 
a/pinot-core/src/main/java/org/apache/pinot/core/operator/transform/TransformBlockDataFetcher.java
+++ 
b/pinot-core/src/main/java/org/apache/pinot/core/operator/transform/TransformBlockDataFetcher.java
@@ -199,9 +199,6 @@ class DictionaryBasedSVValueFetcher implements Fetcher {
   }
 
   public Serializable getValue(int docId) {
-if (_dataType.equals(FieldSpec.DataType.BYTES)) {
-  return 
BytesUtils.toHexString(_dictionary.getBytesValue(_dictionaryIds[docId]));
-}
 return (Serializable) _dictionary.get(_dictionaryIds[docId]);
   }
 }
@@ -337,4 +334,4 @@ class DictionaryBasedMVBytesValueFetcher implements Fetcher 
{
 }
 return values;
   }
-}
\ No newline at end of file
+}
diff --git 
a/pinot-core/src/main/java/org/apache/pinot/core/query/selection/SelectionOperatorUtils.java
 
b/pinot-core/src/main/java/org/apache/pinot/core/query/selection/SelectionOperatorUtils.java
index bbfa42f..0fe09a5 100644
--- 
a/pinot-core/src/main/java/org/apache/pinot/core/query/selection/SelectionOperatorUtils.java
+++ 
b/pinot-core/src/main/java/org/apache/pinot/core/query/selection/SelectionOperatorUtils.java
@@ -39,6 +39,7 @@ import org.apache.pinot.common.request.Selection;
 import org.apache.pinot.common.request.SelectionSort;
 import org.apache.pinot.common.response.ServerInstance;
 import org.apache.pinot.common.response.broker.SelectionResults;
+import org.apache.pinot.common.utils.BytesUtils;
 import org.apache.pinot.common.utils.DataSchema;
 import org.apache.pinot.common.utils.DataTable;
 import org.apache.pinot.core.common.DataSourceMetadata;
@@ -248,9 +249,11 @@ public class SelectionOperatorUtils {
 dataTableBuilder.setColumn(i, ((Number) 
columnValue).doubleValue());
 break;
   case STRING:
-  case BYTES: // BYTES are already converted to String for Selection, 
before reaching this layer.
 dataTableBuilder.setColumn(i, ((String) columnValue));
 break;
+  case BYTES:
+dataTableBuilder.setColumn(i, BytesUtils.toHexString((byte[]) 
columnValue));
+break;
 
   // Multi-value column.
   case INT_ARRAY:
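
The patch above moves the BYTES-to-hex conversion out of the dictionary-based fetcher and into the selection merge path, so raw byte[] values survive until the data table is built and are hex-encoded exactly once. A minimal Java sketch of the kind of conversion BytesUtils.toHexString performs (a hypothetical stand-in; the real utility's implementation and output casing may differ):

  // Illustrative only: encode a byte[] selection cell as a hex string
  // before it is written into the data table.
  static String toHexString(byte[] bytes) {
    StringBuilder sb = new StringBuilder(bytes.length * 2);
    for (byte b : bytes) {
      sb.append(String.format("%02x", b & 0xff)); // two hex digits per byte
    }
    return sb.toString();
  }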


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch update_selection_query created (now 39c91cf)

2019-08-15 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch update_selection_query
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


  at 39c91cf  update column data type cast during inter merge

This branch includes the following new commits:

 new 39c91cf  update column data type cast during inter merge

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.



-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] 01/01: update column data type cast during inter merge

2019-08-15 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch update_selection_query
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git

commit 39c91cf7aa6f7bd3d305576474783779f8fc8363
Author: Xiang Fu 
AuthorDate: Thu Aug 15 22:19:08 2019 -0700

update column data type cast during inter merge
---
 .../apache/pinot/core/query/selection/SelectionOperatorUtils.java  | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git 
a/pinot-core/src/main/java/org/apache/pinot/core/query/selection/SelectionOperatorUtils.java
 
b/pinot-core/src/main/java/org/apache/pinot/core/query/selection/SelectionOperatorUtils.java
index bbfa42f..42d3e64 100644
--- 
a/pinot-core/src/main/java/org/apache/pinot/core/query/selection/SelectionOperatorUtils.java
+++ 
b/pinot-core/src/main/java/org/apache/pinot/core/query/selection/SelectionOperatorUtils.java
@@ -39,6 +39,7 @@ import org.apache.pinot.common.request.Selection;
 import org.apache.pinot.common.request.SelectionSort;
 import org.apache.pinot.common.response.ServerInstance;
 import org.apache.pinot.common.response.broker.SelectionResults;
+import org.apache.pinot.common.utils.BytesUtils;
 import org.apache.pinot.common.utils.DataSchema;
 import org.apache.pinot.common.utils.DataTable;
 import org.apache.pinot.core.common.DataSourceMetadata;
@@ -249,7 +250,11 @@ public class SelectionOperatorUtils {
 break;
   case STRING:
   case BYTES: // BYTES are already converted to String for Selection, 
before reaching this layer.
-dataTableBuilder.setColumn(i, ((String) columnValue));
+if (columnValue instanceof byte[]) {
+  dataTableBuilder.setColumn(i, BytesUtils.toHexString((byte[]) 
columnValue));
+} else {
+  dataTableBuilder.setColumn(i, ((String) columnValue));
+}
 break;
 
   // Multi-value column.


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch master updated: Adding config to let controller/broker/server set hostname (#4517)

2019-08-10 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git


The following commit(s) were added to refs/heads/master by this push:
 new 3236e40  Adding config to let controller/broker/server set hostname (#4517)
3236e40 is described below

commit 3236e40c2f8d3d603c82e706db41bf2c0c2eaf68
Author: Xiang Fu 
AuthorDate: Sat Aug 10 15:00:54 2019 -0700

Adding config to let controller/broker/server set hostname (#4517)

* Adding config to let controller/broker/server set hostname

* address comments
---
 .../pinot/broker/broker/helix/HelixBrokerStarter.java|  5 -
 .../org/apache/pinot/common/utils/CommonConstants.java   |  1 +
 .../org/apache/pinot/controller/ControllerStarter.java   | 16 
 .../pinot/server/starter/helix/HelixServerStarter.java   |  6 +-
 4 files changed, 26 insertions(+), 2 deletions(-)

diff --git 
a/pinot-broker/src/main/java/org/apache/pinot/broker/broker/helix/HelixBrokerStarter.java
 
b/pinot-broker/src/main/java/org/apache/pinot/broker/broker/helix/HelixBrokerStarter.java
index 57beed3..0bb6d0a 100644
--- 
a/pinot-broker/src/main/java/org/apache/pinot/broker/broker/helix/HelixBrokerStarter.java
+++ 
b/pinot-broker/src/main/java/org/apache/pinot/broker/broker/helix/HelixBrokerStarter.java
@@ -52,6 +52,7 @@ import org.apache.pinot.common.config.TagNameUtils;
 import org.apache.pinot.common.metadata.ZKMetadataProvider;
 import org.apache.pinot.common.metrics.BrokerMeter;
 import org.apache.pinot.common.metrics.BrokerMetrics;
+import org.apache.pinot.common.utils.CommonConstants;
 import org.apache.pinot.common.utils.CommonConstants.Broker;
 import org.apache.pinot.common.utils.CommonConstants.Helix;
 import org.apache.pinot.common.utils.NetUtil;
@@ -107,7 +108,9 @@ public class HelixBrokerStarter {
 _zkServers = zkServer.replaceAll("\\s+", "");
 
 if (brokerHost == null) {
-  brokerHost = NetUtil.getHostAddress();
+  brokerHost =
+
_brokerConf.getBoolean(CommonConstants.Helix.SET_INSTANCE_ID_TO_HOSTNAME_KEY, 
false) ? NetUtil
+.getHostnameOrAddress() : NetUtil.getHostAddress();
 }
 _brokerId = _brokerConf.getString(Helix.Instance.INSTANCE_ID_KEY,
 Helix.PREFIX_OF_BROKER_INSTANCE + brokerHost + "_" + _brokerConf
diff --git 
a/pinot-common/src/main/java/org/apache/pinot/common/utils/CommonConstants.java 
b/pinot-common/src/main/java/org/apache/pinot/common/utils/CommonConstants.java
index 386621b..663557e 100644
--- 
a/pinot-common/src/main/java/org/apache/pinot/common/utils/CommonConstants.java
+++ 
b/pinot-common/src/main/java/org/apache/pinot/common/utils/CommonConstants.java
@@ -102,6 +102,7 @@ public class CommonConstants {
 return ServerType.REALTIME;
   }
 }
+public static final String SET_INSTANCE_ID_TO_HOSTNAME_KEY = 
"pinot.set.instance.id.to.hostname";
 
 public static final String KEY_OF_SERVER_NETTY_PORT = 
"pinot.server.netty.port";
 public static final int DEFAULT_SERVER_NETTY_PORT = 8098;
diff --git 
a/pinot-controller/src/main/java/org/apache/pinot/controller/ControllerStarter.java
 
b/pinot-controller/src/main/java/org/apache/pinot/controller/ControllerStarter.java
index 310748b..7c04510 100644
--- 
a/pinot-controller/src/main/java/org/apache/pinot/controller/ControllerStarter.java
+++ 
b/pinot-controller/src/main/java/org/apache/pinot/controller/ControllerStarter.java
@@ -52,6 +52,7 @@ import org.apache.pinot.common.metrics.MetricsHelper;
 import org.apache.pinot.common.metrics.ValidationMetrics;
 import org.apache.pinot.common.segment.fetcher.SegmentFetcherFactory;
 import org.apache.pinot.common.utils.CommonConstants;
+import org.apache.pinot.common.utils.NetUtil;
 import org.apache.pinot.common.utils.ServiceStatus;
 import org.apache.pinot.controller.api.ControllerAdminApiApplication;
 import org.apache.pinot.controller.api.access.AccessControlFactory;
@@ -124,6 +125,7 @@ public class ControllerStarter {
 
   public ControllerStarter(ControllerConf conf) {
 _config = conf;
+inferHostnameIfNeeded(_config);
 setupHelixSystemProperties();
 
 _controllerMode = conf.getControllerMode();
@@ -152,6 +154,20 @@ public class ControllerStarter {
 }
   }
 
+  private void inferHostnameIfNeeded(ControllerConf config) {
+if (config.getControllerHost() == null) {
+  if 
(config.getBoolean(CommonConstants.Helix.SET_INSTANCE_ID_TO_HOSTNAME_KEY, 
false)) {
+final String inferredHostname = NetUtil.getHostnameOrAddress();
+if (inferredHostname != null) {
+  config.setControllerHost(inferredHostname);
+} else {
+  throw new RuntimeException(
+  "Failed to infer controller hostname, please set controller 
instanceId explicitly in config file.");
+}
+  }
+}
+  }
+
   pri
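
The flag wired up above is read from each component's startup configuration: when it is set and no explicit host is configured, the starter resolves the machine's hostname (falling back to the address) instead of the raw IP. A minimal sketch of enabling it (the property name is taken from the CommonConstants diff; the config file name is illustrative):

  # pinot-controller.conf -- the same property applies to broker and server configs
  pinot.set.instance.id.to.hostname=true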

[incubator-pinot] branch prefer_host_name_over_ip deleted (was 6fbacb2)

2019-08-10 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch prefer_host_name_over_ip
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 was 6fbacb2  address comments

The revisions that were on this branch are still contained in
other references; therefore, this change does not discard any commits
from the repository.


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch fix_combine_node_issue created (now 867dd66)

2019-08-10 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch fix_combine_node_issue
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


  at 867dd66  Make MAX_PLAN_THREADS >= 1

This branch includes the following new commits:

 new 867dd66  Make MAX_PLAN_THREADS >= 1

The 1 revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.



-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] 01/01: Make MAX_PLAN_THREADS >= 1

2019-08-10 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch fix_combine_node_issue
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git

commit 867dd66facea5ec246438d6bdcac8f759dde9617
Author: Xiang Fu 
AuthorDate: Sat Aug 10 02:50:46 2019 -0700

Make MAX_PLAN_THREADS >= 1
---
 .../src/main/java/org/apache/pinot/core/plan/CombinePlanNode.java  | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git 
a/pinot-core/src/main/java/org/apache/pinot/core/plan/CombinePlanNode.java 
b/pinot-core/src/main/java/org/apache/pinot/core/plan/CombinePlanNode.java
index c0cce5a..3cf8da7 100644
--- a/pinot-core/src/main/java/org/apache/pinot/core/plan/CombinePlanNode.java
+++ b/pinot-core/src/main/java/org/apache/pinot/core/plan/CombinePlanNode.java
@@ -39,7 +39,8 @@ import org.slf4j.LoggerFactory;
 public class CombinePlanNode implements PlanNode {
   private static final Logger LOGGER = 
LoggerFactory.getLogger(CombinePlanNode.class);
 
-  private static final int MAX_PLAN_THREADS = Math.min(10, (int) 
(Runtime.getRuntime().availableProcessors() * .5));
+  private static final int MAX_PLAN_THREADS =
+  Math.max(1, Math.min(10, (int) 
(Runtime.getRuntime().availableProcessors() * .5)));
   private static final int MIN_TASKS_PER_THREAD = 10;
   private static final int TIME_OUT_IN_MILLISECONDS_FOR_PARALLEL_RUN = 10_000;
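
The added Math.max guards the degenerate small-machine case; a short worked example of the clamp (values illustrative):

  // availableProcessors() == 1:  (int) (1 * .5) == 0, Math.min(10, 0) == 0 -> old constant was 0 (no plan threads)
  //                              Math.max(1, 0) == 1                       -> new constant is at least 1
  // availableProcessors() == 32: (int) (32 * .5) == 16, Math.min(10, 16) == 10, Math.max(1, 10) == 10 (cap unchanged)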
 


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org



[incubator-pinot] branch prefer_host_name_over_ip updated (9223af0 -> 6fbacb2)

2019-08-10 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch prefer_host_name_over_ip
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


from 9223af0  Adding config to let controller/broker/server set hostname
 add 6fbacb2  address comments

No new revisions were added by this update.

Summary of changes:
 .../java/org/apache/pinot/broker/broker/helix/HelixBrokerStarter.java   | 2 +-
 .../src/main/java/org/apache/pinot/common/utils/CommonConstants.java| 2 +-
 .../src/main/java/org/apache/pinot/controller/ControllerStarter.java| 2 +-
 .../java/org/apache/pinot/server/starter/helix/HelixServerStarter.java  | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org


