[GitHub] [incubator-pinot] apoorvmittal10 opened a new issue #6013: Detection runs multiple times at same time

2020-09-13 Thread GitBox


apoorvmittal10 opened a new issue #6013:
URL: https://github.com/apache/incubator-pinot/issues/6013


   The detection job gets triggered multiple times at the same time because multiple 
detection tasks get scheduled. The problem is with detection task creation, which 
doesn't account for concurrent insertions: the scheduler can invoke the same 
detection job creation concurrently, depending on how many scheduler threads are free.
   
   Even after taking a lock on the scheduler job (`@DisallowConcurrentExecution`), 
the problem will persist when multiple scheduler nodes exist in the cluster. 
The `DetectionPipelineJob` `taskDao` insertion needs some form of distributed-lock 
semantics.
   
   ```
   INFO  [2020-09-13 18:18:27,548] org.quartz.core.QuartzScheduler: Scheduler meta-data: Quartz Scheduler (v2.2.1) 'DefaultQuartzScheduler' with instanceId 'NON_CLUSTERED'
 Scheduler class: 'org.quartz.core.QuartzScheduler' - running locally.
 NOT STARTED.
 Currently in standby mode.
 Number of jobs executed: 0
 Using thread pool 'org.quartz.simpl.SimpleThreadPool' - with 10 threads.
 Using job-store 'org.quartz.simpl.RAMJobStore' - which does not support persistence. and is not clustered.
   .
   .
   .
   .
   INFO  [2020-09-13 17:30:07,425] org.apache.pinot.thirdeye.detection.alert.DetectionAlertJob: Created DETECTION_ALERT task 299 with settings TaskDTO[taskType=DETECTION_ALERT,workerId=,jobId=,jobName=DETECTION_ALERT_24,status=WAITING,startTime=0,endTime=0,taskInfo={"detectionAlertConfigId":24},message=,lastModified=,id=299,version=0,updateTime=,createdBy=,updatedBy=]
   INFO  [2020-09-13 17:30:07,928] org.apache.pinot.thirdeye.detection.alert.DetectionAlertJob: Created DETECTION_ALERT task 300 with settings TaskDTO[taskType=DETECTION_ALERT,workerId=,jobId=,jobName=DETECTION_ALERT_24,status=WAITING,startTime=0,endTime=0,taskInfo={"detectionAlertConfigId":24},message=,lastModified=,id=300,version=0,updateTime=,createdBy=,updatedBy=]
   INFO  [2020-09-13 17:30:08,647] org.apache.pinot.thirdeye.detection.DetectionPipelineJob: Created DETECTION task TaskDTO[taskType=DETECTION,workerId=,jobId=,jobName=DETECTION_23,status=WAITING,startTime=0,endTime=0,taskInfo={"configId":23,"start":1600018092445,"end":1600018201552,"online":false,"detectionConfigBean":null},message=,lastModified=,id=301,version=0,updateTime=,createdBy=,updatedBy=] with taskId 301
   INFO  [2020-09-13 17:30:08,813] org.apache.pinot.thirdeye.detection.DetectionPipelineJob: Created DETECTION task TaskDTO[taskType=DETECTION,workerId=,jobId=,jobName=DETECTION_23,status=WAITING,startTime=0,endTime=0,taskInfo={"configId":23,"start":1600018092445,"end":1600018201980,"online":false,"detectionConfigBean":null},message=,lastModified=,id=302,version=0,updateTime=,createdBy=,updatedBy=] with taskId 302
   INFO  [2020-09-13 17:30:08,969] org.apache.pinot.thirdeye.detection.alert.DetectionAlertJob: Created DETECTION_ALERT task 303 with settings TaskDTO[taskType=DETECTION_ALERT,workerId=,jobId=,jobName=DETECTION_ALERT_24,status=WAITING,startTime=0,endTime=0,taskInfo={"detectionAlertConfigId":24},message=,lastModified=,id=303,version=0,updateTime=,createdBy=,updatedBy=]
   INFO  [2020-09-13 17:30:10,384] org.apache.pinot.thirdeye.detection.DetectionPipelineJob: Created DETECTION task TaskDTO[taskType=DETECTION,workerId=,jobId=,jobName=DETECTION_23,status=WAITING,startTime=0,endTime=0,taskInfo={"configId":23,"start":1600018092445,"end":1600018203484,"online":false,"detectionConfigBean":null},message=,lastModified=,id=304,version=0,updateTime=,createdBy=,updatedBy=] with taskId 304
   INFO  [2020-09-13 17:30:10,844] org.apache.pinot.thirdeye.detection.DetectionPipelineJob: Skip scheduling detection task for DETECTION_23 with start time 1600018092445. Task is already in the queue.
   INFO  [2020-09-13 17:30:10,942] org.apache.pinot.thirdeye.detection.alert.DetectionAlertJob: Created DETECTION_ALERT task 305 with settings TaskDTO[taskType=DETECTION_ALERT,workerId=,jobId=,jobName=DETECTION_ALERT_24,status=WAITING,startTime=0,endTime=0,taskInfo={"detectionAlertConfigId":24},message=,lastModified=,id=305,version=0,updateTime=,createdBy=,updatedBy=]
   INFO  [2020-09-13 17:30:11,083] org.apache.pinot.thirdeye.detection.alert.DetectionAlertJob: Created DETECTION_ALERT task 307 with settings TaskDTO[taskType=DETECTION_ALERT,workerId=,jobId=,jobName=DETECTION_ALERT_24,status=WAITING,startTime=0,endTime=0,taskInfo={"detectionAlertConfigId":24},message=,lastModified=,id=307,version=0,updateTime=,createdBy=,updatedBy=]
   INFO  [2020-09-13 17:30:11,092] org.apache.pinot.thirdeye.detection.DetectionPipelineJob: Created DETECTION task TaskDTO[taskType=DETECTION,workerId=,jobId=,jobName=DETECTION_23,status=WAITING,startTime=0,endTime=0,taskInfo={"configId":23,"start":1600018092445,"end":1600018204022,"online":false,"detectionConfigBean":null},message=,lastModified=,id=306,version=0,updateTime=,createdBy=,updatedBy=] with taskId 306
   ```
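
   A minimal sketch of the two mitigations discussed above, using hypothetical stand-ins (`GuardedDetectionPipelineJob`, `TaskStore`) rather than ThirdEye's actual classes: `@DisallowConcurrentExecution` stops concurrent firings of the same job on a single Quartz scheduler node, and the check-then-insert around task creation is guarded by a lock, which would have to become a distributed lock (or a unique constraint on the task table) once multiple scheduler nodes exist.

   ```java
// Sketch only: GuardedDetectionPipelineJob and TaskStore are hypothetical
// stand-ins for DetectionPipelineJob and taskDao, used to illustrate the fix.
import org.quartz.DisallowConcurrentExecution;
import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

// Quartz will not fire two instances of the same JobKey concurrently on this
// scheduler node while this annotation is present.
@DisallowConcurrentExecution
public class GuardedDetectionPipelineJob implements Job {

  /** Hypothetical DAO abstraction standing in for the real taskDao. */
  public interface TaskStore {
    boolean waitingTaskExists(String jobName, long windowStart);
    long createTask(String jobName, long windowStart, long windowEnd);
  }

  // In real Quartz code the job is created reflectively and the store would be
  // injected via the scheduler context; kept static here to keep the sketch short.
  static TaskStore taskStore;

  @Override
  public void execute(JobExecutionContext context) throws JobExecutionException {
    String jobName = context.getJobDetail().getKey().getName();
    long windowStart = context.getScheduledFireTime().getTime();
    long windowEnd = System.currentTimeMillis();

    // Close the check-then-insert race. synchronized is enough on a single node;
    // with multiple scheduler nodes this must become a distributed lock or a
    // unique (jobName, windowStart) constraint on the task table.
    synchronized (GuardedDetectionPipelineJob.class) {
      if (taskStore.waitingTaskExists(jobName, windowStart)) {
        return; // a task for this window is already queued, skip
      }
      taskStore.createTask(jobName, windowStart, windowEnd);
    }
  }
}
   ```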



[incubator-pinot] branch fixing_superset_docker updated (40fe40b -> a1ed0ef)

2020-09-13 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a change to branch fixing_superset_docker
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


 discard 40fe40b  Fix superset docker image build script
 add a1ed0ef  Fix superset docker image build script

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (40fe40b)
\
 N -- N -- N   refs/heads/fixing_superset_docker (a1ed0ef)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 docker/images/pinot-superset/.dockerignore   |  1 +
 docker/images/pinot-superset/Dockerfile  |  3 ++-
 docker/images/pinot-superset/requirements-db.txt | 20 
 3 files changed, 23 insertions(+), 1 deletion(-)
 create mode 100644 docker/images/pinot-superset/requirements-db.txt





[incubator-pinot] branch master updated: Update pinot helm to adding custom configs and update the jvm default configs (#6011)

2020-09-13 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git


The following commit(s) were added to refs/heads/master by this push:
 new 5f309cf  Update pinot helm to adding custom configs and update the jvm 
default configs (#6011)
5f309cf is described below

commit 5f309cff34da802616d92ed361284d95b35ed9b9
Author: Xiang Fu 
AuthorDate: Sun Sep 13 15:53:28 2020 -0700

Update pinot helm to adding custom configs and update the jvm default 
configs (#6011)
---
 kubernetes/helm/pinot/README.md |  9 ++---
 .../helm/pinot/templates/broker/configmap.yaml  |  2 +-
 .../helm/pinot/templates/controller/configmap.yaml  |  2 +-
 .../helm/pinot/templates/server/configmap.yaml  |  2 +-
 kubernetes/helm/pinot/values.yaml   | 21 ++---
 5 files changed, 27 insertions(+), 9 deletions(-)

diff --git a/kubernetes/helm/pinot/README.md b/kubernetes/helm/pinot/README.md
index 12e7ab7..24ea06a 100644
--- a/kubernetes/helm/pinot/README.md
+++ b/kubernetes/helm/pinot/README.md
@@ -307,7 +307,7 @@ following configurable parameters:
 | `controller.persistence.size`          | Size of data volume                                                             | `1G`                                          |
 | `controller.persistence.mountPath`     | Mount path of controller data volume                                            | `/var/pinot/controller/data`                  |
 | `controller.persistence.storageClass`  | Storage class of backing PVC                                                    | `""`                                          |
-| `controller.jvmOpts`                   | Pinot Controller JVM Options                                                    | `-Xms256M -Xmx1G`                             |
+| `controller.jvmOpts`                   | Pinot Controller JVM Options                                                    | `-Xms256M -Xmx1G -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime -Xloggc:/opt/pinot/gc-pinot-controller.log` |
 | `controller.log4j2ConfFile`            | Pinot Controller log4j2 configuration file                                      | `/opt/pinot/conf/pinot-controller-log4j2.xml` |
 | `controller.pluginsDir`                | Pinot Controller plugins directory                                              | `/opt/pinot/plugins`                          |
 | `controller.service.port`              | Service Port                                                                    | `9000`                                        |
@@ -320,11 +320,12 @@ following configurable parameters:
 | `controller.tolerations`               | List of node tolerations for the pods.                                          | `[]`                                          |
 | `controller.podAnnotations`            | Annotations to be added to controller pod                                       | `{}`                                          |
 | `controller.updateStrategy.type`       | StatefulSet update strategy to use.                                             | `RollingUpdate`                               |
+| `controller.extra.configs`             | Extra configs append to 'pinot-controller.conf' file to start Pinot Controller |

[GitHub] [incubator-pinot] fx19880617 merged pull request #6011: Update pinot helm to allow custom configs

2020-09-13 Thread GitBox


fx19880617 merged pull request #6011:
URL: https://github.com/apache/incubator-pinot/pull/6011


   






[GitHub] [incubator-pinot] Jackie-Jiang commented on a change in pull request #6009: Adjust schema validation logic in AvroIngestionSchemaValidator

2020-09-13 Thread GitBox


Jackie-Jiang commented on a change in pull request #6009:
URL: https://github.com/apache/incubator-pinot/pull/6009#discussion_r487583315



##
File path: 
pinot-plugins/pinot-input-format/pinot-avro-base/src/main/java/org/apache/pinot/plugin/inputformat/avro/AvroIngestionSchemaValidator.java
##
@@ -111,36 +112,56 @@ private void validateSchemas() {
   }
 }
 if (nonNullSchema != null) {
+  avroColumnSchema = nonNullSchema;
   avroColumnType = nonNullSchema.getType();
 }
   }
 
-  if (!fieldSpec.getDataType().name().equalsIgnoreCase(avroColumnType.toString())) {
-_dataTypeMismatch.addMismatchReason(String
-.format("The Pinot column: (%s: %s) doesn't match with the column (%s: %s) in input %s schema.", columnName,
-fieldSpec.getDataType().name(), avroColumnSchema.getName(), avroColumnType.toString(),
-getInputSchemaType()));
-  }
-
   if (fieldSpec.isSingleValueField()) {
+// check data type mismatch
+if (!fieldSpec.getDataType().name().equalsIgnoreCase(avroColumnType.toString())) {

Review comment:
   (nit)
   ```suggestion
   if (!fieldSpec.getDataType().name().equals(avroColumnType.name())) {
   ```

##
File path: 
pinot-plugins/pinot-input-format/pinot-avro-base/src/main/java/org/apache/pinot/plugin/inputformat/avro/AvroIngestionSchemaValidator.java
##
@@ -111,36 +112,56 @@ private void validateSchemas() {
   }
 }
 if (nonNullSchema != null) {
+  avroColumnSchema = nonNullSchema;
   avroColumnType = nonNullSchema.getType();
 }
   }
 
-  if (!fieldSpec.getDataType().name().equalsIgnoreCase(avroColumnType.toString())) {
-_dataTypeMismatch.addMismatchReason(String
-.format("The Pinot column: (%s: %s) doesn't match with the column (%s: %s) in input %s schema.", columnName,
-fieldSpec.getDataType().name(), avroColumnSchema.getName(), avroColumnType.toString(),
-getInputSchemaType()));
-  }
-
   if (fieldSpec.isSingleValueField()) {
+// check data type mismatch
+if (!fieldSpec.getDataType().name().equalsIgnoreCase(avroColumnType.toString())) {
+  getDataTypeMismatchResult().addMismatchReason(String
+  .format("The Pinot column: (%s: %s) doesn't match with the column (%s: %s) in input %s schema.", columnName,
+  fieldSpec.getDataType().name(), avroColumnName, avroColumnType.toString(),
+  getInputSchemaType()));
+}
+// check single-value multi-value mismatch
 if (avroColumnType.ordinal() < org.apache.avro.Schema.Type.STRING.ordinal()) {

Review comment:
   You might want to check the single-value/multi-value mismatch first, and only 
check the data type when the shapes match; otherwise you will always get a 
data-type mismatch whenever the single-value/multi-value check fails.
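
   A standalone sketch of that ordering, with simplified stand-in names rather than the actual validator code: report the shape mismatch and skip the type comparison, otherwise compare the types.

   ```java
// Sketch with simplified stand-in names (not the Pinot validator itself):
// validate the SV/MV shape first, then the data type only when shapes agree.
import org.apache.avro.Schema;

import java.util.ArrayList;
import java.util.List;

final class ShapeFirstValidation {
  static List<String> validate(boolean pinotSingleValue, String pinotDataType, Schema.Type avroType) {
    List<String> mismatches = new ArrayList<>();
    // In Avro, the complex types (RECORD, ENUM, ARRAY, MAP, UNION, FIXED) are
    // declared before STRING, which is what the ordinal comparison above relies on.
    boolean avroIsComplex = avroType.ordinal() < Schema.Type.STRING.ordinal();
    if (pinotSingleValue == avroIsComplex) {
      // Shape mismatch: report it and skip the data-type check, otherwise a
      // redundant data-type mismatch would always be reported as well.
      mismatches.add("single-value/multi-value mismatch: Pinot singleValue=" + pinotSingleValue
          + " vs Avro type " + avroType);
      return mismatches;
    }
    if (pinotSingleValue && !pinotDataType.equalsIgnoreCase(avroType.name())) {
      // (the multi-value element-type check is omitted in this sketch)
      mismatches.add("data type mismatch: " + pinotDataType + " vs " + avroType);
    }
    return mismatches;
  }
}
   ```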

##
File path: 
pinot-plugins/pinot-input-format/pinot-avro-base/src/main/java/org/apache/pinot/plugin/inputformat/avro/AvroIngestionSchemaValidator.java
##
@@ -111,36 +112,56 @@ private void validateSchemas() {
   }
 }
 if (nonNullSchema != null) {
+  avroColumnSchema = nonNullSchema;
   avroColumnType = nonNullSchema.getType();
 }
   }
 
-  if (!fieldSpec.getDataType().name().equalsIgnoreCase(avroColumnType.toString())) {
-_dataTypeMismatch.addMismatchReason(String
-.format("The Pinot column: (%s: %s) doesn't match with the column (%s: %s) in input %s schema.", columnName,
-fieldSpec.getDataType().name(), avroColumnSchema.getName(), avroColumnType.toString(),
-getInputSchemaType()));
-  }
-
   if (fieldSpec.isSingleValueField()) {
+// check data type mismatch
+if (!fieldSpec.getDataType().name().equalsIgnoreCase(avroColumnType.toString())) {
+  getDataTypeMismatchResult().addMismatchReason(String
+  .format("The Pinot column: (%s: %s) doesn't match with the column (%s: %s) in input %s schema.", columnName,
+  fieldSpec.getDataType().name(), avroColumnName, avroColumnType.toString(),

Review comment:
   ```suggestion
  fieldSpec.getDataType().name(), avroColumnName, avroColumnType.name(),
   ```

##
File path: 
pinot-plugins/pinot-input-format/pinot-avro-base/src/main/java/org/apache/pinot/plugin/inputformat/avro/AvroIngestionSchemaValidator.java
##
@@ -111,36 +112,56 @@ private void validateSchemas() {
   }
 }
 if (nonNullSchema != null) {
+  avroColumnSchema = nonNullSchema;
   avroColumnType = nonNullSchema.getType();
 }
   }
 
-  if (!fieldSpec.getDataType().name().equalsIgnoreCase(avroColumnType.toString())) {
-_dataTypeMismatch.addMismatchReason(String
-

[GitHub] [incubator-pinot] akshayrai merged pull request #6001: [TE] entity anomaly logging for ad-hoc debugging

2020-09-13 Thread GitBox


akshayrai merged pull request #6001:
URL: https://github.com/apache/incubator-pinot/pull/6001


   






[incubator-pinot] branch master updated: [TE] entity anomaly logging for ad-hoc debugging (#6001)

2020-09-13 Thread akshayrai09
This is an automated email from the ASF dual-hosted git repository.

akshayrai09 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git


The following commit(s) were added to refs/heads/master by this push:
 new 143f398  [TE] entity anomaly logging for ad-hoc debugging (#6001)
143f398 is described below

commit 143f398fc1a3e23df991c43820049ad3cd1dc730
Author: Akshay Rai 
AuthorDate: Sun Sep 13 14:55:08 2020 -0700

[TE] entity anomaly logging for ad-hoc debugging (#6001)
---
 .../tools/RunAdhocDatabaseQueriesTool.java | 87 --
 1 file changed, 80 insertions(+), 7 deletions(-)

diff --git a/thirdeye/thirdeye-pinot/src/test/java/org/apache/pinot/thirdeye/tools/RunAdhocDatabaseQueriesTool.java b/thirdeye/thirdeye-pinot/src/test/java/org/apache/pinot/thirdeye/tools/RunAdhocDatabaseQueriesTool.java
index c2da5df..f8d576e 100644
--- a/thirdeye/thirdeye-pinot/src/test/java/org/apache/pinot/thirdeye/tools/RunAdhocDatabaseQueriesTool.java
+++ b/thirdeye/thirdeye-pinot/src/test/java/org/apache/pinot/thirdeye/tools/RunAdhocDatabaseQueriesTool.java
@@ -18,6 +18,7 @@ package org.apache.pinot.thirdeye.tools;
 
 import java.util.HashMap;
 import java.util.concurrent.TimeUnit;
+import com.google.common.collect.Multimap;
 import org.apache.pinot.thirdeye.anomaly.task.TaskConstants;
 import org.apache.pinot.thirdeye.constant.AnomalyResultSource;
 import org.apache.pinot.thirdeye.datalayer.bao.AlertConfigManager;
@@ -80,9 +81,22 @@ import java.util.List;
 import java.util.Map;
 
 import java.util.Set;
-import org.apache.commons.collections4.CollectionUtils;
+import org.apache.pinot.thirdeye.datasource.DAORegistry;
+import org.apache.pinot.thirdeye.datasource.ThirdEyeCacheRegistry;
+import org.apache.pinot.thirdeye.datasource.loader.AggregationLoader;
+import org.apache.pinot.thirdeye.datasource.loader.DefaultAggregationLoader;
+import org.apache.pinot.thirdeye.datasource.loader.DefaultTimeSeriesLoader;
+import org.apache.pinot.thirdeye.datasource.loader.TimeSeriesLoader;
+import org.apache.pinot.thirdeye.detection.DefaultDataProvider;
+import org.apache.pinot.thirdeye.detection.DetectionPipelineLoader;
 import org.apache.pinot.thirdeye.detection.alert.DetectionAlertFilterRecipients;
-import org.jfree.util.Log;
+import org.apache.pinot.thirdeye.detection.cache.builder.AnomaliesCacheBuilder;
+import org.apache.pinot.thirdeye.detection.cache.builder.TimeSeriesCacheBuilder;
+import org.apache.pinot.thirdeye.detection.spi.model.AnomalySlice;
+import org.joda.time.DateTime;
+import org.joda.time.DateTimeZone;
+import org.joda.time.format.DateTimeFormat;
+import org.joda.time.format.DateTimeFormatter;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -111,6 +125,11 @@ public class RunAdhocDatabaseQueriesTool {
   private ClassificationConfigManager classificationConfigDAO;
   private ApplicationManager applicationDAO;
 
+  private static final DateTimeZone TIMEZONE = DateTimeZone.forID("America/Los_Angeles");
+  private static final DateTimeFormatter DATE_FORMAT = DateTimeFormat.forPattern("-MM-dd");
+  private static final String ENTITY_STATS_TEMPLATE = "#children = {}, properties = {}";
+  private static final String ENTITY_TIME_TEMPLATE = "[create {}, start {}, end {}]";
+
   public RunAdhocDatabaseQueriesTool(File persistenceFile)
   throws Exception {
 init(persistenceFile);
@@ -692,8 +711,7 @@ public class RunAdhocDatabaseQueriesTool {
 
   // find all recent anomalies (last 14 days) with end time <= watermark;
   long lookback = TimeUnit.DAYS.toMillis(14);
-  Predicate predicate = Predicate.AND(
-  Predicate.LE("createTime", clockEntry.getValue()),
+  Predicate predicate = Predicate.AND(Predicate.LE("createTime", clockEntry.getValue()),
   Predicate.GT("createTime", clockEntry.getValue() - lookback),
   Predicate.EQ("detectionConfigId", detectionConfigDTO.getId()));
   List anomalies = mergedResultDAO.findByPredicate(predicate);
@@ -723,9 +741,65 @@ public class RunAdhocDatabaseQueriesTool {
 }
   }
 
-  public static void main(String[] args) throws Exception {
+  private void printEntityAnomalyDetails(MergedAnomalyResultDTO anomaly, String indent, int index) {
+LOG.info("");
+LOG.info("Exploring Entity Anomaly {} with id {}", index, anomaly.getId());
+LOG.info(ENTITY_STATS_TEMPLATE, anomaly.getChildren().size(), anomaly.getProperties());
+LOG.info(ENTITY_TIME_TEMPLATE,
+new DateTime(anomaly.getCreatedTime(), TIMEZONE),
+DATE_FORMAT.print(new DateTime(anomaly.getStartTime(), TIMEZONE)),
+DATE_FORMAT.print(new DateTime(anomaly.getEndTime(), TIMEZONE)));
+  }
 
-File persistenceFile = new File("/Users/akrai/persistence-linux.yml");
+  /**
+   * Visualizes the entity anomalies by printing them
+   *
+   * Eg: dq.printEntityAnomalyTrees(158750221, 0, System.currentTimeMillis())
+   *

[incubator-pinot] branch master updated: Zookeeper put api (#5949)

2020-09-13 Thread kishoreg
This is an automated email from the ASF dual-hosted git repository.

kishoreg pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git


The following commit(s) were added to refs/heads/master by this push:
 new 83598ce  Zookeeper put api (#5949)
83598ce is described below

commit 83598ceca137da00087f728338d1de5fc96ced9d
Author: Kishore Gopalakrishna 
AuthorDate: Sun Sep 13 11:34:54 2020 -0700

Zookeeper put api (#5949)

* Adding api to edit ZK path

* Adding delete api

* Addressing comments
---
 .../api/resources/ZookeeperResource.java   | 62 ++
 .../helix/core/PinotHelixResourceManager.java  | 17 --
 2 files changed, 75 insertions(+), 4 deletions(-)

diff --git a/pinot-controller/src/main/java/org/apache/pinot/controller/api/resources/ZookeeperResource.java b/pinot-controller/src/main/java/org/apache/pinot/controller/api/resources/ZookeeperResource.java
index 3093052..d367f2c 100644
--- a/pinot-controller/src/main/java/org/apache/pinot/controller/api/resources/ZookeeperResource.java
+++ b/pinot-controller/src/main/java/org/apache/pinot/controller/api/resources/ZookeeperResource.java
@@ -19,6 +19,7 @@
 package org.apache.pinot.controller.api.resources;
 
 import com.fasterxml.jackson.core.JsonProcessingException;
+import com.google.common.base.Charsets;
 import io.swagger.annotations.Api;
 import io.swagger.annotations.ApiOperation;
 import io.swagger.annotations.ApiParam;
@@ -28,13 +29,17 @@ import java.nio.charset.StandardCharsets;
 import java.util.List;
 import java.util.Map;
 import javax.inject.Inject;
+import javax.ws.rs.DELETE;
 import javax.ws.rs.DefaultValue;
 import javax.ws.rs.GET;
+import javax.ws.rs.POST;
+import javax.ws.rs.PUT;
 import javax.ws.rs.Path;
 import javax.ws.rs.Produces;
 import javax.ws.rs.QueryParam;
 import javax.ws.rs.core.MediaType;
 import javax.ws.rs.core.Response;
+import org.apache.helix.AccessOption;
 import org.apache.helix.ZNRecord;
 import org.apache.helix.manager.zk.ZNRecordSerializer;
 import org.apache.pinot.controller.helix.core.PinotHelixResourceManager;
@@ -75,6 +80,63 @@ public class ZookeeperResource {
 return null;
   }
 
+  @DELETE
+  @Path("/zk/delete")
+  @Produces(MediaType.APPLICATION_JSON)
+  @ApiOperation(value = "Delete the znode at this path")
+  @ApiResponses(value = { //
+  @ApiResponse(code = 200, message = "Success"), //
+  @ApiResponse(code = 404, message = "ZK Path not found"), //
+  @ApiResponse(code = 204, message = "No Content"), //
+  @ApiResponse(code = 500, message = "Internal server error")})
+  public SuccessResponse delete(
+  @ApiParam(value = "Zookeeper Path, must start with /", required = true, defaultValue = "/") @QueryParam("path") @DefaultValue("") String path) {
+
+path = validateAndNormalizeZKPath(path);
+
+boolean success = pinotHelixResourceManager.deleteZKPath(path);
+if (success) {
+  return new SuccessResponse("Successfully deleted path: " + path);
+} else {
+  throw new ControllerApplicationException(LOGGER, "Failed to delete path: " + path,
+  Response.Status.INTERNAL_SERVER_ERROR);
+}
+  }
+
+  @PUT
+  @Path("/zk/put")
+  @Produces(MediaType.APPLICATION_JSON)
+  @ApiOperation(value = "Update the content of the node")
+  @ApiResponses(value = { //
+  @ApiResponse(code = 200, message = "Success"), //
+  @ApiResponse(code = 404, message = "ZK Path not found"), //
+  @ApiResponse(code = 204, message = "No Content"), //
+  @ApiResponse(code = 500, message = "Internal server error")})
+  public SuccessResponse putData(
+  @ApiParam(value = "Zookeeper Path, must start with /", required = true, defaultValue = "/") @QueryParam("path") @DefaultValue("") String path,
+  @ApiParam(value = "Content", required = true) @QueryParam("data") @DefaultValue("") String content,
+  @ApiParam(value = "expectedVersion", required = true, defaultValue = "-1") @QueryParam("expectedVersion") @DefaultValue("-1") String expectedVersion,
+  @ApiParam(value = "accessOption", required = true, defaultValue = "1") @QueryParam("accessOption") @DefaultValue("1") String accessOption) {
+path = validateAndNormalizeZKPath(path);
+ZNRecord record = null;
+if (content != null) {
+  record = (ZNRecord) _znRecordSerializer.deserialize(content.getBytes(Charsets.UTF_8));
+}
+try {
+  boolean result = pinotHelixResourceManager
+  .setZKData(path, record, Integer.parseInt(expectedVersion), Integer.parseInt(accessOption));
+  if (result) {
+return new SuccessResponse("Successfully Updated path: " + path);
+  } else {
+throw new ControllerApplicationException(LOGGER, "Failed to update path: " + path,
+Response.Status.INTERNAL_SERVER_ERROR);
+  }
+} catch (Exception e) {
+  throw new ControllerApplicationException(LOGGER, "Failed to update path: " + path,
+  
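
For reference, a hedged usage sketch of the new endpoint (the controller address, znode path, and payload below are illustrative assumptions, not part of this commit, and the resource is assumed to be mounted at the controller root on the default port 9000): it sends a PUT to `/zk/put` with the path and the serialized ZNRecord passed as query parameters, as the resource above expects.

```java
// Hedged usage sketch; host/port, znode path and payload are illustrative only.
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class ZkPutExample {
  public static void main(String[] args) throws Exception {
    String path = URLEncoder.encode("/PinotCluster/CONFIGS/CLUSTER/example", StandardCharsets.UTF_8);
    // A ZNRecord serialized as JSON; the exact payload depends on the znode being written.
    String data = URLEncoder.encode(
        "{\"id\":\"example\",\"simpleFields\":{\"key\":\"value\"},\"mapFields\":{},\"listFields\":{}}",
        StandardCharsets.UTF_8);
    URI uri = URI.create("http://localhost:9000/zk/put?path=" + path + "&data=" + data
        + "&expectedVersion=-1&accessOption=1");
    HttpResponse<String> response = HttpClient.newHttpClient().send(
        HttpRequest.newBuilder(uri).PUT(HttpRequest.BodyPublishers.noBody()).build(),
        HttpResponse.BodyHandlers.ofString());
    System.out.println(response.statusCode() + " " + response.body());
  }
}
```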

[GitHub] [incubator-pinot] kishoreg merged pull request #5949: Zookeeper put api

2020-09-13 Thread GitBox


kishoreg merged pull request #5949:
URL: https://github.com/apache/incubator-pinot/pull/5949


   






[GitHub] [incubator-pinot] mayankshriv commented on pull request #5949: Zookeeper put api

2020-09-13 Thread GitBox


mayankshriv commented on pull request #5949:
URL: https://github.com/apache/incubator-pinot/pull/5949#issuecomment-691700375


   @sajjad-moradi You may want to take these APIs into account when adding ACL checks.






[GitHub] [incubator-pinot] kishoreg commented on a change in pull request #5949: Zookeeper put api

2020-09-13 Thread GitBox


kishoreg commented on a change in pull request #5949:
URL: https://github.com/apache/incubator-pinot/pull/5949#discussion_r487555430



##
File path: 
pinot-controller/src/main/java/org/apache/pinot/controller/api/resources/ZookeeperResource.java
##
@@ -75,6 +80,63 @@ public String getData(
 return null;
   }
 
+  @DELETE
+  @Path("/zk/delete")
+  @Produces(MediaType.TEXT_PLAIN)
+  @ApiOperation(value = "Get content of the znode")
+  @ApiResponses(value = { //
+  @ApiResponse(code = 200, message = "Success"), //
+  @ApiResponse(code = 404, message = "ZK Path not found"), //
+  @ApiResponse(code = 204, message = "No Content"), //
+  @ApiResponse(code = 500, message = "Internal server error")})
+  public SuccessResponse delete(
+  @ApiParam(value = "Zookeeper Path, must start with /", required = true, 
defaultValue = "/") @QueryParam("path") @DefaultValue("") String path) {
+
+path = validateAndNormalizeZKPath(path);
+
+boolean success = pinotHelixResourceManager.deleteZKPath(path);
+if(success) {
+  return new SuccessResponse("Successfully deleted path: " + path);
+} else {
+  throw new ControllerApplicationException(LOGGER, "Failed to delete path: 
" + path,
+  Response.Status.INTERNAL_SERVER_ERROR);
+}
+  }
+
+  @PUT
+  @Path("/zk/put")
+  @Produces(MediaType.TEXT_PLAIN)
+  @ApiOperation(value = "Get content of the znode")
+  @ApiResponses(value = { //
+  @ApiResponse(code = 200, message = "Success"), //
+  @ApiResponse(code = 404, message = "ZK Path not found"), //
+  @ApiResponse(code = 204, message = "No Content"), //
+  @ApiResponse(code = 500, message = "Internal server error")})
+  public SuccessResponse putData(
+  @ApiParam(value = "Zookeeper Path, must start with /", required = true, 
defaultValue = "/") @QueryParam("path") @DefaultValue("") String path,
+  @ApiParam(value = "Content", required = true) @QueryParam("data") 
@DefaultValue("") String content,
+  @ApiParam(value = "expectedVersion", required = true, defaultValue = 
"-1") @QueryParam("expectedVersion") @DefaultValue("-1") String expectedVersion,
+  @ApiParam(value = "accessOption", required = true, defaultValue = "1") 
@QueryParam("accessOption") @DefaultValue("1") String accessOption) {
+path = validateAndNormalizeZKPath(path);
+ZNRecord record = null;
+if (content != null) {
+  record = (ZNRecord) 
_znRecordSerializer.deserialize(content.getBytes(Charsets.UTF_8));
+}
+try {
+  boolean result = pinotHelixResourceManager

Review comment:
   Yes, ZK does allow that. However, the UI will only allow updating the leaf node.








[GitHub] [incubator-pinot] kishoreg commented on a change in pull request #5949: Zookeeper put api

2020-09-13 Thread GitBox


kishoreg commented on a change in pull request #5949:
URL: https://github.com/apache/incubator-pinot/pull/5949#discussion_r487555268



##
File path: 
pinot-controller/src/main/java/org/apache/pinot/controller/api/resources/ZookeeperResource.java
##
@@ -75,6 +80,63 @@ public String getData(
 return null;
   }
 
+  @DELETE
+  @Path("/zk/delete")
+  @Produces(MediaType.TEXT_PLAIN)
+  @ApiOperation(value = "Get content of the znode")

Review comment:
   fixed








[GitHub] [incubator-pinot] kishoreg commented on a change in pull request #5949: Zookeeper put api

2020-09-13 Thread GitBox


kishoreg commented on a change in pull request #5949:
URL: https://github.com/apache/incubator-pinot/pull/5949#discussion_r487555289



##
File path: 
pinot-controller/src/main/java/org/apache/pinot/controller/api/resources/ZookeeperResource.java
##
@@ -75,6 +80,63 @@ public String getData(
 return null;
   }
 
+  @DELETE
+  @Path("/zk/delete")
+  @Produces(MediaType.TEXT_PLAIN)
+  @ApiOperation(value = "Get content of the znode")
+  @ApiResponses(value = { //

Review comment:
   just formatting








[incubator-pinot] branch zookeeper-put-api updated (d3e86ad -> 3c2c88a)

2020-09-13 Thread kishoreg
This is an automated email from the ASF dual-hosted git repository.

kishoreg pushed a change to branch zookeeper-put-api
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git.


from d3e86ad  Adding delete api
 add 3c2c88a  Addressing comments

No new revisions were added by this update.

Summary of changes:
 .../pinot/controller/api/resources/ZookeeperResource.java  | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)





[GitHub] [incubator-pinot] mayankshriv commented on a change in pull request #5949: Zookeeper put api

2020-09-13 Thread GitBox


mayankshriv commented on a change in pull request #5949:
URL: https://github.com/apache/incubator-pinot/pull/5949#discussion_r487550170



##
File path: 
pinot-controller/src/main/java/org/apache/pinot/controller/api/resources/ZookeeperResource.java
##
@@ -75,6 +80,63 @@ public String getData(
 return null;
   }
 
+  @DELETE
+  @Path("/zk/delete")
+  @Produces(MediaType.TEXT_PLAIN)
+  @ApiOperation(value = "Get content of the znode")
+  @ApiResponses(value = { //

Review comment:
   Are the trailing `//` needed for formatting, or some other purpose?

##
File path: 
pinot-controller/src/main/java/org/apache/pinot/controller/api/resources/ZookeeperResource.java
##
@@ -75,6 +80,63 @@ public String getData(
 return null;
   }
 
+  @DELETE
+  @Path("/zk/delete")
+  @Produces(MediaType.TEXT_PLAIN)
+  @ApiOperation(value = "Get content of the znode")
+  @ApiResponses(value = { //
+  @ApiResponse(code = 200, message = "Success"), //
+  @ApiResponse(code = 404, message = "ZK Path not found"), //
+  @ApiResponse(code = 204, message = "No Content"), //
+  @ApiResponse(code = 500, message = "Internal server error")})
+  public SuccessResponse delete(
+  @ApiParam(value = "Zookeeper Path, must start with /", required = true, 
defaultValue = "/") @QueryParam("path") @DefaultValue("") String path) {
+
+path = validateAndNormalizeZKPath(path);
+
+boolean success = pinotHelixResourceManager.deleteZKPath(path);
+if(success) {
+  return new SuccessResponse("Successfully deleted path: " + path);
+} else {
+  throw new ControllerApplicationException(LOGGER, "Failed to delete path: 
" + path,
+  Response.Status.INTERNAL_SERVER_ERROR);
+}
+  }
+
+  @PUT
+  @Path("/zk/put")
+  @Produces(MediaType.TEXT_PLAIN)
+  @ApiOperation(value = "Get content of the znode")
+  @ApiResponses(value = { //
+  @ApiResponse(code = 200, message = "Success"), //
+  @ApiResponse(code = 404, message = "ZK Path not found"), //
+  @ApiResponse(code = 204, message = "No Content"), //
+  @ApiResponse(code = 500, message = "Internal server error")})
+  public SuccessResponse putData(
+  @ApiParam(value = "Zookeeper Path, must start with /", required = true, 
defaultValue = "/") @QueryParam("path") @DefaultValue("") String path,
+  @ApiParam(value = "Content", required = true) @QueryParam("data") 
@DefaultValue("") String content,
+  @ApiParam(value = "expectedVersion", required = true, defaultValue = 
"-1") @QueryParam("expectedVersion") @DefaultValue("-1") String expectedVersion,
+  @ApiParam(value = "accessOption", required = true, defaultValue = "1") 
@QueryParam("accessOption") @DefaultValue("1") String accessOption) {
+path = validateAndNormalizeZKPath(path);
+ZNRecord record = null;
+if (content != null) {
+  record = (ZNRecord) 
_znRecordSerializer.deserialize(content.getBytes(Charsets.UTF_8));
+}
+try {
+  boolean result = pinotHelixResourceManager

Review comment:
   Will this allow any arbitrary path to be set with any arbitrary value (as in, possibly corrupting the cluster state)?

##
File path: 
pinot-controller/src/main/java/org/apache/pinot/controller/api/resources/ZookeeperResource.java
##
@@ -75,6 +80,63 @@ public String getData(
 return null;
   }
 
+  @DELETE
+  @Path("/zk/delete")

Review comment:
   The `delete` API seems a bit dangerous. Should we protect it so that only 
certain paths can be deleted? It seems this API would allow deleting the entire 
cluster state.
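
   One possible shape of such a guard, sketched here with an assumed allow-list of deletable subtrees (this is not code from the PR):

   ```java
// Hedged sketch (not from the PR): restrict the delete endpoint to configured
// subtrees so that cluster-critical znodes cannot be removed through the API.
import java.util.List;

final class ZkDeleteGuard {
  // Assumed policy: only these subtrees may be deleted via the REST API.
  private static final List<String> DELETABLE_PREFIXES = List.of("/PROPERTYSTORE/", "/tmp/");

  static void checkDeletable(String path) {
    boolean allowed = DELETABLE_PREFIXES.stream()
        .anyMatch(prefix -> path.startsWith(prefix) && path.length() > prefix.length());
    if (!allowed) {
      throw new IllegalArgumentException("Refusing to delete ZK path outside the allowed subtrees: " + path);
    }
  }
}
   ```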

##
File path: 
pinot-controller/src/main/java/org/apache/pinot/controller/api/resources/ZookeeperResource.java
##
@@ -75,6 +80,63 @@ public String getData(
 return null;
   }
 
+  @DELETE
+  @Path("/zk/delete")
+  @Produces(MediaType.TEXT_PLAIN)
+  @ApiOperation(value = "Get content of the znode")

Review comment:
   `value` should be `delete` and not `Get`?




