[GitHub] [helix] i3wangyi commented on a change in pull request #417: Move partition health check method into dataAccessor layer

2019-08-19 Thread GitBox
i3wangyi commented on a change in pull request #417: Move partition health 
check method into dataAccessor layer
URL: https://github.com/apache/helix/pull/417#discussion_r315478375
 
 

 ##
 File path: helix-rest/src/main/java/org/apache/helix/rest/common/HelixDataAccessorWrapper.java
 ##
 @@ -1,24 +1,144 @@
 package org.apache.helix.rest.common;
 
-import org.apache.helix.HelixProperty;
-import org.apache.helix.PropertyKey;
-import org.apache.helix.manager.zk.ZKHelixDataAccessor;
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
 
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
+import java.util.Optional;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.stream.Collectors;
+
+import org.apache.helix.HelixProperty;
+import org.apache.helix.PropertyKey;
+import org.apache.helix.ZNRecord;
+import org.apache.helix.manager.zk.ZKHelixDataAccessor;
+import org.apache.helix.model.RESTConfig;
+import org.apache.helix.rest.client.CustomRestClient;
+import org.apache.helix.rest.client.CustomRestClientFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * This is a wrapper for {@link ZKHelixDataAccessor} that caches the result of 
the batch reads it
  * performs.
  * Note that the usage of this object is valid for one REST request.
  */
 public class HelixDataAccessorWrapper extends ZKHelixDataAccessor {
+  private static final Logger LOG = LoggerFactory.getLogger(HelixDataAccessorWrapper.class);
+  private static final ExecutorService POOL = Executors.newCachedThreadPool();
+
+  static final String PARTITION_HEALTH_KEY = "PARTITION_HEALTH";
+  static final String IS_HEALTHY_KEY = "IS_HEALTHY";
+  static final String EXPIRY_KEY = "EXPIRE";
+
   private final Map<PropertyKey, HelixProperty> _propertyCache = new HashMap<>();
   private final Map<String, List<String>> _batchNameCache = new HashMap<>();
+  protected CustomRestClient _restClient;
 
   public HelixDataAccessorWrapper(ZKHelixDataAccessor dataAccessor) {
     super(dataAccessor);
+    _restClient = CustomRestClientFactory.get();
+  }
+
+  public Map<String, Map<String, Boolean>> getAllPartitionsHealthOnLiveInstance(RESTConfig restConfig,
+      Map<String, String> customPayLoads) {
+    // Only checks the instances are online with valid reports
+    List<String> liveInstances = getChildNames(keyBuilder().liveInstances());
+    // Make a parallel batch call for getting all healthreports from ZK.
+    List<HelixProperty> zkHealthReports = getProperty(liveInstances.stream()
+        .map(instance -> keyBuilder().healthReport(instance, PARTITION_HEALTH_KEY))
+        .collect(Collectors.toList()), false);
+    Map<String, Future<Map<String, Boolean>>> parallelTasks = new HashMap<>();
+    for (int i = 0; i < liveInstances.size(); i++) {
+      String liveInstance = liveInstances.get(i);
+      Optional<ZNRecord> maybeHealthRecord = zkHealthReports != null && zkHealthReports.size() > i
 
 Review comment:
   The `getProperty` method was not a very good design, but it's already there. The checks I added may be redundant, but they make the code extra safe without breaking functionality. For example, if I ask for 6 instances' records and it returns {null, record1, null, null, record2, null}, the code still works.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: reviews-unsubscr...@helix.apache.org
For additional commands, e-mail: reviews-h...@helix.apache.org



[GitHub] [helix] i3wangyi commented on a change in pull request #417: Move partition health check method into dataAccessor layer

2019-08-19 Thread GitBox
i3wangyi commented on a change in pull request #417: Move partition health 
check method into dataAccessor layer
URL: https://github.com/apache/helix/pull/417#discussion_r315426751
 
 

 ##
 File path: helix-rest/src/main/java/org/apache/helix/rest/common/HelixDataAccessorWrapper.java
 ##
 @@ -1,24 +1,144 @@
 package org.apache.helix.rest.common;
 
-import org.apache.helix.HelixProperty;
-import org.apache.helix.PropertyKey;
-import org.apache.helix.manager.zk.ZKHelixDataAccessor;
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
 
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
+import java.util.Optional;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.stream.Collectors;
+
+import org.apache.helix.HelixProperty;
+import org.apache.helix.PropertyKey;
+import org.apache.helix.ZNRecord;
+import org.apache.helix.manager.zk.ZKHelixDataAccessor;
+import org.apache.helix.model.RESTConfig;
+import org.apache.helix.rest.client.CustomRestClient;
+import org.apache.helix.rest.client.CustomRestClientFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * This is a wrapper for {@link ZKHelixDataAccessor} that caches the result of 
the batch reads it
  * performs.
  * Note that the usage of this object is valid for one REST request.
  */
 public class HelixDataAccessorWrapper extends ZKHelixDataAccessor {
+  private static final Logger LOG = LoggerFactory.getLogger(HelixDataAccessorWrapper.class);
+  private static final ExecutorService POOL = Executors.newCachedThreadPool();
+
+  static final String PARTITION_HEALTH_KEY = "PARTITION_HEALTH";
+  static final String IS_HEALTHY_KEY = "IS_HEALTHY";
+  static final String EXPIRY_KEY = "EXPIRE";
+
   private final Map<PropertyKey, HelixProperty> _propertyCache = new HashMap<>();
   private final Map<String, List<String>> _batchNameCache = new HashMap<>();
+  protected CustomRestClient _restClient;
 
   public HelixDataAccessorWrapper(ZKHelixDataAccessor dataAccessor) {
     super(dataAccessor);
+    _restClient = CustomRestClientFactory.get();
+  }
+
+  public Map<String, Map<String, Boolean>> getAllPartitionsHealthOnLiveInstance(RESTConfig restConfig,
+      Map<String, String> customPayLoads) {
+    // Only checks the instances are online with valid reports
+    List<String> liveInstances = getChildNames(keyBuilder().liveInstances());
+    // Make a parallel batch call for getting all healthreports from ZK.
+    List<HelixProperty> zkHealthReports = getProperty(liveInstances.stream()
+        .map(instance -> keyBuilder().healthReport(instance, PARTITION_HEALTH_KEY))
+        .collect(Collectors.toList()), false);
+    Map<String, Future<Map<String, Boolean>>> parallelTasks = new HashMap<>();
+    for (int i = 0; i < liveInstances.size(); i++) {
+      String liveInstance = liveInstances.get(i);
+      Optional<ZNRecord> maybeHealthRecord = zkHealthReports != null && zkHealthReports.size() > i
 
 Review comment:
   The single-argument **getProperty** method is deprecated, so I used `getProperty(xxx, boolean throwException)` instead.
   `List<HelixProperty> zkHealthReports` doesn't explicitly guarantee that it will always be the same size as the input list. E.g., I might ask for 10 instances' health reports but get only 5 objects back; that's why I check `zkHealthReports.size() > i` to avoid an index-out-of-bounds issue.
   To be extra safe: `zkHealthReports` itself may be null, it may have fewer items than there are live instances, and each element in it may also be null. All of these cases are covered by the ternary operator statement.
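Outside the Helix codebase, the defensive pattern described above can be sketched in isolation (hypothetical names; `reportAt` is illustrative and not part of the PR):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;

public class NullSafeReports {
  // The result list may be null, shorter than the request, or contain
  // null elements; all three cases collapse to Optional.empty().
  static Optional<String> reportAt(List<String> reports, int i) {
    return (reports != null && reports.size() > i)
        ? Optional.ofNullable(reports.get(i))
        : Optional.empty();
  }

  public static void main(String[] args) {
    List<String> partial = Arrays.asList("record1", null);
    System.out.println(reportAt(partial, 0).isPresent());  // true
    System.out.println(reportAt(partial, 1).isPresent());  // false: null element
    System.out.println(reportAt(partial, 5).isPresent());  // false: list shorter than requested
    System.out.println(reportAt(null, 0).isPresent());     // false: null list
  }
}
```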
   



[GitHub] [helix] i3wangyi commented on a change in pull request #417: Move partition health check method into dataAccessor layer

2019-08-16 Thread GitBox
i3wangyi commented on a change in pull request #417: Move partition health 
check method into dataAccessor layer
URL: https://github.com/apache/helix/pull/417#discussion_r314912447
 
 

 ##
 File path: helix-rest/src/main/java/org/apache/helix/rest/common/HelixDataAccessorWrapper.java
 ##
 @@ -1,24 +1,158 @@
 package org.apache.helix.rest.common;
 
-import org.apache.helix.HelixProperty;
-import org.apache.helix.PropertyKey;
-import org.apache.helix.manager.zk.ZKHelixDataAccessor;
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
 
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
+import java.util.Optional;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.stream.Collectors;
+
+import org.apache.helix.HelixProperty;
+import org.apache.helix.PropertyKey;
+import org.apache.helix.ZNRecord;
+import org.apache.helix.manager.zk.ZKHelixDataAccessor;
+import org.apache.helix.model.ExternalView;
+import org.apache.helix.model.RESTConfig;
+import org.apache.helix.rest.client.CustomRestClient;
+import org.apache.helix.rest.client.CustomRestClientFactory;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 /**
  * This is a wrapper for {@link ZKHelixDataAccessor} that caches the result of 
the batch reads it
  * performs.
  * Note that the usage of this object is valid for one REST request.
  */
 public class HelixDataAccessorWrapper extends ZKHelixDataAccessor {
+  private static final Logger LOG = LoggerFactory.getLogger(HelixDataAccessorWrapper.class);
+  private static final ExecutorService POOL = Executors.newCachedThreadPool();
+
+  private static final String PARTITION_HEALTH_KEY = "PARTITION_HEALTH";
+  private static final String IS_HEALTHY_KEY = "IS_HEALTHY";
+  private static final String EXPIRY_KEY = "EXPIRE";
+
+  private final CustomRestClient _restClient;
   private final Map<PropertyKey, HelixProperty> _propertyCache = new HashMap<>();
   private final Map<String, List<String>> _batchNameCache = new HashMap<>();
 
   public HelixDataAccessorWrapper(ZKHelixDataAccessor dataAccessor) {
     super(dataAccessor);
+    _restClient = CustomRestClientFactory.get();
+  }
+
+  HelixDataAccessorWrapper(ZKHelixDataAccessor dataAccessor, CustomRestClient client) {
+    super(dataAccessor);
+    _restClient = client;
+  }
+
+  public List<ExternalView> getExternalViews() {
+    return getChildNames(keyBuilder().externalViews()).stream()
+        .map(externalView -> (ExternalView) getProperty(keyBuilder().externalView(externalView)))
+        .collect(Collectors.toList());
+  }
+
+  public Map<String, Map<String, Boolean>> getPartitionHealthOfInstance(RESTConfig restConfig,
+      Map<String, String> customPayLoads) {
+    // Only checks the instances are online with valid reports
+    List<String> liveInstances = getChildNames(keyBuilder().liveInstances());
+    // Make a parallel batch call for getting all healthreports from ZK.
+    List<HelixProperty> zkHealthReports = getProperty(liveInstances.stream()
+        .map(instance -> keyBuilder().healthReport(instance, PARTITION_HEALTH_KEY))
+        .collect(Collectors.toList()), false);
+    Map<String, Future<Map<String, Boolean>>> parallelTasks = new HashMap<>();
+    for (int i = 0; i < liveInstances.size(); i++) {
+      String liveInstance = liveInstances.get(i);
+      Optional<ZNRecord> maybeHealthRecord =
+          Optional.ofNullable(zkHealthReports.get(i)).map(HelixProperty::getRecord);
+      parallelTasks.put(liveInstance, POOL.submit(() -> {
+        if (maybeHealthRecord.isPresent()) {
+          return getPartitionsHealth(liveInstance, maybeHealthRecord.get(), restConfig,
+              customPayLoads);
+        } else {
 Review comment:
   It's different from the previous implementation. The private method `getPartitionsHealth` only submits queries when the cached report has expired. The else branch handles the case where we have no record for the instance at all, so it needs to submit a query with an empty partition list.
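The fan-out pattern in the quoted diff (submit one task per instance to a shared pool, then collect the Futures) can be sketched standalone; `queryInstance` below is an illustrative stand-in for the per-instance REST health call, not a Helix API:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelHealthFanOut {
  private static final ExecutorService POOL = Executors.newCachedThreadPool();

  // Submit one task per instance, then block on each Future to collect
  // results; a per-instance failure surfaces when its Future is resolved.
  static Map<String, Boolean> fanOut(List<String> instances) throws Exception {
    Map<String, Future<Boolean>> tasks = new HashMap<>();
    for (String instance : instances) {
      tasks.put(instance, POOL.submit(() -> queryInstance(instance)));
    }
    Map<String, Boolean> results = new HashMap<>();
    for (Map.Entry<String, Future<Boolean>> entry : tasks.entrySet()) {
      results.put(entry.getKey(), entry.getValue().get());
    }
    return results;
  }

  // Stand-in for the per-instance REST health query.
  static boolean queryInstance(String instance) {
    return !instance.isEmpty();
  }

  public static void main(String[] args) throws Exception {
    Map<String, Boolean> health = fanOut(Arrays.asList("host1", "host2"));
    System.out.println(health.get("host1"));  // true
    POOL.shutdown();
  }
}
```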



[GitHub] [helix] i3wangyi commented on a change in pull request #417: Move partition health check method into dataAccessor layer

2019-08-15 Thread GitBox
i3wangyi commented on a change in pull request #417: Move partition health 
check method into dataAccessor layer
URL: https://github.com/apache/helix/pull/417#discussion_r314536956
 
 

 ##
 File path: helix-rest/src/main/java/org/apache/helix/rest/server/service/InstanceServiceImpl.java
 ##
 @@ -234,8 +235,7 @@ private StoppableCheck performCustomInstanceCheck(String clusterId, String insta
 
   private Map<String, StoppableCheck> performPartitionsCheck(List<String> instances,
       RESTConfig restConfig, Map<String, String> customPayLoads) {
-    //TODO: move the heavy partition health preparation in separate class
-    PartitionHealth clusterPartitionsHealth = generatePartitionHealthMapFromZK();
+    PartitionHealth clusterPartitionsHealth = _dataAccessor.getPartitionHealthFromZK();
     // update the health status for those expired partitions on instances
     Map<String, List<String>> expiredPartitionsByInstance =
 
 Review comment:
   yeah, I thought about it as well. Ideally yes. The problem is I need to pass 
restconfig/baseurl/instances, a lot of params to the helix data accessor wrapper
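One common way to tame such a long parameter list is a small parameter object; a hypothetical sketch (none of these names exist in the PR):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Hypothetical bundle for values that would otherwise travel as
// separate arguments (REST base URL, instance list, ...).
public class PartitionCheckContext {
  private final String baseUrl;
  private final List<String> instances;

  PartitionCheckContext(String baseUrl, List<String> instances) {
    this.baseUrl = baseUrl;
    this.instances = Collections.unmodifiableList(instances);
  }

  String baseUrl() { return baseUrl; }
  List<String> instances() { return instances; }

  public static void main(String[] args) {
    PartitionCheckContext ctx =
        new PartitionCheckContext("http://localhost:8080", Arrays.asList("host1", "host2"));
    System.out.println(ctx.instances().size());  // 2
  }
}
```

A method like `performPartitionsCheck` could then take one context argument instead of three, at the cost of an extra type to maintain.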

