mgd-hin commented on a change in pull request #1506:
URL: https://github.com/apache/systemds/pull/1506#discussion_r786291010



##########
File path: scripts/builtin/decisionTreePredict.dml
##########
@@ -0,0 +1,216 @@
+#-------------------------------------------------------------
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+#-------------------------------------------------------------
+
+#
+# Builtin script implementing prediction based on classification trees with scale features using prediction methods of the
+# Hummingbird paper (https://www.usenix.org/system/files/osdi20-nakandala.pdf).
+#
+# INPUT   PARAMETERS:
+# ---------------------------------------------------------------------------------------------
+# NAME    TYPE             DEFAULT  MEANING
+# ---------------------------------------------------------------------------------------------
+# M       Matrix           ---      Decision tree in format:
+  # Matrix M where each column corresponds to a node in the learned tree and each row contains the following information:
+  #  M[1,j]: id of node j (in a complete binary tree)
+  #  M[2,j]: Offset (no. of columns) to left child of j if j is an internal node, otherwise 0
+  #  M[3,j]: Feature index of the feature (scale feature id if the feature is scale or categorical feature id if the feature is categorical)
+  #    that node j looks at if j is an internal node, otherwise 0
+  #  M[4,j]: Type of the feature that node j looks at if j is an internal node: holds the same information as R input vector
+  #  M[5,j]: If j is an internal node: 1 if the feature chosen for j is scale, otherwise the size of the subset of values
+  #    stored in rows 6,7,... if j is categorical
+  #    If j is a leaf node: number of misclassified samples reaching node j
+  #  M[6:,j]: If j is an internal node: the threshold the example's feature value is compared to is stored at M[6,j] if the feature chosen for j is scale,
+  #     otherwise if the feature chosen for j is categorical, rows 6,7,... depict the value subset chosen for j
+  #     If j is a leaf node: 1 if j is impure and the number of samples at j > threshold, otherwise 0
+#
+# strategy String           ---      Prediction strategy, can be one of ["GEMM", "TT", "PTT"], referring to "Generic matrix multiplication",
+#                     "Tree traversal", and "Perfect tree traversal", respectively
+# -------------------------------------------------------------------------------------------
+# OUTPUT   PARAMETERS:
+# ---------------------------------------------------------------------------------------------
+# NAME    TYPE             DEFAULT  MEANING
+# ---------------------------------------------------------------------------------------------
+#      1- Matrix Y containing the predicted labels for X
+# ---------------------------------------------------------------------------------------------
+
+m_decisionTreePredict = function(
+    Matrix[Double] M,
+    Matrix[Double] X,
+    String strategy
+) return (Matrix[Double] Y) {
+
+  if (strategy == "TT") {
+    Y = predict_TT(M, X)
+  } 
+  else {
+    print ("No such strategy: " + strategy)
+    Y = matrix(0, rows=0, cols=0)
+  }
+}
+
+predict_TT = function (
+    Matrix[Double] M,
+    Matrix[Double] X
+) return (Matrix[Double] Y){
+
+  Y = matrix(0, rows=1, cols=nrow(X))
+  n = ncol(M)
+  tree_depth = ceiling(log(n, 2)) # max depth of complete binary tree with n nodes
+  [N_L, N_R, N_F, N_T, N_C] = createNodeTensors(M)
+
+  for (k in 1:nrow(X)){
+    # iterate over every sample in X matrix
+    sample = X[k,]
+    current_node = 1
+    cnt = 1
+    while (cnt < tree_depth){
+      feature_id = as.scalar(N_F[1, current_node])
+      feature = as.scalar(sample[,feature_id]) # select feature from sample data
+      threshold = as.scalar(N_T[1, current_node])
+
+      if (feature < threshold){
+        # move on to left child node
+        next_node = as.scalar(N_L[1, current_node])
+      } else {
+        # move on to right child node
+        next_node = as.scalar(N_R[1, current_node])
+      }
+      current_node = next_node
+      cnt = cnt + 1
+    }
+
+    class = M[4, current_node]
+    Y[1, k] = class
+  }
+}
+
+createNodeTensors = function(
+  Matrix[Double] M
+  ) return (
+    Matrix[Double] N_L,
+    Matrix[Double] N_R,
+    Matrix[Double] N_F,
+    Matrix[Double] N_T,
+    Matrix[Double] N_C
+    ){
+      N = M[1,] # all tree nodes
+      I = M[2,] # list of node offsets to their left children
+
+      N_L = matrix(0, rows=1, cols=0)
+      N_R = matrix(0, rows=1, cols=0)
+      N_F = matrix(0, rows=1, cols=0)
+      N_T = matrix(0, rows=1, cols=0)
+
+      for (i in 1:ncol(N)){

Review comment:
       With larger inputs, the test time is reduced to about 30% of the previous implementation's. I will therefore resolve this comment as well.
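       For reference, the tree-traversal ("TT") strategy in the diff above can be sketched in plain Python (a minimal illustration, assuming NumPy; the `predict_tt` name and node-tensor arguments are hypothetical stand-ins for the script's N_L/N_R/N_F/N_T tensors, with leaf nodes simplified to point at themselves):

```python
import numpy as np

def predict_tt(n_l, n_r, n_f, n_t, classes, X, depth):
    # n_l / n_r: left / right child index per node (leaves point to themselves)
    # n_f: feature index per internal node; n_t: split threshold per node
    # classes: predicted label stored per node (meaningful at leaves)
    y = np.zeros(X.shape[0])
    for k in range(X.shape[0]):          # iterate over every sample in X
        node = 0
        for _ in range(depth):           # walk down the complete binary tree
            f = int(n_f[node])
            # go left if the sample's feature value is below the threshold
            node = int(n_l[node]) if X[k, f] < n_t[node] else int(n_r[node])
        y[k] = classes[node]             # label at the reached leaf
    return y
```

       For example, a depth-1 tree splitting feature 0 at 0.5 routes samples below the threshold to the left leaf's label and the rest to the right leaf's label.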




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@systemds.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

