Baunsgaard commented on a change in pull request #1506: URL: https://github.com/apache/systemds/pull/1506#discussion_r785312752
########## File path: scripts/builtin/decisionTreePredict.dml ##########

@@ -0,0 +1,216 @@
+#-------------------------------------------------------------
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied. See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+#-------------------------------------------------------------
+
+#
+# Builtin script implementing prediction based on classification trees with scale features, using the prediction
+# methods of the Hummingbird paper (https://www.usenix.org/system/files/osdi20-nakandala.pdf).
+#
+# INPUT PARAMETERS:
+# ---------------------------------------------------------------------------------------------
+# NAME      TYPE     DEFAULT  MEANING
+# ---------------------------------------------------------------------------------------------
+# Decision tree in format:
+ # Matrix M where each column corresponds to a node in the learned tree and each row contains the following information:

Review comment: consider linking to the training method for the description of the M input. Also format it such that the comment sign '#' is the first character.
########## File path: scripts/builtin/decisionTreePredict.dml ##########

+# Decision tree in format:
+ # Matrix M where each column corresponds to a node in the learned tree and each row contains the following information:
+ # M[1,j]: id of node j (in a complete binary tree)
+ # M[2,j]: Offset (no. of columns) to the left child of j if j is an internal node, otherwise 0
+ # M[3,j]: Feature index of the feature (scale feature id if the feature is scale, or categorical feature id if the feature is categorical)
+ #         that node j looks at if j is an internal node, otherwise 0
+ # M[4,j]: Type of the feature that node j looks at if j is an internal node: holds the same information as the R input vector
+ # M[5,j]: If j is an internal node: 1 if the feature chosen for j is scale, otherwise the size of the subset of values
+ #         stored in rows 6,7,... if j is categorical
+ #         If j is a leaf node: number of misclassified samples reaching node j
+ # M[6:,j]: If j is an internal node: the threshold the example's feature value is compared to is stored at M[6,j] if the feature chosen for j is scale;
+ #          otherwise, if the feature chosen for j is categorical, rows 6,7,... depict the value subset chosen for j
+ #          If j is a leaf node: 1 if j is impure and the number of samples at j > threshold, otherwise 0
+#
+# strategy  String   Prediction strategy, can be one of ["GEMM", "TT", "PTT"], referring to "Generic matrix multiplication",
+#           "Tree traversal", and "Perfect tree traversal", respectively
+# -------------------------------------------------------------------------------------------
+# OUTPUT PARAMETERS:
+# ---------------------------------------------------------------------------------------------
+# NAME      TYPE     DEFAULT  MEANING
+# ---------------------------------------------------------------------------------------------
+# 1- Matrix Y containing the predicted labels for X
+# ---------------------------------------------------------------------------------------------
+
+m_decisionTreePredict = function(
+  Matrix[Double] M,

Review comment: please format so that the arguments are not split across newlines.
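For illustration, the M layout described above for a toy depth-1 stump could look like the following NumPy sketch (hypothetical values, not part of the PR; rows follow the docstring's layout, shown 0-indexed here while the DML description is 1-indexed):

```python
import numpy as np

# Hypothetical depth-1 stump: node 1 splits on scale feature 2 at threshold 0.5;
# nodes 2 and 3 are leaves. One column per node, one row per M[...] field above.
M = np.array([
    [1.0, 2.0, 3.0],  # M[1,j]: id of node j in the complete binary tree
    [1.0, 0.0, 0.0],  # M[2,j]: column offset to left child (0 for leaves)
    [2.0, 0.0, 0.0],  # M[3,j]: feature index tested at node 1
    [1.0, 0.0, 0.0],  # M[4,j]: feature type (1 = scale, per the R input vector)
    [1.0, 0.0, 0.0],  # M[5,j]: 1 = scale; for leaves, misclassified-sample count
    [0.5, 0.0, 0.0],  # M[6,j]: split threshold for the scale feature
])

# The left child of a node sits "offset" columns to its right:
left_col = 0 + int(M[1, 0])   # column index of node 1's left child
right_col = left_col + 1      # right child is the adjacent column
print(left_col, right_col)    # 1 2
```

The offset-based addressing is what lets the predict code jump between columns without materializing explicit child pointers.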
########## File path: scripts/builtin/decisionTreePredict.dml ##########

+# -------------------------------------------------------------------------------------------
+# OUTPUT PARAMETERS:

Review comment: only write
```
# OUTPUT:
```
on this line

########## File path: scripts/builtin/decisionTreePredict.dml ##########

+m_decisionTreePredict = function(
+  Matrix[Double] M,
+  Matrix[Double] X,
+  String strategy
+) return (Matrix[Double] Y) {
+
+  if (strategy == "TT") {
+    Y = predict_TT(M, X)
+  }
+  else {
+    print ("No such strategy" + strategy)
+    Y = matrix("0", rows=0, cols=0)
+  }
+}
+
+predict_TT = function (
+  Matrix[Double] M,
+  Matrix[Double] X
+) return (Matrix[Double] Y){
+
+  Y = matrix(0, rows=1, cols=nrow(X))
+  n = ncol(M)
+  tree_depth = ceiling(log(7,2)) # max depth of complete binary tree
+  [N_L, N_R, N_F, N_T, N_C] = createNodeTensors(M)
+
+  for (k in 1:nrow(X)){
+    # iterate over every sample in X matrix
+    sample = X[k,]
+    current_node = 1
+    cnt = 1
+    while (cnt < tree_depth){
+      feature_id = as.scalar(N_F[1, current_node])
+      feature = as.scalar(sample[,feature_id]) # select feature from sample data
+      threshold = as.scalar(N_T[1, current_node])
+
+      if (feature < threshold){
+        # move on to left child node
+        next_node = as.scalar(N_L[1, current_node])
+      } else {
+        # move on to right child node
+        next_node = as.scalar(N_R[1, current_node])
+      }
+      current_node = next_node
+      cnt += 1
+    }
+
+    class = M[4, current_node]
+    Y[1, k] = class
+  }
+}
+
+createNodeTensors = function(
+    Matrix[Double] M
+  ) return (
+    Matrix[Double] N_L,
+    Matrix[Double] N_R,
+    Matrix[Double] N_F,

Review comment: remove some newlines.
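The tree-traversal ("TT") strategy in predict_TT above can be sketched outside DML as well. The following Python sketch mirrors the same idea under simplifying assumptions (0-indexed flat vectors, leaves pointing to themselves, class labels kept in a separate hypothetical vector rather than inside M):

```python
import numpy as np

def predict_tt(n_l, n_r, n_f, n_t, labels, X, depth):
    """Tree-traversal prediction: one fixed-length descent per sample.

    n_l/n_r: child index per node (leaves point to themselves),
    n_f/n_t: tested feature index and threshold per node,
    labels:  class label stored at each node (meaningful at leaves).
    """
    y = np.empty(X.shape[0])
    for k, sample in enumerate(X):
        node = 0
        for _ in range(depth):  # complete tree: 'depth' steps always suffice
            if sample[n_f[node]] < n_t[node]:
                node = n_l[node]  # go left
            else:
                node = n_r[node]  # go right
        y[k] = labels[node]
    return y

# Toy stump: node 0 tests feature 0 against 0.5; nodes 1/2 are leaves (classes 1/2).
n_l = np.array([1, 1, 2]); n_r = np.array([2, 1, 2])      # leaves self-loop
n_f = np.array([0, 0, 0]); n_t = np.array([0.5, 0.0, 0.0])
labels = np.array([0.0, 1.0, 2.0])
X = np.array([[0.2], [0.9]])
print(predict_tt(n_l, n_r, n_f, n_t, labels, X, depth=1))  # [1. 2.]
```

Making leaves point to themselves is what lets every sample take exactly `depth` steps, which is the property the Hummingbird tree-traversal formulation relies on.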
########## File path: scripts/builtin/decisionTreePredict.dml ##########

+# INPUT PARAMETERS:
+# ---------------------------------------------------------------------------------------------
+# NAME      TYPE     DEFAULT  MEANING
+# ---------------------------------------------------------------------------------------------
+# Decision tree in format:
+ # Matrix M where each column corresponds to a node in the learned tree and each row contains the following information:
+#

Review comment: You are missing docs for X
########## File path: scripts/builtin/decisionTreePredict.dml ##########

+createNodeTensors = function(
+    Matrix[Double] M
+  ) return (
+    Matrix[Double] N_L,
+    Matrix[Double] N_R,
+    Matrix[Double] N_F,
+    Matrix[Double] N_T,
+    Matrix[Double] N_C
+  ){
+  N = M[1,] # all tree nodes
+  I = M[2,] # list of node offsets to their left children
+
+  N_L = matrix(0, rows=1, cols=0)
+  N_R = matrix(0, rows=1, cols=0)
+  N_F = matrix(0, rows=1, cols=0)
+  N_T = matrix(0, rows=1, cols=0)
+
+  for (i in 1:ncol(N)){
+    # if the node is an internal node, add its left child to the N_L tensor
+    if (as.scalar(I[1,i]) != 0){
+      offset = as.scalar(I[1, i])
+      N_L = cbind(N_L, N[1, i+offset])
+    } else {
+      N_L = cbind(N_L, as.matrix(i))
+    }
+  }
+
+  for (i in 1:ncol(N)){
+    # if the node is an internal node, add its right child to the N_R tensor
+    if (as.scalar(I[1,i]) != 0){
+      offset = as.scalar(I[1, i])
+      leftChild = as.scalar(N[1, i+offset])
+      rightChild = leftChild + 1
+
+      if (as.scalar(I[1, leftChild]) == 0 & as.scalar(I[1, rightChild]) != 0 ){
+        rightChild = i
+      }
+      N_R = cbind(N_R, N[1, rightChild])
+    } else {
+      N_R = cbind(N_R, as.matrix(i))
+    }
+  }
+
+  for (i in 1:ncol(N)){
+    # if the node is an internal node, add index of the feature it evaluates
+    if (as.scalar(M[3,i]) != 0){
+      N_F = cbind(N_F, M[3,i])
+    } else {
+      N_F = cbind(N_F, as.matrix(1))
+    }
+  }
+
+  for (i in 1:ncol(N)){
+    # if the node is an internal node, add the threshold of the feature it evaluates
+    if (as.scalar(M[6,i]) != 0){
+      N_T = cbind(N_T, M[6,i])
+    } else {
+      N_T = cbind(N_T, as.matrix(0))
+    }
+  }
+
+  unique_classes = matrix(0, rows=1, cols=1)
+  predicted_labels = M[4,]
+  zero = FALSE
+
+  for (i in 1:ncol(predicted_labels)){
+    if (as.scalar(I[1, i]) == 0){ # for every leaf node
+      new_class = TRUE
+
+      for (k in 1:ncol(unique_classes)){ # check if leaf node maps to a new class
+        if (as.scalar(predicted_labels[1, i]) == as.scalar(unique_classes[1,k])) {
+          new_class = FALSE
+        }
+      }
+
+      if (as.scalar(predicted_labels[1, i]) == 0){
+        zero = TRUE
+      }
+
+      if (new_class == TRUE){
+        unique_classes = cbind(unique_classes, predicted_labels[1, i])
+      }
+    }
+  }
+
+  if (zero == FALSE){
+    C = unique_classes[1,2:ncol(unique_classes)]
+  } else {
+    C = unique_classes[1,]
+  }
+
+  N_C = matrix(0, rows=length(N), cols=length(C))
+
+  for (i in 1:ncol(N)){
+    # check if node is leaf node
+    if (as.scalar(I[1, i])==0){
+      for (k in 1:ncol(C)){
+        # add node if it maps to a class
+        if (as.scalar(M[4, i]) == as.scalar(C[1, k])){
+          N_C[i, k] = 1
+        }
+      }
+    }
+  }
+}

Review comment: add a newline at the end of the file.
########## File path: scripts/builtin/decisionTreePredict.dml ##########

+predict_TT = function (
+  Matrix[Double] M,
+  Matrix[Double] X

Review comment: remove new lines
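The "GEMM" strategy listed in the docstring is not implemented in this diff. As a rough sketch of the Hummingbird idea (hypothetical matrices A, B, C, D, E, simplified from the paper): all split comparisons are evaluated at once with matrix operations, then each leaf is selected when every condition on its root-to-leaf path holds:

```python
import numpy as np

def predict_gemm(X, A, B, C, D, E):
    # S[k,i] = 1 if sample k satisfies the split condition at internal node i
    S = (X @ A < B).astype(float)
    # C holds +1/-1 per (internal node, leaf) path membership; a leaf is
    # reached exactly when the signed condition sum equals its left-edge count D
    leaf = (S @ C == D).astype(float)
    return leaf @ E  # map the selected leaf to its class label

# Toy stump on one feature: split at 0.5; left leaf -> class 1, right leaf -> class 2.
A = np.array([[1.0]])         # one-hot: internal node 0 reads feature 0
B = np.array([0.5])           # threshold per internal node
C = np.array([[1.0, -1.0]])   # leaf 0 in left subtree (+1), leaf 1 in right (-1)
D = np.array([1.0, 0.0])      # number of left-edges on each leaf's path
E = np.array([[1.0], [2.0]])  # leaf -> class label
X = np.array([[0.2], [0.9]])
print(predict_gemm(X, A, B, C, D, E).ravel())  # [1. 2.]
```

The appeal of this formulation is that the whole batch reduces to two matrix multiplications plus elementwise comparisons, which map well onto existing linear-algebra runtimes such as SystemDS.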
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# +#------------------------------------------------------------- + +# +# Builtin script implementing prediction based on classification trees with scale features using prediction methods of the +# Hummingbird paper (https://www.usenix.org/system/files/osdi20-nakandala.pdf). +# +# INPUT PARAMETERS: +# --------------------------------------------------------------------------------------------- +# NAME TYPE DEFAULT MEANING +# --------------------------------------------------------------------------------------------- +# Decision tree in format: + # Matrix M where each column corresponds to a node in the learned tree and each row contains the following information: + # M[1,j]: id of node j (in a complete binary tree) + # M[2,j]: Offset (no. of columns) to left child of j if j is an internal node, otherwise 0 + # M[3,j]: Feature index of the feature (scale feature id if the feature is scale or categorical feature id if the feature is categorical) + # that node j looks at if j is an internal node, otherwise 0 + # M[4,j]: Type of the feature that node j looks at if j is an internal node: holds the same information as R input vector + # M[5,j]: If j is an internal node: 1 if the feature chosen for j is scale, otherwise the size of the subset of values + # stored in rows 6,7,... 
if j is categorical + # If j is a leaf node: number of misclassified samples reaching at node j + # M[6:,j]: If j is an internal node: Threshold the example's feature value is compared to is stored at M[6,j] if the feature chosen for j is scale, + # otherwise if the feature chosen for j is categorical rows 6,7,... depict the value subset chosen for j + # If j is a leaf node 1 if j is impure and the number of samples at j > threshold, otherwise 0 +# +# strategy String Prediction strategy, can be one of ["GEMM", "TT", "PTT"], referring to "Generic matrix multiplication", +# "Tree traversal", and "Perfect tree traversal", respectively +# ------------------------------------------------------------------------------------------- +# OUTPUT PARAMETERS: +# --------------------------------------------------------------------------------------------- +# NAME TYPE DEFAULT MEANING +# --------------------------------------------------------------------------------------------- +# 1- Matrix Y containing the predicted labels for X +# --------------------------------------------------------------------------------------------- + +m_decisionTreePredict = function( + Matrix[Double] M, + Matrix[Double] X, + String strategy +) return (Matrix[Double] Y) { + + if (strategy == "TT") { + Y = predict_TT(M, X) + } + else { + print ("No such strategy" + strategy) + Y = matrix("0", rows=0, cols=0) + } +} + +predict_TT = function ( + Matrix[Double] M, + Matrix[Double] X +) return (Matrix[Double] Y){ + + Y = matrix(0, rows=1, cols=nrow(X)) + n = ncol(M) + tree_depth = ceiling(log(7,2)) # max depth of complete binary tree + [N_L, N_R, N_F, N_T, N_C] = createNodeTensors(M) + + for (k in 1:nrow(X)){ + # iterate over every sample in X matrix + sample = X[k,] + current_node = 1 + cnt = 1 + while (cnt < tree_depth){ + feature_id = as.scalar(N_F[1, current_node]) + feature = as.scalar(sample[,feature_id]) # select feature from sample data + threshold = as.scalar(N_T[1, current_node]) + + if (feature 
< threshold){ + # move on to left child node + next_node = as.scalar(N_L[1, current_node]) + } else { + # move on to right child node + next_node = as.scalar(N_R[1, current_node]) + } + current_node = next_node + cnt +=1 + } + + class = M[4, current_node] + Y[1, k] = class + } +} + +createNodeTensors = function( + Matrix[Double] M + ) return ( + Matrix[Double] N_L, + Matrix[Double] N_R, + Matrix[Double] N_F, + Matrix[Double] N_T, + Matrix[Double] N_C + ){ + N = M[1,] # all tree nodes + I = M[2,] # list of node offsets to their left children + + N_L = matrix(0, rows=1, cols=0) Review comment: i am worried about performance of this implementation of a empty matrix you cbind into. Since N is known from the start you could allocate the correct size from the beginning. ########## File path: scripts/builtin/decisionTreePredict.dml ########## @@ -0,0 +1,216 @@ +#------------------------------------------------------------- +# +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. 
########## File path: scripts/builtin/decisionTreePredict.dml ##########

+  N_L = matrix(0, rows=1, cols=0)
+  N_R = matrix(0, rows=1, cols=0)
+  N_F = matrix(0, rows=1, cols=0)
+  N_T = matrix(0, rows=1, cols=0)
+
+  for (i in 1:ncol(N)){
+    # if the node is an internal node, add its left child to the N_L tensor
+    if (as.scalar(I[1,i]) != 0){
+      offset = as.scalar(I[1, i])
+      N_L = cbind(N_L, N[1, i+offset])
+    } else {
+      N_L = cbind(N_L, as.matrix(i))
+    }
+  }
+
+  for (i in 1:ncol(N)){
+    # if the node is an internal node, add its right child to the N_R tensor
+    if (as.scalar(I[1,i]) != 0){
+      offset = as.scalar(I[1, i])
+      leftChild = as.scalar(N[1, i+offset])
+      rightChild = leftChild + 1
+      if (as.scalar(I[1, leftChild]) == 0 & as.scalar(I[1, rightChild]) != 0 ){
+        rightChild = i
+      }
+      N_R = cbind(N_R, N[1, rightChild])
+    } else {
+      N_R = cbind(N_R, as.matrix(i))
+    }
+  }
+
+  for (i in 1:ncol(N)){
+    # if the node is an internal node, add index of the feature it evaluates
+    if (as.scalar(M[3,i]) != 0){
+      N_F = cbind(N_F, M[3,i])
+    } else {
+      N_F = cbind(N_F, as.matrix(1))
+    }
+  }
+
+  for (i in 1:ncol(N)){
+    # if the node is an internal node, add the threshold of the feature it evaluates
+    if (as.scalar(M[6,i]) != 0){
+      N_T = cbind(N_T, M[6,i])
+    } else {
+      N_T = cbind(N_T, as.matrix(0))
+    }
+  }

Review comment:
  You have 4 for loops in a row that each iterate over N; could these be joined? Also, if they can, consider whether it is possible to use parfor instead once you have removed the cbinds. (parfor simply parallelizes each loop iteration.)
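Combining the reviewer's two suggestions, the four loops could collapse into a single parfor over preallocated tensors. A hedged sketch: the special case in the original right-child logic is elided for brevity, and parfor is only safe here because each iteration writes disjoint columns:

```dml
n = ncol(M)
N_L = matrix(0, rows=1, cols=n)
N_R = matrix(0, rows=1, cols=n)
N_F = matrix(1, rows=1, cols=n)  # feature index defaults to 1 for leaves
N_T = matrix(0, rows=1, cols=n)

parfor (i in 1:n) {
  if (as.scalar(M[2,i]) != 0) {
    # internal node: fill children, feature index, and threshold
    offset = as.scalar(M[2,i])
    N_L[1,i] = M[1,i+offset]
    N_R[1,i] = as.scalar(M[1,i+offset]) + 1  # simplified right-child rule
    N_F[1,i] = M[3,i]
    N_T[1,i] = M[6,i]
  } else {
    # leaf node: both children point back to the node itself
    N_L[1,i] = i
    N_R[1,i] = i
  }
}
```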
########## File path: scripts/builtin/decisionTreePredict.dml ##########

+    unique_classes = matrix(0, rows=1, cols=1)
+    predicted_labels = M[4,]
+    zero = FALSE

Review comment:
  Your indentation suddenly changes here; please use two spaces inside the function as the base indentation.
########## File path: scripts/builtin/decisionTreePredict.dml ##########

+    for (i in 1: ncol(predicted_labels)){
+      if (as.scalar(I[1, i]) == 0){ # for every leaf node
+        new_class = TRUE
+        for (k in 1: ncol(unique_classes)){ # check if leaf node maps to a new class
+          if (as.scalar(predicted_labels[1, i]) == as.scalar(unique_classes[1,k])) {
+            new_class = FALSE
+          }
+        }
+        if (as.scalar(predicted_labels[1, i]) == 0){
+          zero = TRUE
+        }
+        if (new_class == TRUE){
+          unique_classes = cbind(unique_classes, predicted_labels[1, i])
+        }
+      }
+    }

Review comment:
  Please consider whether you could split this up into two passes: 1.
find the number of unique classes; 2. add the prediction labels.

########## File path: scripts/builtin/decisionTreePredict.dml ##########

+# INPUT PARAMETERS:
+# ---------------------------------------------------------------------------------------------
+# NAME        TYPE     DEFAULT    MEANING
+# ---------------------------------------------------------------------------------------------

Review comment:
  The docs should be aligned with this header such that the type of each parameter you describe starts in the TYPE column, etc. See for instance scripts/builtin/msvm.dml.
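The two-pass split suggested above could look roughly like this (DML sketch; `I` and `M` are the quoted function's variables, the helper logic is illustrative rather than the script's actual code, and the zero-label handling from the original is simplified away):

```dml
labels = M[4,]
n = ncol(labels)

# Pass 1: collect the distinct classes found at the leaf nodes.
unique_classes = matrix(0, rows=1, cols=n)  # upper bound; trimmed below
num_unique = 0
for (i in 1:n) {
  if (as.scalar(I[1,i]) == 0) {  # leaf node
    seen = FALSE
    if (num_unique > 0) {
      seen = sum(unique_classes[1,1:num_unique] == as.scalar(labels[1,i])) > 0
    }
    if (seen == FALSE) {
      num_unique = num_unique + 1
      unique_classes[1,num_unique] = labels[1,i]
    }
  }
}
C = unique_classes[1,1:num_unique]

# Pass 2: one-hot encode each leaf's predicted label against C.
N_C = matrix(0, rows=n, cols=num_unique)
for (i in 1:n) {
  if (as.scalar(I[1,i]) == 0) {
    N_C[i,] = (C == as.scalar(labels[1,i]))
  }
}
```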
########## File path: scripts/builtin/decisionTreePredict.dml ##########

+    if (zero == FALSE){
+      C = unique_classes[1,2:ncol(unique_classes)]
+    } else {
+      C = unique_classes[1,]
+    }
+
+    N_C = matrix(0, rows=length(N), cols= length(C))
+
+    for (i in 1:ncol(N)){
+
+      # check if node is leaf node
+      if (as.scalar(I[1, i])==0){
+

Review comment:
  Remove these empty lines at the beginning or end of if statements.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@systemds.apache.org

For queries about this service, please contact Infrastructure at: us...@infra.apache.org