Hello,
At present, Ignite converts the predicted values for XGBoost models as
follows:
double res = 0;
// Sum the raw margins produced by all trees in the ensemble
for (double prediction : predictions)
    res += prediction;
// Apply the logistic (sigmoid) function to map the summed margin to a probability
return 1.0 / (1.0 + Math.exp(-res));
More flexible aggregations for XGBoost are coming in the future.
You can use the model for classification or regression depending on how you
encode it, bearing in mind the aggregation described above.
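For binary classification, that aggregated value is a probability, so you still need to pick a decision threshold yourself. A minimal sketch (the class and method names here are illustrative, not part of the Ignite API; the 0.5 threshold is a common default, not something Ignite fixes for you):

```java
// Illustrative sketch only; XgbClassifier is a hypothetical helper,
// not an Ignite class.
public class XgbClassifier {
    // Sum the raw tree margins and apply the logistic function,
    // matching the aggregation Ignite performs for XGBoost models.
    public static double probability(double[] predictions) {
        double res = 0;
        for (double p : predictions)
            res += p;
        return 1.0 / (1.0 + Math.exp(-res));
    }

    // Derive a binary label from the probability (0.5 threshold assumed).
    public static int label(double[] predictions) {
        return probability(predictions) >= 0.5 ? 1 : 0;
    }
}
```

For regression-style encodings you would use the summed margin directly instead of passing it through the sigmoid.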
Take a look at Ignite's native gradient boosting capabilities:
https://www.gridgain.com/docs/latest/developers-guide/machine-learning/grad-boost
classification:
https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/ml/tree/boosting/GDBOnTreesClassificationTrainerExample.java
regression:
https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/ml/tree/boosting/GDBOnTreesRegressionTrainerExample.java
GDBModel uses WeightedPredictionsAggregator as the model answer reducer.
This aggregator computes the answer of the meta-model as "result = bias +
p1*w1 + p2*w2 + ...", where:
pi is the answer of the i-th model,
wi is the weight of that model in the composition.
GDB uses the mean value of the labels as the bias parameter in the aggregator.
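The weighted reduction above can be sketched in a few lines. This is a simplified stand-in that mirrors what WeightedPredictionsAggregator computes, not the actual Ignite class:

```java
// Simplified sketch of the weighted aggregation used by GDBModel;
// WeightedAggregatorSketch is a hypothetical name, not an Ignite class.
public class WeightedAggregatorSketch {
    // result = bias + p1*w1 + p2*w2 + ...
    public static double aggregate(double bias, double[] answers, double[] weights) {
        double result = bias;
        for (int i = 0; i < answers.length; i++)
            result += answers[i] * weights[i];
        return result;
    }
}
```

With a bias of 1.0, answers {2.0, 3.0}, and weights {0.5, 1.0}, this yields 1.0 + 1.0 + 3.0 = 5.0.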
Also take a look at Ignite's Random Forest learner:
https://www.gridgain.com/docs/latest/developers-guide/machine-learning/random-forest
It provides a choice of aggregators depending on the problem domain:
MeanValuePredictionsAggregator - computes the answer of a random forest as
the mean value of the predictions from all models in the composition. This
is typically used for regression tasks.
OnMajorityPredictionsAggregator - takes the mode of the predictions from all
models in the composition. This is useful for classification tasks.
NOTE: this aggregator supports multi-class classification.
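Both reductions are easy to sketch. These are simplified stand-ins for the two aggregators described above, not the real Ignite classes:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-ins for MeanValuePredictionsAggregator and
// OnMajorityPredictionsAggregator; the class below is hypothetical.
public class ForestAggregatorsSketch {
    // Mean of all model answers (regression-style reduction).
    public static double mean(double[] predictions) {
        double sum = 0;
        for (double p : predictions)
            sum += p;
        return sum / predictions.length;
    }

    // Mode (majority vote) of all model answers (classification-style
    // reduction); works with any number of class labels.
    public static double majority(double[] predictions) {
        Map<Double, Integer> counts = new HashMap<>();
        double best = predictions[0];
        int bestCount = 0;
        for (double p : predictions) {
            int c = counts.merge(p, 1, Integer::sum);
            if (c > bestCount) {
                bestCount = c;
                best = p;
            }
        }
        return best;
    }
}
```

For predictions {1.0, 1.0, 2.0}, the mean reduction gives 4.0/3 while the majority reduction gives 1.0, which is why the latter fits classification.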
Thanks, Alex