[jira] [Updated] (SPARK-29811) Missing persist on oldDataset in ml.RandomForestRegressor.train()

2019-11-09 Thread Aman Omer (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Omer updated SPARK-29811:
--
Parent: SPARK-29818
Issue Type: Sub-task  (was: Improvement)

> Missing persist on oldDataset in ml.RandomForestRegressor.train()
> -
>
> Key: SPARK-29811
> URL: https://issues.apache.org/jira/browse/SPARK-29811
> Project: Spark
>  Issue Type: Sub-task
>  Components: ML
>Affects Versions: 2.4.3
>Reporter: Dong Wang
>Priority: Major
>
> The RDD oldDataset in ml.regression.RandomForestRegressor.train() needs to be
> persisted, because it is used in two actions: RandomForest.run() and
> oldDataset.first(). Without a persist, the second action recomputes the RDD's
> entire lineage.
> {code:scala}
>   override protected def train(
>       dataset: Dataset[_]): RandomForestRegressionModel = instrumented { instr =>
>     val categoricalFeatures: Map[Int, Int] =
>       MetadataUtils.getCategoricalFeatures(dataset.schema($(featuresCol)))
>     val oldDataset: RDD[LabeledPoint] = extractLabeledPoints(dataset) // Needs to persist
>     val strategy =
>       super.getOldStrategy(categoricalFeatures, numClasses = 0, OldAlgo.Regression, getOldImpurity)
>
>     instr.logPipelineStage(this)
>     instr.logDataset(dataset)
>     instr.logParams(this, labelCol, featuresCol, predictionCol, impurity, numTrees,
>       featureSubsetStrategy, maxDepth, maxBins, maxMemoryInMB, minInfoGain,
>       minInstancesPerNode, seed, subsamplingRate, cacheNodeIds, checkpointInterval)
>
>     // First use of oldDataset
>     val trees = RandomForest
>       .run(oldDataset, strategy, getNumTrees, getFeatureSubsetStrategy, getSeed, Some(instr))
>       .map(_.asInstanceOf[DecisionTreeRegressionModel])
>
>     // Second use of oldDataset
>     val numFeatures = oldDataset.first().features.size
>     instr.logNamedValue(Instrumentation.loggerTags.numFeatures, numFeatures)
>     new RandomForestRegressionModel(uid, trees, numFeatures)
>   }
> {code}
> The same situation exists in ml.classification.RandomForestClassifier.train().
> This issue was reported by our tool CacheCheck, which dynamically detects
> persist()/unpersist() API misuses.
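> A minimal sketch of a possible fix (not a complete patch; the choice of
> StorageLevel.MEMORY_AND_DISK and the unpersist placement are assumptions,
> to be decided by the committers):
> {code:scala}
> import org.apache.spark.storage.StorageLevel
>
> // Persist before the first action so RandomForest.run() and first()
> // share one materialization instead of recomputing the lineage twice.
> val oldDataset: RDD[LabeledPoint] = extractLabeledPoints(dataset)
>   .persist(StorageLevel.MEMORY_AND_DISK)
>
> // ... RandomForest.run(oldDataset, ...)            // first action
> val numFeatures = oldDataset.first().features.size  // second action
> oldDataset.unpersist() // release cached blocks once both actions are done
> {code}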



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org


