[ 
https://issues.apache.org/jira/browse/SPARK-17872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Niek Bartholomeus updated SPARK-17872:
--------------------------------------
    Description: 
The following code fails when the tuple field index used in an aggregate 
function is lower than a field index used in the groupBy clause:
{code}
val testDS = Seq((1, 1, 1, 1)).toDS

// group by fields 1 and 3, aggregate on fields 2 and 4:
testDS
    .groupByKey { case (level1, level1FigureA, level2, level2FigureB) => (level1, level2) }
    .agg(sum($"_2" * $"_4").as[Double])
    .collect
{code}

Error message:
{code}
org.apache.spark.sql.AnalysisException: Reference '_2' is ambiguous, could be: _2#562, _2#569.;
  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolve(LogicalPlan.scala:264)
  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveChildren(LogicalPlan.scala:148)
  at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveReferences$$anonfun$apply$9$$anonfun$applyOrElse$5$$anonfun$31.apply(Analyzer.scala:604)
  at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveReferences$$anonfun$apply$9$$anonfun$applyOrElse$5$$anonfun$31.apply(Analyzer.scala:604)
  at org.apache.spark.sql.catalyst.analysis.package$.withPosition(package.scala:48)
  at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveReferences$$anonfun$apply$9$$anonfun$applyOrElse$5.applyOrElse(Analyzer.scala:604)
  at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveReferences$$anonfun$apply$9$$anonfun$applyOrElse$5.applyOrElse(Analyzer.scala:600)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:301)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:301)
  at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:69)
{code}

Meanwhile the following code, in which the tuple is first remapped so that all 
aggregate field indices are higher than the groupBy field indices, works fine:
{code}
testDS
    .map { case (level1, level1FigureA, level2, level2FigureB) => (level1, level2, level1FigureA, level2FigureB) }
    .groupByKey { case (level1, level2, level1FigureA, level2FigureB) => (level1, level2) }
    .agg(sum($"_3" * $"_4").as[Double])
    .collect
{code}
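An untested sketch of an alternative workaround, not from the original report: naming the columns explicitly before grouping should avoid the clash between the grouping key and the positional `_N` tuple columns entirely, at the cost of switching to the untyped API. The column names below are illustrative only.

{code}
// Hypothetical workaround sketch (untested): replace the positional
// _1.._4 names with explicit column names, then group and aggregate
// with the untyped API so no ambiguous _2 reference is ever created.
val namedDS = testDS.toDF("level1", "level1FigureA", "level2", "level2FigureB")

namedDS
    .groupBy($"level1", $"level2")
    .agg(sum($"level1FigureA" * $"level2FigureB"))
    .collect
{code}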



> aggregate function on dataset with tuples grouped by non sequential fields
> --------------------------------------------------------------------------
>
>                 Key: SPARK-17872
>                 URL: https://issues.apache.org/jira/browse/SPARK-17872
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.0.1
>            Reporter: Niek Bartholomeus



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
