[ https://issues.apache.org/jira/browse/SPARK-17972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Cheng Lian updated SPARK-17972:
-------------------------------
    Description: 
The following Spark shell snippet creates a series of query plans that grow
exponentially. The {{i}}-th plan is created using 4 *cached* copies of the
{{i - 1}}-th plan.

{code}
(0 until 6).foldLeft(Seq(1, 2, 3).toDS) { (plan, iteration) =>
  val start = System.currentTimeMillis()
  val result = plan.join(plan, "value").join(plan, "value").join(plan, "value").join(plan, "value")
  result.cache()
  System.out.println(s"Iteration $iteration takes time ${System.currentTimeMillis() - start} ms")
  result.as[Int]
}
{code}

We can see that although every intermediate result is cached, query planning
time still grows exponentially and quickly becomes unbearable, because each
plan still contains multiple full copies of the previous plan.

{noformat}
Iteration 0 takes time 9 ms
Iteration 1 takes time 19 ms
Iteration 2 takes time 61 ms
Iteration 3 takes time 219 ms
Iteration 4 takes time 830 ms
Iteration 5 takes time 4080 ms
{noformat}

Similar scenarios arise in iterative ML code and significantly affect
usability.

This issue can be fixed by introducing a {{checkpoint()}} method for 
{{Dataset}} that truncates both the query plan and the lineage of the 
underlying RDD.
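
As a rough sketch of how such a method could be used in the snippet above
(assuming {{checkpoint()}} eagerly materializes the {{Dataset}} and returns a
new one whose plan no longer references the previous iteration; the checkpoint
directory path below is only illustrative):

{code}
// A checkpoint directory must be configured before checkpointing
// (the path is illustrative).
spark.sparkContext.setCheckpointDir("/tmp/spark-checkpoints")

(0 until 6).foldLeft(Seq(1, 2, 3).toDS) { (plan, iteration) =>
  val start = System.currentTimeMillis()
  val result = plan.join(plan, "value").join(plan, "value").join(plan, "value").join(plan, "value")
  // checkpoint() would truncate both the logical plan and the RDD lineage,
  // so the next iteration starts from a flat plan instead of a nested one.
  val checkpointed = result.checkpoint()
  System.out.println(s"Iteration $iteration takes time ${System.currentTimeMillis() - start} ms")
  checkpointed.as[Int]
}
{code}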


> Query planning slows down dramatically for large query plans even when 
> sub-trees are cached
> -------------------------------------------------------------------------------------------
>
>                 Key: SPARK-17972
>                 URL: https://issues.apache.org/jira/browse/SPARK-17972
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.6.2, 2.0.1
>            Reporter: Cheng Lian
>            Assignee: Cheng Lian
>             Fix For: 2.1.0
>


