GitHub user gatorsmile commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19083#discussion_r142020459
  
    --- Diff: sql/core/src/test/scala/org/apache/spark/sql/DataFrameTimeWindowingSuite.scala ---
    @@ -228,29 +241,35 @@ class DataFrameTimeWindowingSuite extends QueryTest with SharedSQLContext with B
       }
     
       test("millisecond precision sliding windows") {
    -    val df = Seq(
    -      ("2016-03-27 09:00:00.41", 3),
    -      ("2016-03-27 09:00:00.62", 6),
    -      ("2016-03-27 09:00:00.715", 8)).toDF("time", "value")
    -    checkAnswer(
    -      df.groupBy(window($"time", "200 milliseconds", "40 milliseconds", "0 milliseconds"))
    -        .agg(count("*").as("counts"))
    -        .orderBy($"window.start".asc)
    -        .select($"window.start".cast(StringType), $"window.end".cast(StringType), $"counts"),
    -      Seq(
    -        Row("2016-03-27 09:00:00.24", "2016-03-27 09:00:00.44", 1),
    -        Row("2016-03-27 09:00:00.28", "2016-03-27 09:00:00.48", 1),
    -        Row("2016-03-27 09:00:00.32", "2016-03-27 09:00:00.52", 1),
    -        Row("2016-03-27 09:00:00.36", "2016-03-27 09:00:00.56", 1),
    -        Row("2016-03-27 09:00:00.4", "2016-03-27 09:00:00.6", 1),
    -        Row("2016-03-27 09:00:00.44", "2016-03-27 09:00:00.64", 1),
    -        Row("2016-03-27 09:00:00.48", "2016-03-27 09:00:00.68", 1),
    -        Row("2016-03-27 09:00:00.52", "2016-03-27 09:00:00.72", 2),
    -        Row("2016-03-27 09:00:00.56", "2016-03-27 09:00:00.76", 2),
    -        Row("2016-03-27 09:00:00.6", "2016-03-27 09:00:00.8", 2),
    -        Row("2016-03-27 09:00:00.64", "2016-03-27 09:00:00.84", 1),
    -        Row("2016-03-27 09:00:00.68", "2016-03-27 09:00:00.88", 1))
    -    )
    +    // In SPARK-21871, we added code to check the actual bytecode size of generated methods.
    +    // If the size goes over `hugeMethodLimit`, Spark fails to compile the methods and the
    +    // execution also fails in test mode. So, we explicitly turn off whole-stage codegen here.
    +    // This guard can be removed once that issue is fixed.
    +    withSQLConf(SQLConf.WHOLESTAGE_CODEGEN_ENABLED.key -> "false") {
    --- End diff --
    
    The same here.
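    
    For context, a minimal sketch of the guard pattern shown in the diff above, assuming the
    standard `withSQLConf` helper from Spark's `SQLTestUtils` trait is in scope (the test body
    is elided):
    
    ```scala
    import org.apache.spark.sql.internal.SQLConf
    
    test("millisecond precision sliding windows") {
      // Disable whole-stage codegen for this test only, so the generated code does not
      // trip the `hugeMethodLimit` bytecode-size check introduced in SPARK-21871.
      withSQLConf(SQLConf.WHOLESTAGE_CODEGEN_ENABLED.key -> "false") {
        // ... original sliding-window assertions go here ...
      }
      // `withSQLConf` restores the previous config value when the block exits.
    }
    ```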

