Github user rtreffer commented on a diff in the pull request:

    https://github.com/apache/spark/pull/7455#discussion_r34872908
  
    --- Diff: sql/core/src/test/scala/org/apache/spark/sql/parquet/ParquetIOSuite.scala ---
    @@ -107,7 +107,7 @@ class ParquetIOSuiteBase extends QueryTest with ParquetTest {
             // Parquet doesn't allow column names with spaces, have to add an alias here
             .select($"_1" cast decimal as "dec")
     
    -    for ((precision, scale) <- Seq((5, 2), (1, 0), (1, 1), (18, 10), (18, 17))) {
    +    for ((precision, scale) <- Seq((5, 2), (1, 0), (1, 1), (18, 10), (18, 17), (19, 0), (38, 37))) {
    --- End diff --
    
    I would prefer the way you currently wrote it.
    I don't see a point in keeping a "store it in a way that an older version can read" flag. You would always try out a new version first and only then use it for real storage. And reading files written by older Spark versions will always be possible.
    
    PS: I solved the test issue. It looks like the Spark sbt build somehow managed to use a local 2.9.6 scalac 0.o
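
    For readers skimming the thread, here is a minimal, self-contained sketch of the kind of decimal round-trip the extended (precision, scale) list exercises, including the (19, 0) and (38, 37) pairs that no longer fit in a 64-bit long. This is not the suite's actual code: it uses the modern SparkSession API rather than the 1.x test harness, and the object name, temp-path handling, and small fractional test values are assumptions made for illustration.

        import org.apache.spark.sql.SparkSession
        import org.apache.spark.sql.functions.col
        import org.apache.spark.sql.types.DecimalType

        object DecimalParquetRoundTrip {
          def main(args: Array[String]): Unit = {
            // Local session just for this sketch; appName/master are arbitrary.
            val spark = SparkSession.builder()
              .appName("decimal-parquet-roundtrip")
              .master("local[*]")
              .getOrCreate()

            // The same (precision, scale) pairs as the extended test above.
            val cases = Seq((5, 2), (1, 0), (1, 1), (18, 10), (18, 17), (19, 0), (38, 37))

            for ((precision, scale) <- cases) {
              val path = java.nio.file.Files
                .createTempDirectory("decimal-roundtrip")
                .resolve(s"p${precision}s$scale")
                .toString

              // Small fractional values so every (precision, scale) pair can hold them.
              val df = spark.range(1, 10)
                .select((col("id") / 100).cast(DecimalType(precision, scale)).as("dec"))

              // Write to Parquet and read back; the values should survive unchanged.
              df.write.parquet(path)
              val readBack = spark.read.parquet(path)

              assert(readBack.collect().toSet == df.collect().toSet,
                s"round-trip mismatch for precision=$precision, scale=$scale")
            }

            spark.stop()
          }
        }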

