Github user yhuai commented on the pull request:

    https://github.com/apache/spark/pull/6799#issuecomment-111760720
  
    @NathanHowell Thank you for working on it! I am wondering if we can keep the new behavior and introduce a flag to let users switch back to the old behavior. Here are my thoughts. The old behavior ignores the existence of empty JSON objects, so once we get the DataFrame we have lost a small piece of information about the dataset. Also, since 1.4 has already been released, simply reverting to the old behavior would mean the change goes into 1.5 and we end up changing the behavior yet again. So I feel it may be better to keep the new behavior (which does not discard information) and introduce a flag that lets users switch back to Spark 1.3's behavior. What do you think?
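    To make the proposal concrete, here is a minimal sketch of how such a flag might be exposed to users. The option name `ignoreEmptyJsonObjects` and its exact semantics are illustrative assumptions only; the real flag name and behavior would be settled in this PR. The APIs used (`SQLContext`, `DataFrameReader.option`, `read.json` on an `RDD[String]`) are the ones available in Spark 1.4.

    ```scala
    // Sketch only: "ignoreEmptyJsonObjects" is a hypothetical option name,
    // not an existing Spark configuration.
    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    object EmptyJsonObjectFlagSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("json-flag-sketch").setMaster("local[*]"))
        val sqlContext = new SQLContext(sc)

        // Sample input: one record carries an empty JSON object.
        val jsonLines = sc.parallelize(Seq(
          """{"a": 1, "b": {}}""",
          """{"a": 2, "b": {"c": 3}}"""))

        // Default (new) behavior: empty JSON objects are not discarded,
        // so that information is retained in the resulting DataFrame.
        val keepEmpty = sqlContext.read.json(jsonLines)
        keepEmpty.printSchema()

        // Hypothetical flag restoring the Spark 1.3 behavior, which ignored
        // empty JSON objects (illustrative name only).
        val legacy = sqlContext.read
          .option("ignoreEmptyJsonObjects", "true")
          .json(jsonLines)
        legacy.printSchema()

        sc.stop()
      }
    }
    ```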


