Github user gatorsmile commented on the issue:

    https://github.com/apache/spark/pull/16672
  
    I am not sure whether we should follow Hive in this case. The path might be
wrong, or the user might not have permission to create such a directory. Thus,
it might be more user-friendly if users get the directory-creation error at the
time they change the location. cc @cloud-fan @yhuai @hvanhovell
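
    For example, a minimal sketch of the write-path scenario (the table name and
path are hypothetical; assumes a spark-shell where spark is the active session):

        spark.sql("CREATE TABLE t (i INT) USING parquet")
        // Point the table at a directory that does not exist yet. Whether Spark
        // should silently create the directory (following Hive) or fail fast
        // with a directory-creation error is the question raised above.
        spark.sql("ALTER TABLE t SET LOCATION '/tmp/nonexistent/dir'")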
    
    This PR focuses on the write path. What about the read path? How does Hive
behave when selecting from a table whose location/directory does not exist, and
what is the behavior of Spark SQL?
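
    For the read path, a corresponding sketch (assuming the same hypothetical
table t whose location directory has since been removed from the filesystem):

        // Query a table whose location directory is missing. The open question
        // is whether this errors out or returns an empty result, and whether
        // Hive and Spark SQL agree on that behavior.
        spark.sql("SELECT * FROM t").show()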

