Github user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/15231#discussion_r80425776
  
    --- Diff: R/pkg/R/utils.R ---
    @@ -698,6 +698,21 @@ isSparkRShell <- function() {
       grepl(".*shell\\.R$", Sys.getenv("R_PROFILE_USER"), perl = TRUE)
     }
     
    +captureJVMException <- function(e) {
    +  stacktrace <- as.character(e)
    +  if (any(grep("java.lang.IllegalArgumentException: ", stacktrace))) {
    --- End diff ---
    
    Thanks! @felixcheung I will address all the other comments above. For this 
one, however, I thought about it hard but it does not seem easy, because on the 
R side we cannot know up front whether a given data source is valid.
    
    I might be able to do this only for internal data sources or known 
Databricks data sources such as "redshift" or "xml", e.g. by creating a map of 
our internal data sources and then checking whether a path is given. However, I 
am not sure it is a good idea to maintain yet another list of data sources.
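
    For illustration only, the map-based check described above might be 
sketched roughly like this in R. None of these names (`internalDataSources`, 
`checkSourceHasPath`) exist in SparkR; this is a hypothetical sketch of the 
idea, not actual or proposed code:

    ```r
    # Hypothetical sketch: keep a small list of known internal data sources
    # and fail fast on the R side when no path is supplied, instead of
    # parsing the JVM exception afterwards. Illustrative names only.
    internalDataSources <- c("json", "parquet", "orc", "csv", "text")

    checkSourceHasPath <- function(source, path) {
      if (source %in% internalDataSources && (is.null(path) || path == "")) {
        stop(paste0("argument 'path' is required for data source \"",
                    source, "\""))
      }
      invisible(TRUE)
    }
    ```

    The downside, as noted above, is that this list would have to be kept in 
sync with the actual data sources, which is the maintenance concern.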


