Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/17945
Ah, that is a bummer. Let's hold off on that and use a new JIRA to discuss
it. Rest of the changes LGTM pending Jenkins, AppVeyor.
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/17945
Can you also retest with devtools after the fix?
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/17945
I think it's a lowercase `t`.
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/17945
Actually, one more thing to do is to change the vignettes to use 1 core as
well, since they get rebuilt / checked during the CRAN check.
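
Something like this in the vignette setup chunk, for illustration (the
`local[1]` value is an assumption here, not the merged diff):

    library(SparkR)
    # Run on a single local core so the vignette rebuild stays within
    # CRAN's check-time resource limits
    sparkR.session(master = "local[1]")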
---
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/17945#discussion_r116018439
--- Diff: R/pkg/inst/tests/testthat/jarTest.R ---
@@ -16,7 +16,7 @@
#
library(SparkR)
-sc <- sparkR.session()
+sc <- sparkR.s
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/17945#discussion_r115907813
--- Diff: R/pkg/inst/tests/testthat/jarTest.R ---
@@ -16,7 +16,7 @@
#
library(SparkR)
-sc <- sparkR.session()
+sc <- sparkR.s
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/17945#discussion_r115907978
--- Diff: R/pkg/inst/tests/testthat/test_Serde.R ---
@@ -17,7 +17,7 @@
context("SerDe functionality")
-sparkSession <- s
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/17945#discussion_r115907770
--- Diff: R/pkg/DESCRIPTION ---
@@ -13,6 +13,7 @@ Authors@R: c(person("Shivaram", "Venkataraman", role = c("aut", "
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/17903
LGTM. Thanks @falaki -- BTW, is this a problem only on master, or should we
also backport this?
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/17816
LGTM
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/17814
cc @tdas @marmbrus
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/17815
LGTM. Minor comment
---
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/17815#discussion_r114099455
--- Diff: R/pkg/inst/tests/testthat/test_streaming.R ---
@@ -61,6 +61,7 @@ test_that("read.stream, write.stream, awaitTermination, stop
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/17640
+1 on what @felixcheung said -- it'll be good to have more tests in
test_Serde.R. Other than that, the change looks fine.
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/17590
I think @HyukjinKwon's interpretation is good!
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/17516
Got it. LGTM. Thanks for the explanation. I'm fine with merging this to master!
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/17516
The test passes even when we run `R CMD check --as-cran` from a
different directory? I thought the fix in `branch-2.1` was to get around that
(my understanding could be wrong).
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/17516
Don't we also need the skip-if-CRAN statement?
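
For reference, testthat's `skip_on_cran()` is one way to express that; it
skips the test unless the `NOT_CRAN` environment variable is set to "true".
A minimal sketch with a placeholder body:

    library(testthat)
    test_that("heavy Spark test", {
      skip_on_cran()          # skipped under R CMD check --as-cran on CRAN
      expect_equal(1 + 1, 2)  # placeholder for the real assertions
    })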
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/17515
LGTM. I think this fix is fine for the 2.1 branch (though it's a shame, as this
was one of the main changes we were trying to get into 2.1?).
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/17516
So does the current patch pass `R CMD check` when run on the master branch?
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/17516
Looking at this now
---
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/17483#discussion_r109257813
--- Diff: R/pkg/inst/tests/testthat/test_sparkSQL.R ---
@@ -645,16 +645,17 @@ test_that("test tableNames and tables", {
df <- read.
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/17483#discussion_r109258117
--- Diff: R/pkg/inst/tests/testthat/test_sparkSQL.R ---
@@ -2977,6 +2981,51 @@ test_that("Collect on DataFrame when NAs exists at the top of a time
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/17483#discussion_r109258347
--- Diff: R/pkg/R/catalog.R ---
@@ -0,0 +1,478 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/17483
cc @gatorsmile for any SQL-specific input
@felixcheung I will take a look at this later today. Meanwhile, in the PR
description, can you note down the new functions being added in
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16596
Ah, we are only changing this on Windows - I see. This is a lower-risk
change then. LGTM. Merging this to master, branch-2.1
cc @HyukjinKwon
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16330
Had a minor comment on the test case. LGTM otherwise and waiting for Jenkins
---
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16330#discussion_r106789593
--- Diff: R/pkg/inst/tests/testthat/test_sparkSQL.R ---
@@ -2909,6 +2910,30 @@ test_that("Collect on DataFrame when NAs exists at the top of a time
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16596
OK, sounds good. Since this touches spark-submit scripts that are shared
across all languages, it would be good to loop in some other reviewers as
well. I can do that once we have the new diff.
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16330
Great. I'll take a final look and wait for Jenkins
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16330
Sounds good to me.
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16596
I think echoing an error message earlier than 1 minute would be useful.
Which files would we need to change for that?
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16596
Ok. That sounds like a bigger change then. I guess we could leave this out
of branch-2.1 and do this for 2.2
---
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16330#discussion_r105548957
--- Diff: R/pkg/inst/tests/testthat/test_sparkSQL.R ---
@@ -2897,6 +2898,27 @@ test_that("Collect on DataFrame when NAs exists at the top of a time
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16330#discussion_r105545530
--- Diff: R/pkg/inst/tests/testthat/test_sparkSQL.R ---
@@ -2897,6 +2898,27 @@ test_that("Collect on DataFrame when NAs exists at the top of a time
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16330#discussion_r105545521
--- Diff: core/src/main/scala/org/apache/spark/api/r/RRDD.scala ---
@@ -127,6 +127,13 @@ private[r] object RRDD {
sparkConf.setExecutorEnv
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16330#discussion_r105545556
--- Diff: R/pkg/inst/tests/testthat/test_sparkSQL.R ---
@@ -2897,6 +2898,27 @@ test_that("Collect on DataFrame when NAs exists at the top of a time
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16596
@felixcheung Any update on this? Looking through the list of PRs, I thought
this might be a good one to add to a CRAN submission.
---
Github user shivaram closed the pull request at:
https://github.com/apache/spark/pull/16290
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16290
@felixcheung Yeah I think that sounds good. I can close this PR for now and
we can revisit this if we have an issue in the future. Also I guess you will
update #16330 for the derby.log change
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16739
Agree with @jkbradley on this one. We should avoid adding functions that
are completely new in a patch release, given that the timing between minor
versions and patch releases isn't that high
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16290
@gatorsmile @cloud-fan @felixcheung I looked at the SharedState code more
closely and it looks like the only time the warehousePath can be set is when
the initialization of shared state happens
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16290#discussion_r103276097
--- Diff: R/pkg/R/sparkR.R ---
@@ -376,6 +377,12 @@ sparkR.session <- function(
overrideEnvs(sparkConfigMap, param
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16330
It's a bit tricky to ask users for permission during installation (actually,
I'm not sure how we can create such an option?) -- I think a viable option
could be to add a `logWarning` that shows
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/17018
Thanks - will keep an eye out to see if the timeout errors are gone.
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/17018
Yeah, asking them for 1.5 hours seems like a good first step. We can also see
what fraction of the time goes to the build vs. the tests. I guess we can't
make builds faster without some effort, but we
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/17018
Hmm, I think this specific one timed out, so it might not be worth
investigating:
https://ci.appveyor.com/project/ApacheSoftwareFoundation/spark/build/824-master/messages
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/17018
Just to clarify, what I meant is that a similar error due to
Hadoop / SQL issues was fixed by @HyukjinKwon in
https://github.com/apache/spark/pull/16927 - I'll see if this happens again
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16623
Thanks for cc'ing me on this - I think @jkbradley has a good point that we
should be a bit more explicit in discussing when / why we backport changes in
SparkR. While we have not declared
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16982
Wow - this is interesting! cc @rxin @marmbrus
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16512
At least from the R code perspective, this is source-compatible with
existing 2.1, as adding an optional parameter at the end should not break any
existing code. Also I am not sure I would
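
A tiny, hypothetical R example of why a trailing optional parameter keeps
old call sites working (not code from this PR):

    f <- function(x, extra = NULL) x  # new optional argument with a default
    f(1)                              # existing calls work unchanged
    f(1, extra = "y")                 # the new behavior is strictly opt-in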
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16720
LGTM. I patched this again on top of 2.1.0 and `R CMD check --as-cran`
passes now. Merging this to master and branch-2.1
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16927
Thanks @HyukjinKwon for investigating! LGTM
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16689
OK - I think this sounds good then! @felixcheung Let me know if you want
me to take a look at the code as well, or, if not, feel free to merge when you
think it's ready.
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16689
@felixcheung @titicaca Just to make sure I understand: collect on timestamp
was getting `c("POSIXct", "POSIXt")` even before this change?
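
For context, base R timestamps already carry both classes, so that is the
expected result for a collected timestamp column:

    class(Sys.time())
    # [1] "POSIXct" "POSIXt"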
---
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16720#discussion_r99975077
--- Diff: R/pkg/vignettes/sparkr-vignettes.Rmd ---
@@ -27,6 +27,9 @@ library(SparkR)
We use default settings in which it runs in local mode. It
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16720#discussion_r99975068
--- Diff: R/pkg/inst/tests/testthat/test_utils.R ---
@@ -17,6 +17,9 @@
context("functions in utils.R")
+# Ensure Spark is
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16720#discussion_r99456349
--- Diff: R/pkg/inst/tests/testthat/test_utils.R ---
@@ -17,6 +17,9 @@
context("functions in utils.R")
+# Ensure Spark is
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16739
Thanks @felixcheung - I think these changes look good.
cc @gatorsmile / @holdenk for doc changes in SQL, Python
---
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16720#discussion_r98838566
--- Diff: R/pkg/inst/tests/testthat/test_utils.R ---
@@ -17,6 +17,9 @@
context("functions in utils.R")
+# Ensure Spark is
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16720#discussion_r98837066
--- Diff: R/pkg/vignettes/sparkr-vignettes.Rmd ---
@@ -27,6 +27,9 @@ library(SparkR)
We use default settings in which it runs in local mode. It
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16720#discussion_r98836638
--- Diff: R/pkg/inst/tests/testthat/test_utils.R ---
@@ -17,6 +17,9 @@
context("functions in utils.R")
+# Ensure Spark is
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16720#discussion_r98833756
--- Diff: R/pkg/vignettes/sparkr-vignettes.Rmd ---
@@ -27,6 +27,9 @@ library(SparkR)
We use default settings in which it runs in local mode. It
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16720#discussion_r98833957
--- Diff: R/pkg/inst/tests/testthat/test_utils.R ---
@@ -17,6 +17,9 @@
context("functions in utils.R")
+# Ensure Spark is
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16739#discussion_r98605173
--- Diff: R/pkg/R/DataFrame.R ---
@@ -680,14 +680,45 @@ setMethod("storageLevel",
storageLevelToString(callJMethod(x@sdf, "
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16739#discussion_r98598572
--- Diff: R/pkg/R/DataFrame.R ---
@@ -680,14 +680,45 @@ setMethod("storageLevel",
storageLevelToString(callJMethod(x@sdf, "
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16720
I am not sure tests are ever meant to run on a cluster (see the number of
uses of LocalSparkContext in core/src/test/scala) -- the main reason I don't
want to introduce the 'first test' a
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16670
LGTM. Merging this to master, branch-2.1
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16720
Hmm - another fix could be that in the test cases, whenever we create a
`spark.session`, we always pass in `master=local`?
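
A sketch of what that could look like at the top of each test file (the
exact arguments are illustrative, not the final change):

    # Force local mode so R CMD check never depends on an external cluster
    sparkSession <- sparkR.session(master = "local[1]", enableHiveSupport = FALSE)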
---
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16670#discussion_r98267056
--- Diff: R/pkg/R/utils.R ---
@@ -756,12 +756,12 @@ varargsToJProperties <- function(...) {
props
}
-launchScript <- function(
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16670#discussion_r98264755
--- Diff: R/pkg/inst/tests/testthat/test_Windows.R ---
@@ -20,7 +20,7 @@ test_that("sparkJars tag in SparkContext", {
if (.Platfo
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16708
Isn't the dataset immutable? I.e., the optimizer is called once when the RDD
is materialized, and the RDD doesn't change after that?
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16708
@rxin do you think this will be confusing, as the results might change over
time?
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16330
@yhuai Could you take another look at this?
---
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16668#discussion_r98093530
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3406,3 +3406,28 @@ setMethod("randomSplit",
}
sapply(sdfs,
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16670#discussion_r98092733
--- Diff: R/pkg/inst/tests/testthat/test_Windows.R ---
@@ -20,7 +20,7 @@ test_that("sparkJars tag in SparkContext", {
if (.Platfo
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16706#discussion_r98072101
--- Diff: core/src/main/scala/org/apache/spark/rpc/RpcEndpointAddress.scala ---
@@ -25,10 +27,11 @@ import org.apache.spark.SparkException
* The
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16668
@felixcheung Why don't we do something simpler, where we call the Scala
function from the R side? I.e., get a handle to the Scala DF, call `.rdd` on
it to get a handle to the Scala RDD, etc. That seems
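
A rough sketch of that approach via SparkR's internal JVM bridge (the exact
Scala method chain is an assumption):

    # df is a SparkDataFrame; df@sdf holds the JVM-side Dataset reference
    scalaRDD <- callJMethod(df@sdf, "rdd")  # Dataset.rdd
    numParts <- callJMethod(scalaRDD, "getNumPartitions")  # ask the Scala RDD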
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16689
Jenkins, ok to test
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16689
Jenkins, retest this please
---
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16670#discussion_r97229573
--- Diff: R/pkg/inst/tests/testthat/test_Windows.R ---
@@ -20,7 +20,7 @@ test_that("sparkJars tag in SparkContext", {
if (.Platfo
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16670#discussion_r97212748
--- Diff: R/pkg/inst/tests/testthat/test_Windows.R ---
@@ -20,7 +20,7 @@ test_that("sparkJars tag in SparkContext", {
if (.Platfo
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16668#discussion_r97209406
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3406,3 +3406,28 @@ setMethod("randomSplit",
}
sapply(sdfs,
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16589
I think it's fine to leave it as-is for now - when we next add a feature or
update the file, we can do some refactoring.
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16609
If this is just for the storage tab, something like `storageName` could be
enough? As @felixcheung said, R users might not know what an RDD is, so I'd
avoid introducing that in the name.
---
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16589#discussion_r96567761
--- Diff: R/pkg/R/install.R ---
@@ -201,14 +221,20 @@ directDownloadTar <- function(mirrorUrl, version, hadoopVersion, packageName, pa
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16589#discussion_r96289817
--- Diff: R/pkg/R/install.R ---
@@ -54,7 +54,7 @@
#' }
#' @param overwrite If \code{TRUE}, download and overwrite th
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16609
Yeah, `name` vs. `names` could potentially be confusing -- do we have any
alternate suggestions? As this isn't a very commonly used construct, I think
it might be fine to have a different name.
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16586
I have no information on this either. We could file an INFRA ticket if we
wanted the information.
---
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16249#discussion_r96300148
--- Diff: dev/make-distribution.sh ---
@@ -232,14 +232,17 @@ if [ "$MAKE_R" == "true" ]; then
R_PACKAGE_VERSION=`grep Versi
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16249
LGTM
---
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16249#discussion_r96299164
--- Diff: dev/make-distribution.sh ---
@@ -232,14 +232,17 @@ if [ "$MAKE_R" == "true" ]; then
R_PACKAGE_VERSION=`grep Versi
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16596
Hmm, thinking more about it, isn't it better to have a way to handle
spark-submit not starting, or to capture its error message? I.e., lack of
Java could be one of a few reasons why spark-submit
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16596
`spark-submit` eventually finds java from
https://github.com/apache/spark/blob/a115a54399cd4bedb1a5086943a88af6339fbe85/bin/spark-class#L26
and uses the bash builtin `command` - In fact it looks
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16249#discussion_r96281272
--- Diff: R/install-source-package.sh ---
@@ -0,0 +1,51 @@
+#!/bin/bash
+
+#
+# Licensed to the Apache Software Foundation (ASF) under one
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16589#discussion_r96278657
--- Diff: R/pkg/R/install.R ---
@@ -201,14 +221,20 @@ directDownloadTar <- function(mirrorUrl, version, hadoopVersion, packageName, pa
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16589#discussion_r96277861
--- Diff: R/pkg/R/install.R ---
@@ -54,7 +54,7 @@
#' }
#' @param overwrite If \code{TRUE}, download and overwrite th
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16596
I think having a check for Java is a good idea, but I'm not sure if we
should do this inside the `install.spark` function? Why not do this before we
call `launchBackend` at
https://githu
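
One possible shape for such a pre-launch check on the R side (a sketch; the
helper name and message are made up, `Sys.which` is base R):

    checkJavaInstalled <- function() {  # hypothetical helper
      javaHome <- Sys.getenv("JAVA_HOME")
      javaBin <- if (nzchar(javaHome)) {
        file.path(javaHome, "bin", "java")
      } else {
        Sys.which("java")  # "" when java is not on the PATH
      }
      if (!nzchar(javaBin) || !file.exists(javaBin)) {
        stop("Java is required to start SparkR; install Java or set JAVA_HOME")
      }
    }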
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16590
Ok - we can do that in a future PR then. Merging this to master, branch-2.1
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16590
Thanks @felixcheung - LGTM. Just curious: is there any way to test / verify
this (short of manual testing)?
---