Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7139#issuecomment-126750717
Ah I see - so this affects things like the YARN cluster mode where the
spark-submit script and the driver are run on different machines ? How does
this work out
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7584#issuecomment-126746061
LGTM. Merging this
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7742#issuecomment-126517480
Merging this
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7139#issuecomment-126531818
Thanks @brkyvz for the update. I had some minor style comments but I don't
quite follow why we had to move the initialization to SparkContext vs. having
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7764#issuecomment-126522020
Jenkins, retest this please
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/7139#discussion_r35937520
--- Diff: core/src/main/scala/org/apache/spark/api/r/RBackend.scala ---
@@ -28,6 +28,7 @@ import io.netty.channel.socket.SocketChannel
import
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7764#issuecomment-126524783
@falaki the documentation builds are failing with an error message
```
Error :
/home/jenkins/workspace/SparkPullRequestBuilder@2/R/pkg/man/nrow.Rd
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/7139#discussion_r35937603
--- Diff: core/src/main/scala/org/apache/spark/api/r/RUtils.scala ---
@@ -62,4 +64,7 @@ private[spark] object RUtils
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7139#issuecomment-126765707
I see - so the right thing to do is not to build the package on every node
that needs it (driver / executor) but to build it once on the spark-submit
node
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7139#issuecomment-126774354
Well we are unzipping the *source*, building a *binary* and then zipping
the *binary*, so it's not redundant work :). The script right now only zips the
`lib/SparkR
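For illustration, the build-once-then-zip flow described above (build a binary package from source, then zip only the built `lib/` contents so every executor can unzip rather than rebuild) could be sketched with `java.util.zip`. This is a rough sketch, not Spark's actual script; all names and paths are illustrative:

```scala
import java.io.{File, FileInputStream, FileOutputStream}
import java.util.zip.{ZipEntry, ZipOutputStream}

// Illustrative sketch (not the actual Spark script): zip the contents of a
// directory of *built* binary packages, e.g. lib/, so that executors can
// simply unzip them instead of rebuilding each package from source.
def zipDir(dir: File, out: File): Unit = {
  val zos = new ZipOutputStream(new FileOutputStream(out))
  def add(f: File, name: String): Unit = {
    if (f.isDirectory) {
      // Recurse; entry names are kept relative to the root directory.
      f.listFiles().foreach(c =>
        add(c, name + c.getName + (if (c.isDirectory) "/" else "")))
    } else {
      zos.putNextEntry(new ZipEntry(name))
      val in = new FileInputStream(f)
      val buf = new Array[Byte](8192)
      Iterator.continually(in.read(buf)).takeWhile(_ != -1)
        .foreach(n => zos.write(buf, 0, n))
      in.close()
      zos.closeEntry()
    }
  }
  dir.listFiles().foreach(c =>
    add(c, c.getName + (if (c.isDirectory) "/" else "")))
  zos.close()
}
```

Zipping only the installed package directory keeps the archive small, which matters given the multi-gigabyte zip problem reported later in this thread.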
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7139#issuecomment-126778965
I think we can have multiple R packages in one zip. If I remember correctly
we just unzip the files in every executor (or spark-submit does this) and we
include
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7139#issuecomment-126801147
Yep that sounds good to me. BTW the way R finds packages is using the
`.libPaths` contents. We explicitly set that to include `lib/` which is why it
can find `SparkR
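As a side note on the `.libPaths` mechanism mentioned above: R also folds the `R_LIBS` environment variable into its library search path, so JVM-side code launching R processes can prepend a directory there. A minimal sketch, assuming a hypothetical helper (the name and paths are illustrative, not Spark's actual code):

```scala
// Hypothetical helper: prepend a SparkR lib/ directory to the R_LIBS
// environment variable so that R's .libPaths() (and library(SparkR))
// can find packages installed there. Not Spark's actual implementation.
def rLibsEnv(sparkRLibDir: String, existing: Option[String]): String =
  existing.filter(_.nonEmpty) match {
    case Some(current) => s"$sparkRLibDir:$current"  // keep the user's paths
    case None          => sparkRLibDir
  }
```

A launcher would then export `R_LIBS=<result>` in the environment of the `Rscript` process it spawns.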
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/7764#discussion_r36016434
--- Diff: R/pkg/R/DataFrame.R ---
@@ -1231,6 +1305,22 @@ setMethod("unionAll",
dataFrame(unioned)
})
+#' @title Union
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7764#issuecomment-126812352
Thanks @falaki for the update. LGTM. I had a minor style fix that I'll
apply during the merge.
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7139#issuecomment-126815610
Jenkins, retest this please
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7139#issuecomment-126815591
@brkyvz I'll take another pass later today and try this on YARN etc. We
should then be able to get it merged hopefully sometime this weekend.
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7701#issuecomment-125565050
Jenkins, ok to test
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7701#issuecomment-125565367
Thanks @trestletech -- Could you also add a small unit test for this in
https://github.com/apache/spark/blob/15724fac569258d2a149507d8c767d0de0ae8306/R/pkg/inst/tests
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7701#issuecomment-125689494
LGTM. Thanks @trestletech -- Merging this
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7951#issuecomment-127845415
LGTM
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7139#issuecomment-127811415
BTW @brkyvz can you merge the R changes in the
https://github.com/databricks/sbt-spark-package as well ?
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7139#issuecomment-127810784
Yep - I'm out now, but will get back to my computer and merge
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7883#issuecomment-127134483
I don't think we need to do it on every run. If we need a new version of
lintr on Jenkins, then we can ask @JoshRosen or @shaneknapp to install a new
version
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7883#issuecomment-127132591
Also @yu-iskw I am not sure I understand why we need to install `lintr` in
the same library location as SparkR. That will mean that lintr will be
installed on every
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7139#issuecomment-127132992
@brkyvz I just tried this out on a YARN cluster and I actually ran into an
error trying to use the CSV reader package. The command I ran was
```
./bin/sparkR
GitHub user shivaram opened a pull request:
https://github.com/apache/spark/pull/7899
[SPARK-9562] Change reference to amplab/spark-ec2 from mesos/
cc @srowen @pwendell @nchammas
You can merge this pull request into a Git repository by running:
$ git pull https://github.com
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7139#issuecomment-127350331
I'm on it now. BTW is `sparkGLM` the best external package to try out in
YARN ? or is there a smaller demo package that I can use ?
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7139#issuecomment-127390781
@brkyvz I tried to run this with YARN and found that the code to create the
`zip` might have some problems. It ends up creating a zip file which is
multiple gigabytes
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/7764#discussion_r35903215
--- Diff: R/pkg/R/DataFrame.R ---
@@ -1231,6 +1296,24 @@ setMethod("unionAll",
dataFrame(unioned)
})
+setGeneric
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/7584#discussion_r35895264
--- Diff: R/pkg/R/DataFrame.R ---
@@ -1384,7 +1384,7 @@ setMethod("saveAsTable",
org.apache.spark.sql.parquet
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7795#issuecomment-126401982
Thanks @yu-iskw -- LGTM. Merging this
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7742#issuecomment-126517276
LGTM
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/7139#discussion_r35937631
--- Diff:
yarn/src/test/scala/org/apache/spark/deploy/yarn/YarnClusterSuite.scala ---
@@ -22,6 +22,9 @@ import java.net.URL
import java.util.Properties
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/7764#discussion_r35896170
--- Diff: R/pkg/R/DataFrame.R ---
@@ -534,6 +552,53 @@ setMethod("count",
callJMethod(x@sdf, "count")
})
+#' @rdname
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/7764#discussion_r35897285
--- Diff: R/pkg/R/DataFrame.R ---
@@ -1231,6 +1296,24 @@ setMethod("unionAll",
dataFrame(unioned)
})
+setGeneric
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/7742#discussion_r35897965
--- Diff: core/src/main/scala/org/apache/spark/api/r/RBackendHandler.scala
---
@@ -148,6 +151,9 @@ private[r] class RBackendHandler(server: RBackend
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/7764#discussion_r35896690
--- Diff: R/pkg/R/DataFrame.R ---
@@ -1231,6 +1296,24 @@ setMethod("unionAll",
dataFrame(unioned)
})
+setGeneric
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7750#issuecomment-126414664
LGTM. Merging this
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7647#issuecomment-126415405
Not sure I get it -- Does this mean the increased size will only work if we
use the `resizefs` in a script ?
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/8008#issuecomment-128844893
Jenkins, retest this please
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/8085#discussion_r36782862
--- Diff: docs/sparkr.md ---
@@ -210,6 +210,43 @@ head(df)
{% endhighlight %}
</div>
+### Model Formulae
+
+SparkR allows
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/8085#discussion_r36782153
--- Diff: docs/sparkr.md ---
@@ -210,6 +210,43 @@ head(df)
{% endhighlight %}
</div>
+### Model Formulae
--- End diff --
+1. I
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/8085#issuecomment-130012889
Thanks @ericl - I just left some inline comments.
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/8085#discussion_r36782425
--- Diff: docs/sparkr.md ---
@@ -210,6 +210,43 @@ head(df)
{% endhighlight %}
</div>
+### Model Formulae
+
--- End diff
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/7139#discussion_r36211625
--- Diff: core/src/main/scala/org/apache/spark/api/r/RUtils.scala ---
@@ -19,6 +19,8 @@ package org.apache.spark.api.r
import java.io.File
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7494#issuecomment-127663833
I see - so the problem here is on how to write a unit test that uses UTF-8
and works with `LC_ALL=C` ? One simple thing we might be able to do is to set
the locale
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7139#issuecomment-127664838
@brkyvz I think I figured out the problem with Jenkins -- Since Jenkins
uses SBT it doesn't build the SparkR package at the beginning along with the
rest
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7139#issuecomment-127677811
Thanks @brkyvz -- Changes LGTM. Can you check if @andrewor14 wants to take
another look at the SparkSubmit changes ?
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/6599#issuecomment-128448874
@JoshRosen @zsxwing Any thoughts on merging this for 1.5 ?
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7494#issuecomment-128447851
Can we still use the `iconv` solution with the `Sys.setlocale` in the
test case ? It doesn't seem right to use `rawToChar` when we are decoding UTF-8
strings
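The concern above is that bytes should be decoded as UTF-8 explicitly, regardless of the process locale (the `LC_ALL=C` case). The actual discussion is about R's `iconv` and `rawToChar`; as a language-neutral sketch of the same principle on the JVM side:

```scala
import java.nio.charset.StandardCharsets

// Decoding with an explicit charset is locale-independent: the result is the
// same whether the process runs under LC_ALL=C or a UTF-8 locale. Relying on
// the platform/locale default is what makes such tests locale-sensitive.
// The Korean string below is just an illustrative non-ASCII example.
val original = "안녕하세요"
val bytes: Array[Byte] = original.getBytes(StandardCharsets.UTF_8)
val decoded = new String(bytes, StandardCharsets.UTF_8) // explicit, safe
```

The same idea in R would be decoding with `iconv(x, from = "UTF-8", ...)` rather than treating the raw bytes as already being in the native encoding.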
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7139#issuecomment-127740630
Jenkins, retest this please
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/8008#issuecomment-128744494
LGTM. Thanks @vanzin for the fix. cc @brkyvz
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/8008#issuecomment-128744287
Jenkins, retest this please
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/8008#issuecomment-128787880
Sigh -- our tests are too flaky.
Jenkins, retest this please
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/8162#issuecomment-130769302
Is there any major reason for doing this ? I've still seen some clusters /
workloads (as of Spark 1.3) where nio works better than netty and I didn't get
a chance
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/8162#issuecomment-130978310
Yeah I'll try to produce a small test case for this next time I run into
this.
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7461#issuecomment-129506547
Thanks @kmadhugit , the test case sounds good to me. Could you try this on
a multi-node cluster ? We just set the locality preference for the scheduler
and I guess
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/8008#issuecomment-129223162
Jenkins, retest this please
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/8055#issuecomment-129226373
Jenkins, retest this please
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7494#issuecomment-129227462
@CHOIJAEHONG1 This diff is looking good. Can you update the PR with this ?
BTW why do we need to clear the encoding bit in `context.R` ? Does the
serialization
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/8008#issuecomment-129260689
Jenkins, retest this please
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/8055#issuecomment-129235074
@brkyvz Removed the test case now. I'm not sure adding a comment is really
relevant as the argument to the method is an `Option`. And you are right that
we can't quite
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/8008#issuecomment-129232005
@tdas @rxin I've seen the test `SPARK-6222: Do not clear received block data
too soon` fail a bunch of times (on Jenkins and on local runs) - Example stack
trace at [1
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/8008#issuecomment-129528087
Yeah I was wondering the same thing. In fact I think the tests have managed
to pass piece-wise (i.e. core sometimes, streaming sometimes, sql sometimes
etc
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/8008#issuecomment-129506725
Jenkins, retest this please
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/8194#discussion_r37092050
--- Diff: R/pkg/R/functions.R ---
@@ -67,6 +67,14 @@ createFunctions <- function() {
createFunctions()
+#' @rdname functions
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/8194#discussion_r37092120
--- Diff: R/pkg/R/generics.R ---
@@ -682,6 +682,10 @@ setGeneric("cbrt", function(x) {
standardGeneric("cbrt") })
#' @export
setGeneric("ceil", function
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/8194#issuecomment-131165945
Thanks @yu-iskw. I left some comments inline. One more thing we need to do
is to update our `NAMESPACE` file [1] with all the functions we have added so
far
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/8194#discussion_r37092216
--- Diff: R/pkg/R/generics.R ---
@@ -794,6 +806,10 @@ setGeneric("size", function(x) {
standardGeneric("size") })
#' @export
setGeneric("soundex
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/8220#issuecomment-131306428
Jenkins, retest this please
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7419#issuecomment-131412001
Jenkins, retest this please
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7419#issuecomment-131412095
Thanks @sun-rui -- this fix is simple and looks good. Couple of minor things
1. There is an empty file `nohup.out` added in the PR which needs to be
removed
2
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/8220#issuecomment-131412272
LGTM. I think we should merge this into `branch-1.5` as well.
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/8220#issuecomment-131412441
Jenkins, retest this please
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/8085#issuecomment-130124028
LGTM
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/8008#issuecomment-128884477
Sigh, I guess today is not a good day to get tests passed.
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/8008#issuecomment-129013022
Jenkins, retest this please
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7774#issuecomment-129021145
@zsxwing @rxin I just ran into a problem with the annotation thing that was
discussed earlier in this PR.
My master branch is clean and is at
https://github.com
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/8008#issuecomment-129058361
Jenkins, retest this please
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7599#issuecomment-129059640
@brkyvz I ran into trouble using `sparkR` with the CSV package from the
master branch and I think this PR might be related to the problem. If I run
`./bin/sparkR
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/8008#issuecomment-129080680
@JoshRosen @rxin This PR has failed Jenkins 6 (!) times with unrelated
failures. Are there any known issues or flaky tests ? I'd like to get this fix
into 1.5
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7774#issuecomment-129081760
I just realized I missed pasting the first two lines of the error
```
error: error while loading SQLListener, Missing dependency 'bad symbolic
reference
GitHub user shivaram opened a pull request:
https://github.com/apache/spark/pull/8055
[CORE] [SPARK-9760] Use Option instead of Some for Ivy repos
This was introduced in #7599
cc @rxin @brkyvz
You can merge this pull request into a Git repository by running:
$ git
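For context on the PR title: in Scala, `Some(x)` wraps `x` even when it is `null`, while `Option(x)` normalizes `null` to `None`. A minimal illustration of the difference (the `repos` name is illustrative, not the Spark code itself):

```scala
// A possibly-null setting, e.g. an Ivy repositories string that the caller
// may not have provided (the name here is illustrative).
val repos: String = null

// Some(null) is "defined": pattern matches and .map see a null payload,
// which can NPE downstream.
val unsafe: Option[String] = Some(repos)

// Option(null) collapses to None, which composes safely with map/getOrElse.
val safe: Option[String] = Option(repos)
```

This is why `Option(...)` is generally preferred over `Some(...)` when wrapping a value that may be absent.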
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/8055#issuecomment-129088465
Also I am not sure how to add a test case for this as our existing unit
tests use a custom Ivy repository which is always specified in
`--repositories`. The bug only
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/8055#issuecomment-129100227
Added a test case but I'm not sure how useful it is as a lot of things can
throw a RuntimeException ? @rxin let me know if it's worth having it vs. a few
seconds added
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/8008#issuecomment-129103841
I give up. I'm going to manually patch this on a clean branch on my laptop
and run `dev/run-tests` and report back if it passes
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/7912#discussion_r36206750
--- Diff: docs/README.md ---
@@ -8,6 +8,16 @@ Read on to learn more about viewing documentation in plain
text (i.e., markdown)
documentation yourself
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7883#issuecomment-128426622
Thanks @yu-iskw for the update. The code changes mostly look good to me.
Lets wait for @JoshRosen to take another look at this as there have been some
flaky builds
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7584#issuecomment-123773927
Jenkins, retest this please
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7584#issuecomment-123775036
I think most of the changes are fine. The `!` ones look a little awkward
and I want to check the `prev@func` issue to see why lint-r is complaining
about
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7584#issuecomment-123782306
@shaneknapp Can you manually delete the jar ? The script should
auto-download a good version then AFAIK
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7584#issuecomment-12319
@shaneknapp @JoshRosen Looks like there is a problem with Jenkins. The
builds seem to be failing with
```
Launching sbt from build/sbt-launch-0.13.7.jar
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7318#issuecomment-123818662
@mengxr I guess we will have a new PR for the documentation update, so this
PR LGTM. I will merge this unless you have anything else to add
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/7584#discussion_r35237914
--- Diff: R/pkg/R/DataFrame.R ---
@@ -1384,7 +1384,7 @@ setMethod("saveAsTable",
org.apache.spark.sql.parquet
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7596#issuecomment-123788290
I am not sure taking in the credentials in plain text is the best solution
-- Can we get this from some other environment variable and / or boto's config
files etc
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/7584#discussion_r35237834
--- Diff: R/pkg/R/RDD.R ---
@@ -85,7 +85,7 @@ setMethod("initialize", "PipelinedRDD", function(.Object,
prev, func, jrdd_val)
isPipelinable
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/7584#discussion_r35437342
--- Diff: R/pkg/R/DataFrame.R ---
@@ -1384,7 +1384,7 @@ setMethod("saveAsTable",
org.apache.spark.sql.parquet
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7640#issuecomment-124569224
LGTM. I had one minor comment -- I'll fix this up during merge
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/7640#discussion_r35437569
--- Diff: R/pkg/R/sparkR.R ---
@@ -104,16 +104,14 @@ sparkR.init <- function(
return(get(".sparkRjsc", envir = .sparkREnv
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/7419#issuecomment-124569785
@sun-rui Any update on this ?