Github user mpjlu commented on a diff in the pull request:
https://github.com/apache/spark/pull/14597#discussion_r75289389
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/feature/ChiSqSelector.scala ---
@@ -189,11 +228,35 @@ class ChiSqSelector @Since("1.3.0") (
*/
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14666#discussion_r75289101
--- Diff: R/pkg/R/utils.R ---
@@ -689,3 +689,33 @@ getSparkContext <- function() {
sc <- get(".sparkRjsc", envir = .sparkREnv)
sc
}
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/14700
Although I don't know aarch64, a little desk research suggests it can
support unaligned access. OK.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
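The comment above concerns whether AArch64 tolerates unaligned memory access. As a portable illustration (an assumption of this note, not Spark's actual JVM-side check), an "unaligned" read is simply a multi-byte value fetched at an offset that is not a multiple of its size; Python's `struct` module expresses the same idea:

```python
import struct

# Read a 4-byte little-endian integer at offset 1 inside a buffer -- an
# unaligned access. Hardware such as x86-64 and AArch64 handles this
# directly; struct.unpack_from makes it portable regardless of platform.
buf = bytes([0x00, 0x78, 0x56, 0x34, 0x12])  # payload starts at odd offset 1
(value,) = struct.unpack_from("<I", buf, 1)
print(hex(value))  # 0x12345678
```

On architectures without hardware support, a runtime would instead have to fall back to byte-at-a-time reads, which is the performance question behind allow-listing an architecture.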
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/14700
Jenkins test this please
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/14597
I think this will require a little update to the Python API to match. Not
sure about SparkR
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14597#discussion_r75288190
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/feature/ChiSqSelector.scala ---
@@ -171,14 +177,47 @@ object ChiSqSelectorModel extends
Loader[ChiSqSel
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14597#discussion_r75287957
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/feature/ChiSqSelector.scala ---
@@ -189,11 +228,35 @@ class ChiSqSelector @Since("1.3.0") (
*/
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14687
Github user mpjlu commented on a diff in the pull request:
https://github.com/apache/spark/pull/14597#discussion_r75286558
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/feature/ChiSqSelector.scala ---
@@ -54,6 +54,29 @@ private[feature] trait ChiSqSelectorParams extends Params
Github user mpjlu commented on a diff in the pull request:
https://github.com/apache/spark/pull/14597#discussion_r75286500
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/feature/ChiSqSelector.scala ---
@@ -171,14 +177,47 @@ object ChiSqSelectorModel extends
Loader[ChiSqSele
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14613#discussion_r75286379
--- Diff: R/pkg/R/DataFrame.R ---
@@ -392,7 +392,11 @@ setMethod("coltypes",
}
if (is.null(type)) {
-
Github user mpjlu commented on a diff in the pull request:
https://github.com/apache/spark/pull/14597#discussion_r75286355
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/feature/ChiSqSelector.scala ---
@@ -189,11 +228,35 @@ class ChiSqSelector @Since("1.3.0") (
*/
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/14687
LGTM. Merging to master. Thanks!
Github user mpjlu commented on a diff in the pull request:
https://github.com/apache/spark/pull/14597#discussion_r75286399
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/feature/ChiSqSelector.scala ---
@@ -189,11 +228,35 @@ class ChiSqSelector @Since("1.3.0") (
*/
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14613#discussion_r75286274
--- Diff: R/pkg/R/DataFrame.R ---
@@ -392,7 +392,11 @@ setMethod("coltypes",
}
if (is.null(type)) {
-
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14597#discussion_r75286150
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/feature/ChiSqSelector.scala ---
@@ -189,11 +228,35 @@ class ChiSqSelector @Since("1.3.0") (
*/
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14597#discussion_r75285942
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/feature/ChiSqSelector.scala ---
@@ -189,11 +228,35 @@ class ChiSqSelector @Since("1.3.0") (
*/
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14613#discussion_r75285945
--- Diff: R/pkg/R/DataFrame.R ---
@@ -392,7 +392,11 @@ setMethod("coltypes",
}
if (is.null(type)) {
-
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14597#discussion_r75285867
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/feature/ChiSqSelector.scala ---
@@ -171,14 +177,47 @@ object ChiSqSelectorModel extends
Loader[ChiSqSel
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/14639
hmm, which is to my point
[here](https://github.com/apache/spark/pull/14639#discussion_r75117822) - why
isn't it working in yarn-cluster mode? is SPARK_HOME not set?
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14597#discussion_r75285537
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/feature/ChiSqSelector.scala ---
@@ -91,8 +132,38 @@ final class ChiSqSelector @Since("1.6.0")
(@Since("1.6
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14597#discussion_r75285419
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/feature/ChiSqSelector.scala ---
@@ -54,6 +54,29 @@ private[feature] trait ChiSqSelectorParams extends Param
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/14558
Having two `... description` in the API doc can be confusing. It's hard to
tell which is for the generic and which is for the function. It would be more
confusing if their descriptions are somew
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14597#discussion_r75285293
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/feature/ChiSqSelector.scala ---
@@ -67,9 +90,27 @@ final class ChiSqSelector @Since("1.6.0")
(@Since("1.6.
Github user KevinZwx commented on the issue:
https://github.com/apache/spark/pull/9097
This issue was marked as fixed in spark 2.0.0, but
"spark.sql.mapper.splitCombineSize" doesn't show up in the list of the SQL
configuration when I run command "spark.sql("SET -v").show(numRows = 200
Github user junyangq commented on the issue:
https://github.com/apache/spark/pull/14558
I am not sure if it sounds like a reasonable temporary solution that we just
insert the `...` into the doc of some function definition, even though that
`...` may not really exist in that function. Tha
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14700
Can one of the admins verify this patch?
GitHub user yimuxi opened a pull request:
https://github.com/apache/spark/pull/14700
[SPARK-17127]Make unaligned access in unsafe available for AArch64
## What changes were proposed in this pull request?
Starting from Spark 2.0.0, when MemoryMode.OFF_HEAP is set,
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/14551
@nicklavers I agree it's not great to assume that the current output is
correct, though I strongly suspect it is. We'd ideally do some analysis to
understand what the expected range of outcomes are
Github user junyangq commented on the issue:
https://github.com/apache/spark/pull/14558
I realized later that by doing so, there will be at least two `...` in the
doc. However, they have slightly different meanings.
- generic function: it means "additional parameters"
- actual
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/14676
@petermaxlee this looks pretty good. I left two final comments.
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/14676#discussion_r75277151
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveInlineTables.scala
---
@@ -0,0 +1,105 @@
+/*
+ * Licensed to
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/14676#discussion_r75276635
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveInlineTables.scala
---
@@ -0,0 +1,109 @@
+/*
+ * Licensed to
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14698
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/63979/
Test PASSed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14698
Merged build finished. Test PASSed.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14698
**[Test build #63979 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63979/consoleFull)**
for PR 14698 at commit
[`4fc1ec5`](https://github.com/apache/spark/commit/
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14666#discussion_r75275020
--- Diff: R/pkg/R/utils.R ---
@@ -689,3 +689,33 @@ getSparkContext <- function() {
sc <- get(".sparkRjsc", envir = .sparkREnv)
sc
}
+
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/14676#discussion_r75274360
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveInlineTables.scala
---
@@ -0,0 +1,105 @@
+/*
+ * Licensed to
Github user junyangq commented on the issue:
https://github.com/apache/spark/pull/14384
@mengxr If we only want `rank`, `userFactors`, `itemFactors`, `userCol` and
`itemCol`, we don't have to save metadata, but if we want any more, it seems
that they are not saved in the output of the
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14699
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/63981/
Test FAILed.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14699
**[Test build #63981 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63981/consoleFull)**
for PR 14699 at commit
[`5540366`](https://github.com/apache/spark/commit/
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14699
Merged build finished. Test FAILed.
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14384#discussion_r75273184
--- Diff: R/pkg/inst/tests/testthat/test_mllib.R ---
@@ -454,4 +454,61 @@ test_that("spark.survreg", {
}
})
+test_that("spark.als", {
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14384#discussion_r75272624
--- Diff: mllib/src/main/scala/org/apache/spark/ml/r/ALSWrapper.scala ---
@@ -0,0 +1,124 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF)
Github user yanboliang commented on the issue:
https://github.com/apache/spark/pull/12790
@BryanCutler I found when fitting a Scala Pipeline with no stages,
```
val pipeline = new Pipeline()
val model = pipeline.fit(df)
```
it throws an exception:
```
Failed to
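The empty-stages case reported above can be sketched without Spark at all. The following is a hedged analogy (a toy `Pipeline` class invented for this note, not Spark's implementation): fitting with no stages should degenerate to an identity model rather than raising.

```python
# Toy pipeline: fit() trains each stage in order, feeding the transformed
# data forward; with zero stages the loop is a no-op and the returned
# model's transform() is the identity.
class Pipeline:
    def __init__(self, stages=None):
        self.stages = stages or []

    def fit(self, df):
        fitted = []
        for stage in self.stages:
            model = stage.fit(df)
            df = model.transform(df)
            fitted.append(model)
        return PipelineModel(fitted)

class PipelineModel:
    def __init__(self, stages):
        self.stages = stages

    def transform(self, df):
        for stage in self.stages:
            df = stage.transform(df)
        return df

model = Pipeline().fit([1, 2, 3])
print(model.transform([1, 2, 3]))  # [1, 2, 3]
```

Whether Spark should behave this way or reject an empty pipeline is exactly the design question the comment raises.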
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14384#discussion_r75272474
--- Diff: mllib/src/main/scala/org/apache/spark/ml/r/ALSWrapper.scala ---
@@ -0,0 +1,124 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF)
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14467#discussion_r75271949
--- Diff: core/src/main/scala/org/apache/spark/api/python/PythonRDD.scala
---
@@ -889,21 +892,42 @@ private class PythonAccumulatorParam(@transient
private v
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/12790
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/63980/
Test PASSed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/12790
Merged build finished. Test PASSed.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/12790
**[Test build #63980 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63980/consoleFull)**
for PR 12790 at commit
[`e1df580`](https://github.com/apache/spark/commit/
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14699
**[Test build #63981 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63981/consoleFull)**
for PR 14699 at commit
[`5540366`](https://github.com/apache/spark/commit/5
GitHub user zjffdu opened a pull request:
https://github.com/apache/spark/pull/14699
[SPARK-17125][SPARKR] Allow to specify spark config using non-string type
in SparkR
## What changes were proposed in this pull request?
Allow to set spark configuration using non-string typ
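Since Spark configuration values are ultimately carried as strings, the change described above amounts to coercing non-string inputs before they are handed to the conf. A minimal sketch, assuming a hypothetical `normalize_conf` helper (not the actual SparkR code):

```python
# Coerce arbitrary config values to the string form Spark expects.
# Booleans become lowercase "true"/"false"; everything else uses str().
def normalize_conf(conf):
    def to_str(v):
        if isinstance(v, bool):
            return "true" if v else "false"
        return str(v)
    return {k: to_str(v) for k, v in conf.items()}

print(normalize_conf({"spark.executor.cores": 2,
                      "spark.eventLog.enabled": True}))
# {'spark.executor.cores': '2', 'spark.eventLog.enabled': 'true'}
```

Note the boolean case is handled before `str()`, since Python's `str(True)` would yield `"True"` rather than the lowercase form JVM-side parsing expects.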
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/14576
merging to master/2.0!
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14576
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14697
Merged build finished. Test PASSed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14697
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/63978/
Test PASSed.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14697
**[Test build #63978 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63978/consoleFull)**
for PR 14697 at commit
[`bd64ade`](https://github.com/apache/spark/commit/
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/12790
**[Test build #63980 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63980/consoleFull)**
for PR 12790 at commit
[`e1df580`](https://github.com/apache/spark/commit/e
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r75267415
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -144,16 +163,172 @@ private[spark] class HiveExternalCatalog(c
Github user Stibbons commented on the issue:
https://github.com/apache/spark/pull/14180
We are implementing Mesos here (it may take a while). While not so many people
use it, on paper it looks great ;)
Please mail me at gaetan[a t]xeberon.net if it is easier for you (it is for
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14576
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/63977/
Test PASSed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14576
Merged build finished. Test PASSed.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14576
**[Test build #63977 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63977/consoleFull)**
for PR 14576 at commit
[`50ed0d8`](https://github.com/apache/spark/commit/
Github user yanboliang commented on the issue:
https://github.com/apache/spark/pull/12790
Jenkins, test this please.
Github user gurvindersingh commented on the issue:
https://github.com/apache/spark/pull/13950
@ajbozarth That is strange. Here is the steps I used to test and its
working on my side
```
1. git clone https://github.com/apache/spark.git
2. edit .git/config to allow fetch
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r7528
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -144,16 +161,147 @@ private[spark] class HiveExternalCatalog(c
Github user zjffdu commented on the issue:
https://github.com/apache/spark/pull/14180
I can help if you have any questions regarding Spark on YARN. For Mesos,
since not so many people use it, we may put it in another ticket.
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14672
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/14672
thanks, merging to master!
Github user Stibbons commented on the issue:
https://github.com/apache/spark/pull/14180
Actually I was waiting for #14567 to be reviewed and merged :(
I might have some questions on how Spark deploys Python script on YARN or
Mesos if you know how it works
Github user holdenk commented on a diff in the pull request:
https://github.com/apache/spark/pull/14467#discussion_r75263947
--- Diff: core/src/main/scala/org/apache/spark/api/python/PythonRDD.scala
---
@@ -889,21 +892,42 @@ private class PythonAccumulatorParam(@transient
private
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14672
Merged build finished. Test PASSed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14672
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/63976/
Test PASSed.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14672
**[Test build #63976 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63976/consoleFull)**
for PR 14672 at commit
[`2eb02c1`](https://github.com/apache/spark/commit/
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14676
Merged build finished. Test PASSed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14676
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/63975/
Test PASSed.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14676
**[Test build #63975 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63975/consoleFull)**
for PR 14676 at commit
[`fb9de34`](https://github.com/apache/spark/commit/
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14467#discussion_r75260662
--- Diff: core/src/main/scala/org/apache/spark/api/python/PythonRDD.scala
---
@@ -889,21 +892,42 @@ private class PythonAccumulatorParam(@transient
private v
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14698
**[Test build #63979 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63979/consoleFull)**
for PR 14698 at commit
[`4fc1ec5`](https://github.com/apache/spark/commit/4
GitHub user lw-lin opened a pull request:
https://github.com/apache/spark/pull/14698
[SPARK-17061][SPARK-17093][SQL] `MapObjects` should make copies of
unsafe-backed data
## What changes were proposed in this pull request?
Currently `MapObjects` does not make copies of unsa
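The copy problem the PR title describes can be illustrated outside Spark. As a hedged analogy (plain Python, not Spark's unsafe rows): when a producer reuses one mutable buffer for every row, a consumer that stores references ends up with N aliases of the last row, and defensive copies are the fix.

```python
# A generator that reuses a single mutable buffer for every yielded row,
# mimicking row-object reuse in columnar/unsafe data paths.
def rows_reusing_buffer(n):
    buf = [0]
    for i in range(n):
        buf[0] = i
        yield buf  # same object every time!

no_copy = list(rows_reusing_buffer(3))
with_copy = [row.copy() for row in rows_reusing_buffer(3)]
print(no_copy)    # [[2], [2], [2]] -- all three are aliases of one buffer
print(with_copy)  # [[0], [1], [2]] -- copies taken before the buffer mutates
```

Materializing such rows into a collection (as `MapObjects` does) is exactly the situation where the copy must happen before the underlying buffer is overwritten.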
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14674#discussion_r75259697
--- Diff: core/src/main/scala/org/apache/spark/SecurityManager.scala ---
@@ -282,6 +282,11 @@ private[spark] class SecurityManager(sparkConf:
SparkConf)
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14693#discussion_r75259542
--- Diff:
core/src/main/java/org/apache/spark/util/collection/unsafe/sort/UnsafeExternalSorter.java
---
@@ -522,7 +522,7 @@ public long spill() throws IOExce
Github user sethah commented on a diff in the pull request:
https://github.com/apache/spark/pull/13796#discussion_r75258437
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/classification/MultinomialLogisticRegression.scala
---
@@ -0,0 +1,611 @@
+/*
+ * Licensed to the A
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r75258409
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -144,16 +163,172 @@ private[spark] class HiveExternalCatalog(
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r75257704
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -144,16 +161,147 @@ private[spark] class HiveExternalCatalog(