Github user wangmiao1981 commented on the issue:
https://github.com/apache/spark/pull/14433
@felixcheung Checking whether it is running from a shell is not exactly the
same as checking which shell is calling it. My approach depends on the fact
that the Logging trait is used in three
Github user wangmiao1981 commented on the issue:
https://github.com/apache/spark/pull/14433
@felixcheung "I think we might have a need to create a helper for "am I
running in the SparkR shell" function?" Do you mean for #14258? Not for this
PR, right?
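The helper being discussed ("am I running in the SparkR shell?") would be R code inside SparkR; the general technique of detecting an interactive session can be sketched in Python, where interactive interpreters define the `sys.ps1` prompt attribute (the helper name is illustrative, not SparkR's API):

```python
import sys

def in_interactive_shell():
    # Interactive interpreters set sys.ps1 (the ">>> " prompt string);
    # scripts and batch jobs do not, so its presence marks a live shell.
    return hasattr(sys, "ps1")

# Run from a script this is False; typed into a REPL it is True.
running_interactively = in_interactive_shell()
```

This only distinguishes interactive from batch execution; telling apart *which* shell launched the process, as the comment notes, is a different question.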
Github user wangmiao1981 commented on the issue:
https://github.com/apache/spark/pull/14433
@felixcheung I checked which object calls the log and printed the message
accordingly.
./bin/sparkR
R version 3.3.0 (2016-05-03) -- "Supposedly Educational"
Copyrigh
Github user wangmiao1981 commented on the issue:
https://github.com/apache/spark/pull/14433
Jenkins, re-test this please.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled
Github user wangmiao1981 commented on the issue:
https://github.com/apache/spark/pull/14433
The failure seems unrelated to the change.
Github user wangmiao1981 commented on the issue:
https://github.com/apache/spark/pull/13647
ping @mengxr @jkbradley @yanboliang Can you give me some comments on this
PR? I can start improving it for 2.1+.
Thanks!
Github user wangmiao1981 commented on the issue:
https://github.com/apache/spark/pull/14433
@felixcheung I will try to retrieve terminal/shell type before printing out
the message. I will update the PR if I can find a way of doing that. Thanks!
Github user wangmiao1981 commented on the issue:
https://github.com/apache/spark/pull/14433
@felixcheung Let me check the code that launches the sparkR shell. Can you
point me to the code?
Thanks!
Github user wangmiao1981 commented on the issue:
https://github.com/apache/spark/pull/14182
@junyangq I added the weightCol. Can you take a look?
Thanks!
Github user wangmiao1981 commented on the issue:
https://github.com/apache/spark/pull/14433
@shivaram I found that spark-shell and pyspark use the same message:
Python 2.7.11 |Anaconda 2.4.0 (x86_64)| (default, Dec 6 2015, 18:57:58)
[GCC 4.2.1 (Apple Inc. build 5577
Github user wangmiao1981 commented on the issue:
https://github.com/apache/spark/pull/14182
@junyangq Yanbo has pointed me to an example of adding weightCol. I will
update this PR tomorrow.
Thanks!
GitHub user wangmiao1981 opened a pull request:
https://github.com/apache/spark/pull/14433
[SPARK-16829][SparkR]:sparkR sc.setLogLevel doesn't work
## What changes were proposed in this pull request?
(Please fill in changes proposed in this fix)
./bin/sparkR
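SPARK-16829 reports that `sc.setLogLevel` has no effect when called from the SparkR shell. As a loose analogy only (not SparkR's code path), Python's standard `logging` module shows what setting a level should accomplish: messages below the threshold stop being emitted.

```python
import logging

# Illustrative logger; the point is only the threshold semantics.
logger = logging.getLogger("demo")
logger.setLevel(logging.WARN)

info_enabled = logger.isEnabledFor(logging.INFO)  # below threshold
warn_enabled = logger.isEnabledFor(logging.WARN)  # at threshold
```

In the bug, the R call never reaches the JVM-side log4j configuration, so the threshold never changes.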
Github user wangmiao1981 commented on the issue:
https://github.com/apache/spark/pull/14182
@junyangq I added a check in the wrapper to make sure that the right-hand
side is 1 and added a unit test.
For the optional weight, can you give me some explanation or a function
signature
Github user wangmiao1981 commented on the issue:
https://github.com/apache/spark/pull/14182
@felixcheung @yanboliang @junyangq I have addressed all your comments.
Thanks!
Github user wangmiao1981 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14182#discussion_r72677362
--- Diff: R/pkg/R/mllib.R ---
@@ -292,6 +299,44 @@ setMethod("summary", signature(object =
"NaiveBayesModel"),
Github user wangmiao1981 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14182#discussion_r72676030
--- Diff: R/pkg/inst/tests/testthat/test_mllib.R ---
@@ -454,4 +454,20 @@ test_that("spark.survreg", {
}
})
Github user wangmiao1981 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14182#discussion_r72675951
--- Diff: R/pkg/R/mllib.R ---
@@ -526,6 +571,15 @@ setMethod("write.ml", signature(object =
"KMeansModel"
Github user wangmiao1981 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14182#discussion_r72675873
--- Diff: R/pkg/R/mllib.R ---
@@ -292,6 +299,44 @@ setMethod("summary", signature(object =
"NaiveBayesModel"),
Github user wangmiao1981 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14182#discussion_r72675826
--- Diff: R/pkg/R/mllib.R ---
@@ -292,6 +299,44 @@ setMethod("summary", signature(object =
"NaiveBayesModel"),
Github user wangmiao1981 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14182#discussion_r72675653
--- Diff: R/pkg/R/mllib.R ---
@@ -292,6 +299,44 @@ setMethod("summary", signature(object =
"NaiveBayesModel"),
Github user wangmiao1981 closed the pull request at:
https://github.com/apache/spark/pull/14098
Github user wangmiao1981 commented on the issue:
https://github.com/apache/spark/pull/14098
As #14317 has been merged, I am closing this PR.
Github user wangmiao1981 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14317#discussion_r71976456
--- Diff: examples/src/main/python/sql/datasource.py ---
@@ -0,0 +1,154 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
Github user wangmiao1981 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14317#discussion_r71976400
--- Diff: examples/src/main/python/sql/basic.py ---
@@ -0,0 +1,194 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
Github user wangmiao1981 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14317#discussion_r71976352
--- Diff: docs/sql-programming-guide.md ---
@@ -79,7 +79,7 @@ The entry point into all functionality in Spark is the
[`SparkSession`](api/java
Github user wangmiao1981 commented on the issue:
https://github.com/apache/spark/pull/14098
@liancheng Thanks! I will review the PR #14317
Github user wangmiao1981 commented on the issue:
https://github.com/apache/spark/pull/14098
@liancheng I addressed all your comments, except: 1) 2-space indents: I
tried it, but it failed the Python style tests, so I left the 4-space indents;
2) `col('...')` I haven't changed
Github user wangmiao1981 commented on the issue:
https://github.com/apache/spark/pull/14098
@liancheng Sorry for replying late. I was on vacation for the last few days.
I have addressed most of your comments. Only the .md file is not updated
yet.
By the way, I am trying
Github user wangmiao1981 commented on the issue:
https://github.com/apache/spark/pull/14098
Not completed yet. Please hold off on reviewing.
Github user wangmiao1981 commented on the issue:
https://github.com/apache/spark/pull/14182
test this please.
Github user wangmiao1981 commented on the issue:
https://github.com/apache/spark/pull/14182
@vectorijk There is no Scala implementation of `summary()`.
Github user wangmiao1981 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14182#discussion_r70682459
--- Diff: R/pkg/R/mllib.R ---
@@ -53,6 +53,13 @@ setClass("AFTSurvivalRegressionModel",
representation(jobj = "jobj"))
#' @no
GitHub user wangmiao1981 opened a pull request:
https://github.com/apache/spark/pull/14182
[SPARK-16444][WIP][SparkR]: Isotonic Regression wrapper in SparkR
## What changes were proposed in this pull request?
(Please fill in changes proposed in this fix)
Add
Github user wangmiao1981 commented on the issue:
https://github.com/apache/spark/pull/14098
@liancheng Thanks for your review! I will address your comments asap.
Currently, I am working on a ML wrapper for R.
Github user wangmiao1981 commented on the issue:
https://github.com/apache/spark/pull/14098
@liancheng Can you review it? Thanks!
GitHub user wangmiao1981 opened a pull request:
https://github.com/apache/spark/pull/14098
[SPARK-16380][SQL][Example]:Update SQL examples and programming guide for
Python language binding
## What changes were proposed in this pull request?
(Please fill in changes proposed
GitHub user wangmiao1981 opened a pull request:
https://github.com/apache/spark/pull/14021
[SPARK-16260][ML][Example]:PySpark ML Example Improvements and Cleanup
## What changes were proposed in this pull request?
1). Remove unused import in Scala example;
2). Move
Github user wangmiao1981 commented on the issue:
https://github.com/apache/spark/pull/13647
@yanboliang As we discussed offline, I made the implementation. You might
want to take a pass on it. cc @mengxr @josephkb
GitHub user wangmiao1981 opened a pull request:
https://github.com/apache/spark/pull/13755
[SPARK-16040][MLlib][DOC]:spark.mllib PIC document extra line of reference
## What changes were proposed in this pull request?
In the 2.0 document, Line "A full ex
GitHub user wangmiao1981 opened a pull request:
https://github.com/apache/spark/pull/13647
[SPARK-15784][ML][WIP]:Add Power Iteration Clustering to spark.ml
## What changes were proposed in this pull request?
This PR is to add Power Iteration Clustering to spark.ml
Github user wangmiao1981 commented on the issue:
https://github.com/apache/spark/pull/13508
@felixcheung Added isnan in the NAMESPACE file.
Github user wangmiao1981 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13476#discussion_r65842260
--- Diff: R/pkg/inst/tests/testthat/test_sparkSQL.R ---
@@ -1137,6 +1137,13 @@ test_that("string operators", {
expect_equal(count(wher
Github user wangmiao1981 commented on the issue:
https://github.com/apache/spark/pull/13508
@felixcheung As we discussed, I opened a JIRA on exporting is.nan. Can
you review it? Thanks!
Github user wangmiao1981 commented on the issue:
https://github.com/apache/spark/pull/13476
@felixcheung Any more comments? The version check is needed; otherwise it
can't pass the unit tests. All other review comments are addressed.
GitHub user wangmiao1981 opened a pull request:
https://github.com/apache/spark/pull/13508
[SPARK-15766][SparkR]:R should export is.nan
## What changes were proposed in this pull request?
When reviewing SPARK-15545, we found that is.nan is not exported, which
should
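Background on why an explicit is.nan export matters (this is IEEE-754 behavior, not specific to R): NaN never compares equal to anything, including itself, so an equality test can never find it. The same semantics in Python:

```python
import math

nan = float("nan")

# NaN is unequal to itself, so `x == nan` always fails to detect it...
self_equal = (nan == nan)

# ...which is why a dedicated predicate such as math.isnan (or R's is.nan)
# is the only reliable check.
detected = math.isnan(nan)
```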
Github user wangmiao1981 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13476#discussion_r65742476
--- Diff: R/pkg/R/column.R ---
@@ -151,6 +151,40 @@ setMethod("substr", signature(x = "Column"),
Github user wangmiao1981 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13476#discussion_r65657352
--- Diff: R/pkg/R/generics.R ---
@@ -695,10 +695,6 @@ setGeneric("desc", function(x) {
standardGeneric("desc") })
Github user wangmiao1981 commented on the issue:
https://github.com/apache/spark/pull/13476
@shivaram It seems that the integration server uses a lower version of R. Do
you know whether there is a mechanism to optionally compile functions by
checking the version? For example, in C, we can use
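The mechanism being asked about — defining something only when the runtime version supports it, like a C `#ifdef` — can be sketched in Python with a `sys.version_info` gate (the fallback below is an illustration, not Spark code): use the built-in when the interpreter is new enough, otherwise define a substitute.

```python
import math
import sys

# math.isqrt was added in Python 3.8; older runtimes get a Newton's-method
# fallback, mirroring an R getRversion() gate or a C preprocessor check.
if sys.version_info >= (3, 8):
    isqrt = math.isqrt
else:
    def isqrt(n):
        if n == 0:
            return 0
        x = n
        y = (x + 1) // 2
        while y < x:
            x = y
            y = (x + n // x) // 2
        return x
```

Either branch yields the same callable, so the rest of the code is version-agnostic.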
GitHub user wangmiao1981 opened a pull request:
https://github.com/apache/spark/pull/13476
[SPARK-15684][SparkR]Not mask startsWith and endsWith in R
## What changes were proposed in this pull request?
In R 3.3.0, startsWith and endsWith were added. In this PR, I make
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/13301#issuecomment-20897
@MLnick Done. Removed seed in Python and Java.
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/13301#issuecomment-222040536
@MLnick I am on travel now. I will update it on Saturday. Thanks!
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/13284#issuecomment-221786544
@shivaram I will create a JIRA soon. On Thursday and Friday I will be
traveling to NYC. I will do it on Saturday.
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/13301#issuecomment-221722166
@srowen I think we can delete it. Let me double check it and update this
PR. Thanks!
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/13301#issuecomment-221722860
I grepped all Scala/Java/Python files and there is no reference to the data file.
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/13301#issuecomment-221706864
@srowen ML uses sample_libsvm_data.txt in all three examples.
sample_naive_bayes_data.txt is not in libsvm format. The format is shown
below:
0,1
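For reference, a libsvm line is a label followed by sparse `index:value` pairs (for example `0 128:51 129:159`), whereas sample_naive_bayes_data.txt is comma-separated, which is the mismatch being pointed out. A minimal parsing sketch (the function name is illustrative):

```python
def parse_libsvm_line(line):
    # "label i1:v1 i2:v2 ..." -> (label, {index: value}) with sparse indices
    parts = line.split()
    label = float(parts[0])
    features = {}
    for pair in parts[1:]:
        index, value = pair.split(":")
        features[int(index)] = float(value)
    return label, features
```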
GitHub user wangmiao1981 opened a pull request:
https://github.com/apache/spark/pull/13301
[SPARK-15449][MLlib][Example]:Wrong Data Format - Documentation Issue
## What changes were proposed in this pull request?
(Please fill in changes proposed in this fix
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/13266#issuecomment-221681098
@MLnick Done. Thanks!
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/13266#issuecomment-221670964
@MLnick Sure. I will do it soon. Now, I am debugging an R bug. Thanks!
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/13284#issuecomment-221668874
@shivaram I am debugging and trying to find a hint.
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/13284#issuecomment-221656579
Re-tested on Ubuntu, the pipedRDD test case still fails. R version 3.3.0
beta (2016-03-30 r70404)
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/13284#issuecomment-221641136
I commented out the failed case. There are no other failures. As mentioned by
@shivaram, we can try to fix the pipedRDD in a separate JIRA. I suspect it is
related
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/13284#issuecomment-221487340
@vectorijk please run conflicts(detail = TRUE) and check
$`package:SparkR` against namesOfMasked in test_context.R.
3.2.0 should have more methods than 3.1.0
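The check described here compares the names SparkR masks against a per-R-version expected list; the idea of a version-dependent expected set can be sketched in Python (every name below is illustrative, not the real namesOfMasked contents):

```python
def expected_masked(r_version):
    # Newer R releases export more base functions, so SparkR masks more of
    # them; e.g. startsWith/endsWith only exist in base R from 3.3.0 on.
    names = {"filter", "sample", "summary"}  # illustrative base set
    if tuple(int(p) for p in r_version.split(".")) >= (3, 3, 0):
        names |= {"startsWith", "endsWith"}
    return names
```

A test comparing against a fixed list will therefore pass on one R version and fail on another unless the expectation is gated the same way.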
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/13284#issuecomment-221486087
@vectorijk As we discussed above, the R version is different locally and on
Jenkins. We installed R 3.3.0 locally while Jenkins still uses the old version
Github user wangmiao1981 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13284#discussion_r64520919
--- Diff: R/pkg/inst/tests/testthat/test_context.R ---
@@ -24,10 +24,18 @@ test_that("Check masked functions", {
func <- lapply(mas
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/13284#issuecomment-221483353
@shivaram @felixcheung For the failing pipedRDD test case, if I copy &
paste it in the SparkR shell, it works fine.
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/13284#issuecomment-221477072
@shivaram I will make the change with R version check.
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/13284#issuecomment-221476011
@shivaram The pipedRDD one seems to work when using sudo on Linux. It does
not work on my Mac though.
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/13284#issuecomment-221475769
@felixcheung
> conflicts(detail = TRUE)
$.GlobalEnv
[1] "df"
$`package:SparkR`
[1] "alias"
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/13284#issuecomment-221475397
R version 3.3.0 (2016-05-03) -- "Supposedly Educational"
Copyright (C) 2016 The R Foundation for Statistical Computing
Platform: x86_64-apple-da
GitHub user wangmiao1981 opened a pull request:
https://github.com/apache/spark/pull/13284
[SPARK-15439][SparkR]:Failed to run unit test in SparkR
## What changes were proposed in this pull request?
(Please fill in changes proposed in this fix)
There are some failures
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/13266#issuecomment-221345524
@jerryshao We are going through the examples one by one now. In the last few
weeks, we have fixed many of these. The intention is to make the examples as
consistent
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/13266#issuecomment-221168009
@jerryshao We have fixed several similar bugs. I am doing QA for the ML 2.0
documentation now.
GitHub user wangmiao1981 opened a pull request:
https://github.com/apache/spark/pull/13266
[SPARK-15492][ML][DOC]:Binarization scala example copy & paste to
spark-shell error
## What changes were proposed in this pull request?
(Please fill in changes proposed in this
GitHub user wangmiao1981 opened a pull request:
https://github.com/apache/spark/pull/13213
[SPARK-15363][ML][Example]:Example code shouldn't use VectorImplicits._,
asML/fromML
## What changes were proposed in this pull request?
(Please fill in changes proposed in this fix
Github user wangmiao1981 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13163#discussion_r63988573
--- Diff:
launcher/src/test/java/org/apache/spark/launcher/SparkSubmitCommandBuilderSuite.java
---
@@ -59,6 +59,18 @@ public void
Github user wangmiao1981 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13163#discussion_r63970749
--- Diff:
launcher/src/test/java/org/apache/spark/launcher/SparkSubmitCommandBuilderSuite.java
---
@@ -59,6 +59,17 @@ public void
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/13163#issuecomment-220202347
@vanzin I will check and fix the existing unit test failure tonight.
[error] Test
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/13163#issuecomment-220198683
@vanzin Yes, that is what I mean and I want to confirm with you. I only
check that there is no exception.
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/13163#issuecomment-220197352
@vanzin Based on my understanding, the help message is printed on the console
and the launcher should not expect a help message String. So, I will expect no
exception
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/13163#issuecomment-220186565
retest this please
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/13163#issuecomment-220163385
retest this please
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/13163#issuecomment-220106677
@vanzin Sure. I will do it after I fully understand the logic. Good to
learn how Spark submit works. Thanks for your time!
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/13163#issuecomment-220102228
@vanzin Thanks for your clarification. Let me learn and revise the change.
'--help' is also broken.
Github user wangmiao1981 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13163#discussion_r63747463
--- Diff: launcher/src/main/java/org/apache/spark/launcher/Main.java ---
@@ -101,6 +110,80 @@ public static void main(String[] argsArray) throws
GitHub user wangmiao1981 opened a pull request:
https://github.com/apache/spark/pull/13163
[SPARK-15360][Spark-Submit]Should print spark-submit usage when no
arguments are specified
## What changes were proposed in this pull request?
(Please fill in changes proposed
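The launcher change itself lives in Java (launcher/Main.java), but the core idea — print usage and exit non-zero when no arguments are given — can be sketched (the usage string is abbreviated):

```python
def main(argv):
    # With no arguments, show usage instead of an obscure failure.
    if not argv:
        print("Usage: spark-submit [options] <app jar | python file> [app arguments]")
        return 1
    # ... the normal submit path would continue here (omitted)
    return 0
```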
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/12788#issuecomment-219617189
@MLnick I made changes accordingly. Thanks!
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/13110#issuecomment-219484431
@yanboliang For the Java example, class Rating is equivalent to the case class
Rating in the Scala example. It is not straightforward to remove class Rating
in Java. I remove
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/12788#issuecomment-219315029
ping @MLnick @sethah @zhengruifeng @yanboliang
Github user wangmiao1981 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13110#discussion_r63284035
--- Diff:
examples/src/main/scala/org/apache/spark/examples/ml/ALSExample.scala ---
@@ -28,7 +28,7 @@ object ALSExample {
// $example
GitHub user wangmiao1981 opened a pull request:
https://github.com/apache/spark/pull/13110
[SPARK-15318][ML][Example]:spark.ml Collaborative Filtering example does
not work in spark-shell
## What changes were proposed in this pull request?
(Please fill in changes proposed
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/13083#issuecomment-219109896
@MLnick I removed all other blank lines in ml-clustering.md. One blank
line works.
Github user wangmiao1981 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13083#discussion_r63220262
--- Diff: docs/ml-clustering.md ---
@@ -116,6 +116,8 @@ Refer to the [Python API
docs](api/python/pyspark.ml.html#pyspark.ml.clustering
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/13083#issuecomment-218955997
cc @zhengruifeng @yanboliang
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/12969#issuecomment-218910819
@srowen I have made changes as we discussed. Thanks!
GitHub user wangmiao1981 opened a pull request:
https://github.com/apache/spark/pull/13083
[SPARK-15305][ML][DOC]:spark.ml document Bisecting k-means has the
incorrect format
## What changes were proposed in this pull request?
(Please fill in changes proposed in this fix
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/12882#issuecomment-218910741
@srowen I have made the changes. Thanks!
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/12788#issuecomment-218893524
@yanboliang @MLnick @zhengruifeng @sethah simplified the example using
kmeans data and addressed review comments.
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/12882#issuecomment-218651885
@srowen I will make the suggested changes.
Thanks!
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/12922#issuecomment-218226268
@mengxr Thanks for your comments. As this one is a good-to-have feature, I
agree to postpone it to 2.1 and I will change the JIRA to porting binary
classification
Github user wangmiao1981 commented on the pull request:
https://github.com/apache/spark/pull/12882#issuecomment-218010920
@srowen Any comments on the change? Thanks!