Github user rerngvit closed the pull request at:
https://github.com/apache/spark/pull/14264
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Github user rerngvit commented on the issue:
https://github.com/apache/spark/pull/14264
Since SPARK-11977 didn't get merged and this PR is blocked on it, I
decided to close this PR.
---
Github user rerngvit commented on the issue:
https://github.com/apache/spark/pull/14309
I have not received any replies on this. I presume this is a soft reject
of this PR.
---
Github user rerngvit closed the pull request at:
https://github.com/apache/spark/pull/14309
---
Github user rerngvit commented on a diff in the pull request:
https://github.com/apache/spark/pull/14309#discussion_r73325142
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala
---
@@ -896,6 +896,19 @@ class DataFrameSuite extends QueryTest
Github user rerngvit commented on the issue:
https://github.com/apache/spark/pull/14309
Yes.
---
Github user rerngvit commented on the issue:
https://github.com/apache/spark/pull/14309
@sun-rui I added more test cases for this. Please have a look. It would be
great if you could enable Jenkins CI testing; I don't have permission to
do so.
---
Github user rerngvit commented on a diff in the pull request:
https://github.com/apache/spark/pull/14309#discussion_r71986608
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala
---
@@ -641,6 +641,10 @@ class DataFrameSuite extends QueryTest
Github user rerngvit commented on the issue:
https://github.com/apache/spark/pull/14309
@shivaram
---
Github user rerngvit commented on the issue:
https://github.com/apache/spark/pull/14309
@sun-rui
---
GitHub user rerngvit opened a pull request:
https://github.com/apache/spark/pull/14309
[SPARK-11977][SQL] Support accessing a column containing "." without backticks
## What changes were proposed in this pull request?
- Add support for accessing a DataFrame column that contains "." in its name without backticks
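To make the intent concrete, here is a minimal, self-contained sketch (illustrative only, not Spark's actual code) of how a dotted column name can be split into name parts while honoring backtick quoting, in the spirit of Catalyst's attribute-name parsing:

```scala
// Hypothetical sketch of backtick-aware attribute-name parsing.
// "a.b" is treated as nested names Seq("a", "b"), while the
// backtick-quoted "`a.b`" is one literal name, Seq("a.b").
def parseAttributeName(name: String): Seq[String] = {
  val parts = scala.collection.mutable.ArrayBuffer.empty[String]
  val current = new StringBuilder
  var inBacktick = false
  for (c <- name) c match {
    case '`' =>
      inBacktick = !inBacktick          // toggle quoting; backticks are not emitted
    case '.' if !inBacktick =>
      parts += current.toString         // unquoted dot ends a name part
      current.clear()
    case other =>
      current += other
  }
  parts += current.toString
  parts.toSeq
}
```

Under this sketch, `parseAttributeName("a.b")` yields `Seq("a", "b")` and `parseAttributeName("` + "`" + `a.b` + "`" + `")` yields `Seq("a.b")`; the PR's goal is to let the unquoted form also resolve to a literal column name when no nested field matches.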
Github user rerngvit commented on the issue:
https://github.com/apache/spark/pull/14264
@sun-rui Thanks. I understand now. I will submit a PR for SPARK-11977 first.
---
Github user rerngvit commented on a diff in the pull request:
https://github.com/apache/spark/pull/14264#discussion_r71624905
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.scala
---
@@ -201,6 +201,8 @@ abstract class LogicalPlan
Github user rerngvit commented on the issue:
https://github.com/apache/spark/pull/14264
@sun-rui
> could you share the background that this PR can fix the issue.
As stated in https://issues.apache.org/jira/browse/SPARK-11976, Spark core
already supports column names containing the "." character.
Github user rerngvit commented on the issue:
https://github.com/apache/spark/pull/14264
@felixcheung: in the recent patch, I removed the "." check from the function
colnames(), along with its test code, in the file you indicated.
---
Github user rerngvit commented on the issue:
https://github.com/apache/spark/pull/14264
@felixcheung
I removed the prohibition of "." in the SparkR function colnames() in the
recent patch, along with its test in the file you mentioned (test_sparkSQL.R
L779).
GitHub user rerngvit opened a pull request:
https://github.com/apache/spark/pull/14264
[SPARK-11976][SPARKR] Support "." character in DataFrame column name
## What changes were proposed in this pull request?
- Add support for the "." character in DataFrame column names
Github user rerngvit commented on the issue:
https://github.com/apache/spark/pull/14264
@sun-rui Please have a look.
---
Github user rerngvit commented on the pull request:
https://github.com/apache/spark/pull/8962#issuecomment-146783342
@sun-rui @felixcheung Thanks for the review. I revised according to your
comments. Please have a look.
---
Github user rerngvit commented on the pull request:
https://github.com/apache/spark/pull/8962#issuecomment-146695490
The errors seem unrelated to this PR; there might be an issue with Jenkins:
"ERROR: Timeout after 15 minutes
ERROR: Error fetching remote repo 'o
Github user rerngvit commented on the pull request:
https://github.com/apache/spark/pull/8962#issuecomment-145368055
@sun-rui I revised according to your comments. Please have a look.
---
GitHub user rerngvit opened a pull request:
https://github.com/apache/spark/pull/8962
[SPARK-10905][SparkR]: Export freqItems() for DataFrameStatFunctions
- Add function (together with roxygen2 doc
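For background, DataFrameStatFunctions.freqItems is based on the one-pass, counter-based frequent-items algorithm of Karp, Schenker, and Papadimitriou, which may return false positives but never misses an item above the support threshold. A self-contained sketch (illustrative only, not Spark's implementation):

```scala
// Sketch of the frequent-items heuristic underlying freqItems.
// Keeps at most ceil(1/support) counters; when the table is full and an
// unseen item arrives, every counter is decremented and zeroed entries drop.
def freqItems[T](items: Seq[T], support: Double): Set[T] = {
  require(support > 0 && support < 1, "support must be in (0, 1)")
  val k = math.ceil(1.0 / support).toInt
  val counts = scala.collection.mutable.Map.empty[T, Long]
  for (item <- items) {
    if (counts.contains(item)) {
      counts(item) += 1
    } else if (counts.size < k) {
      counts(item) = 1L
    } else {
      // Decrement all counters; entries that hit zero are evicted.
      val zeroed = counts.collect { case (v, 1L) => v }.toList
      counts.keys.foreach(v => counts(v) -= 1)
      zeroed.foreach(counts.remove)
    }
  }
  counts.keySet.toSet
}
```

Every item occurring in more than a `support` fraction of the input is guaranteed to survive in the counter table; a second pass (which Spark's API skips, hence its documented false-positive caveat) would be needed to prune the rest.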
Github user rerngvit commented on the pull request:
https://github.com/apache/spark/pull/8882#issuecomment-145187946
@yhuai Sorry for that.
---
Github user rerngvit commented on the pull request:
https://github.com/apache/spark/pull/8882#issuecomment-144946863
@jkbradley Thank you for your review. I updated the doc for avgMetrics
according to your comment. Please have a look.
---
Github user rerngvit commented on a diff in the pull request:
https://github.com/apache/spark/pull/8882#discussion_r40483611
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/tuning/CrossValidator.scala ---
@@ -140,7 +140,11 @@ class CrossValidator(override val uid: String
Github user rerngvit commented on a diff in the pull request:
https://github.com/apache/spark/pull/8882#discussion_r40295753
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/tuning/CrossValidator.scala ---
@@ -140,7 +140,11 @@ class CrossValidator(override val uid: String
GitHub user rerngvit opened a pull request:
https://github.com/apache/spark/pull/8882
[SPARK-9798] [ML] CrossValidatorModel Documentation Improvements
Document CrossValidatorModel members: bestModel and avgMetrics
You can merge this pull request into a Git repository by running
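As context for the avgMetrics documentation: each entry of avgMetrics is the evaluation metric for one candidate parameter map, averaged over all cross-validation folds. A hedged sketch of that averaging (illustrative names, not Spark's code):

```scala
// Average per-fold metrics into one value per parameter-grid candidate.
// foldMetrics(f)(i) is the metric for candidate i measured on fold f;
// the result aligns index-for-index with the parameter grid, as
// CrossValidatorModel.avgMetrics does with its estimatorParamMaps.
def averageMetrics(foldMetrics: Seq[Array[Double]]): Array[Double] = {
  require(foldMetrics.nonEmpty, "need at least one fold")
  val numCandidates = foldMetrics.head.length
  val sums = new Array[Double](numCandidates)
  for (fold <- foldMetrics; i <- 0 until numCandidates) sums(i) += fold(i)
  sums.map(_ / foldMetrics.length)
}
```

bestModel is then the model refit on the full dataset with the parameter map whose averaged metric is best (maximal or minimal depending on the evaluator).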