GitHub user anabranch opened a pull request:
https://github.com/apache/spark/pull/5179
[DOCUMENTATION] Fixed Missing Type Import in Documentation
Needed to import the types specifically, not the more general pyspark.sql
You can merge this pull request into a Git repository
GitHub user anabranch opened a pull request:
https://github.com/apache/spark/pull/8302
Fix Broken Link
Link was broken because it included tick marks.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/anabranch/spark patch-1
GitHub user anabranch opened a pull request:
https://github.com/apache/spark/pull/10179
[DOCS][SPARK-11964][SPARK-6725] Add in Pipeline Import/Export Documentation
let me know if you'd like me to change anything!
You can merge this pull request into a Git repository by running
Github user anabranch commented on the pull request:
https://github.com/apache/spark/pull/10179#issuecomment-162666478
not sure if my notations are correct for the title @jkbradley, let me know
if I need to change anything!
---
If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well.
Github user anabranch commented on the pull request:
https://github.com/apache/spark/pull/10179#issuecomment-162706754
@jkbradley will make those changes shortly.
@BenFradet will make those changes as well.
---
Github user anabranch commented on the pull request:
https://github.com/apache/spark/pull/10179#issuecomment-163463790
@jkbradley does this work for you by the way?
---
Github user anabranch commented on the pull request:
https://github.com/apache/spark/pull/10179#issuecomment-163519922
@BenFradet integrated your feedback, thanks.
---
Github user anabranch commented on the pull request:
https://github.com/apache/spark/pull/10179#issuecomment-164033878
@jkbradley should be good to go now!
---
Github user anabranch commented on the pull request:
https://github.com/apache/spark/pull/10179#issuecomment-163845286
@jkbradley gotcha! I misinterpreted your last comments, my fault.
One thing I'm confused about, though, is that the Estimator, Transformer, and
Param section
Github user anabranch commented on the pull request:
https://github.com/apache/spark/pull/10179#issuecomment-164013417
@jkbradley should be good to go! Sorry for being such a pain!
---
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/10179#discussion_r47391102
--- Diff: docs/ml-guide.md ---
@@ -455,6 +459,17 @@ val pipeline = new Pipeline()
// Fit the pipeline to training documents.
val model
GitHub user anabranch opened a pull request:
https://github.com/apache/spark/pull/13916
[SPARK-16220] Revert Change to Bring Back SHOW FUNCTIONS Functionality
## What changes were proposed in this pull request?
- Fix tests regarding show functions functionality
- Revert
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/13916
jenkins test this please
---
GitHub user anabranch opened a pull request:
https://github.com/apache/spark/pull/11093
update documentation
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/anabranch/spark master
Alternatively you can review and apply
GitHub user anabranch opened a pull request:
https://github.com/apache/spark/pull/11094
fix dynamicAllocation documentation for all cluster managers
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/anabranch/spark dynamic-docs
Github user anabranch closed the pull request at:
https://github.com/apache/spark/pull/11093
---
Github user anabranch commented on the pull request:
https://github.com/apache/spark/pull/13041#issuecomment-218341617
cc: @andrewor14
---
GitHub user anabranch opened a pull request:
https://github.com/apache/spark/pull/13041
fix SPARK-15264, add test cases
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/anabranch/spark master
Alternatively you can review
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/13041#discussion_r62793571
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/DefaultSource.scala
---
@@ -61,7 +61,9 @@ class DefaultSource extends
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/13945#discussion_r68790649
--- Diff: docs/structured-streaming-programming-guide.md ---
@@ -0,0 +1,888 @@
+---
+layout: global
+displayTitle: Structured Streaming
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/13945#discussion_r68790090
--- Diff: docs/structured-streaming-programming-guide.md ---
@@ -0,0 +1,888 @@
+---
+layout: global
+displayTitle: Structured Streaming
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/13945#discussion_r68790342
--- Diff: docs/structured-streaming-programming-guide.md ---
@@ -0,0 +1,888 @@
+---
+layout: global
+displayTitle: Structured Streaming
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/13945#discussion_r68790860
--- Diff: docs/structured-streaming-programming-guide.md ---
@@ -0,0 +1,888 @@
+---
+layout: global
+displayTitle: Structured Streaming
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/13945#discussion_r68791843
--- Diff: docs/structured-streaming-programming-guide.md ---
@@ -0,0 +1,888 @@
+---
+layout: global
+displayTitle: Structured Streaming
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/13945#discussion_r68792137
--- Diff: docs/structured-streaming-programming-guide.md ---
@@ -0,0 +1,888 @@
+---
+layout: global
+displayTitle: Structured Streaming
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/13945#discussion_r68792300
--- Diff: docs/structured-streaming-programming-guide.md ---
@@ -0,0 +1,888 @@
+---
+layout: global
+displayTitle: Structured Streaming
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/13945#discussion_r68792373
--- Diff: docs/structured-streaming-programming-guide.md ---
@@ -0,0 +1,888 @@
+---
+layout: global
+displayTitle: Structured Streaming
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/13945#discussion_r68789246
--- Diff: docs/structured-streaming-programming-guide.md ---
@@ -0,0 +1,888 @@
+---
+layout: global
+displayTitle: Structured Streaming
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/13945#discussion_r68790047
--- Diff: docs/structured-streaming-programming-guide.md ---
@@ -0,0 +1,888 @@
+---
+layout: global
+displayTitle: Structured Streaming
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/16138
Manual Correctness tests:
Python
```
>>> from pyspark.sql.functions import to_date, to_timestamp, lit
>>>
spark.range(1).select(to_date(lit('2016-01-02'),'
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/16138
@cloud-fan great questions.
I thought that was strange too. However, this is the **current** behavior, as
well as Java `SimpleDateFormat`'s behavior. I did not implement that logic
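The `SimpleDateFormat` behavior referred to here can be reproduced outside Spark. A minimal sketch in plain Java (not code from the PR; the helper names are illustrative): by default `SimpleDateFormat` is lenient, so out-of-range fields roll over instead of failing, while strict mode rejects the same input.

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

// Illustrative sketch, not Spark code: shows java.util.SimpleDateFormat's
// default leniency, which is the behavior discussed in the comment above.
public class Main {
    // Parse with the default (lenient) calendar and re-format the result.
    static String lenientParse(String s) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd");
        try {
            Date d = fmt.parse(s);
            return fmt.format(d);
        } catch (ParseException e) {
            return null;
        }
    }

    // Parse with leniency disabled; returns null when the input is rejected.
    static String strictParse(String s) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd");
        fmt.setLenient(false);
        try {
            return fmt.format(fmt.parse(s));
        } catch (ParseException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        // Lenient: month 13 rolls over into January of the next year.
        System.out.println(lenientParse("2015-13-02"));
        // Strict: the same input is rejected (prints null).
        System.out.println(strictParse("2015-13-02"));
    }
}
```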
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r99278746
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
---
@@ -1047,6 +1048,64 @@ case class ToDate
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r99278738
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
---
@@ -1047,6 +1048,64 @@ case class ToDate
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r99278789
--- Diff: R/pkg/inst/tests/testthat/test_sparkSQL.R ---
@@ -1177,6 +1177,9 @@ test_that("column functions", {
c17 <- cov(c, c1) +
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r99278624
--- Diff: R/pkg/R/functions.R ---
@@ -1746,7 +1750,7 @@ setMethod("toRadians",
#' to_date(df$c)
#' to_date(df$c, '
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/16138
@felixcheung thanks, made those changes :). Hopefully this will start
passing sometime :P
---
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r99283708
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
---
@@ -1047,6 +1048,64 @@ case class ToDate
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r99371741
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
---
@@ -1047,6 +1048,69 @@ case class ToDate
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/16138
@felixcheung could you give me some pointers on these R functions? I don't
quite know if I am registering them correctly and they're failing my builds.
---
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/16138
@felixcheung Just tried that; it doesn't seem to work.
Here's the strange thing: it should follow the *exact* same structure as
[unix_timestamp](https://github.com/apache/spark/blob/master
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/16138
@felixcheung yup! Thanks!
---
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r97470513
--- Diff: R/pkg/R/functions.R ---
@@ -1730,24 +1730,82 @@ setMethod("toRadians",
#' to_date
#'
-#' Converts the column int
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r97470589
--- Diff: R/pkg/R/functions.R ---
@@ -1730,24 +1730,82 @@ setMethod("toRadians",
#' to_date
#'
-#' Converts the column int
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/16138
@felixcheung Thank you for your feedback! Small request: can you tell me if
my R test case is sufficient for this? It doesn't seem like there is extensive
R testing right now for virtually any
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/16138
@cloud-fan - Reynold referred me to you for this test failure.
My two tests are failing because Hive tests *allegedly* cover something
like this.
```
SELECT to_date('2001
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r97469259
--- Diff: R/pkg/R/functions.R ---
@@ -1730,24 +1730,82 @@ setMethod("toRadians",
#' to_date
#'
-#' Converts the column int
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r97469263
--- Diff: R/pkg/R/functions.R ---
@@ -1730,24 +1730,82 @@ setMethod("toRadians",
#' to_date
#'
-#' Converts the column int
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r97469354
--- Diff: R/pkg/R/generics.R ---
@@ -1274,6 +1270,14 @@ setGeneric("unbase64", function(x) {
standardGeneric("unbase64"
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r97470784
--- Diff: R/pkg/R/functions.R ---
@@ -1730,24 +1730,82 @@ setMethod("toRadians",
#' to_date
#'
-#' Converts the column int
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r97470798
--- Diff: R/pkg/R/functions.R ---
@@ -1730,24 +1730,82 @@ setMethod("toRadians",
#' to_date
#'
-#' Converts the column int
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r97470763
--- Diff: R/pkg/R/functions.R ---
@@ -1730,24 +1730,82 @@ setMethod("toRadians",
#' to_date
#'
-#' Converts the column int
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/16138
@cloud-fan The error I see is the one in this [test
case](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/71913/testReport/org.apache.spark.sql.catalyst/ExpressionToSQLSuite
GitHub user anabranch opened a pull request:
https://github.com/apache/spark/pull/15815
[DOCS][SPARK-18365] Documentation is Switched on Sample Methods
## What changes were proposed in this pull request?
The documentation for sample was switched for the two methods that take
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/15815
Sounds good to me. I will update it shortly.
---
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/15815
@srowen Think this is probably ready.
- [ ] Updated All Languages
- [ ] Updated Ticket Description
---
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/15815
The test failure seems quite unrelated but we'll see if it happens again.
---
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/15815#discussion_r87699964
--- Diff: R/pkg/R/DataFrame.R ---
@@ -936,7 +936,9 @@ setMethod("unique",
#' Sample
#'
-#' Return a samp
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/15815#discussion_r87696862
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -1612,7 +1612,9 @@ class Dataset[T] private[sql
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/15815#discussion_r87696853
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -1612,7 +1612,9 @@ class Dataset[T] private[sql
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/15815#discussion_r88382198
--- Diff: core/src/main/scala/org/apache/spark/api/java/JavaRDD.scala ---
@@ -99,6 +99,8 @@ class JavaRDD[T](val rdd: RDD[T])(implicit val classTag
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/15815
failures also seem unrelated.
---
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/15815#discussion_r88273798
--- Diff: core/src/main/scala/org/apache/spark/api/java/JavaRDD.scala ---
@@ -99,6 +99,8 @@ class JavaRDD[T](val rdd: RDD[T])(implicit val classTag
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/16138
More details are here:
https://gist.github.com/anabranch/7a42292593976878eb14e2d86a9966d4
This is completely perplexing to me.
---
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r91403382
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
---
@@ -1047,6 +1047,57 @@ case class ToDate
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r91134199
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
---
@@ -1047,6 +1047,53 @@ case class ToDate
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r91134077
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
---
@@ -1047,6 +1047,53 @@ case class ToDate
GitHub user anabranch opened a pull request:
https://github.com/apache/spark/pull/16180
[DOCS][MINOR] Clarify Where AccumulatorV2s are Displayed
## What changes were proposed in this pull request?
This PR clarifies where accumulators will be displayed.
## How
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/16180
@srowen that should be a bit better but please let me know if it's still
unclear.
---
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r91156380
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
---
@@ -1047,6 +1047,53 @@ case class ToDate
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16180#discussion_r91865764
--- Diff: docs/programming-guide.md ---
@@ -1345,14 +1345,17 @@ therefore be efficiently supported in parallel.
They can be used to implement co
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/16180
@srowen completed.
---
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r90785863
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
@@ -2661,12 +2661,30 @@ object functions {
def unix_timestamp(s: Column
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r90785148
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
@@ -2661,12 +2661,30 @@ object functions {
def unix_timestamp(s: Column
GitHub user anabranch opened a pull request:
https://github.com/apache/spark/pull/16138
[WIP][SPARK-16609] Add to_date with format function.
## What changes were proposed in this pull request?
This pull request adds a user-facing `to_date` function that allows for a
format
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r90785106
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
@@ -2666,7 +2666,18 @@ object functions {
* @group datetime_funcs
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r90785129
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
@@ -2661,12 +2661,31 @@ object functions {
def unix_timestamp(s: Column
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r90788929
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/DateFunctionsSuite.scala ---
@@ -351,34 +351,81 @@ class DateFunctionsSuite extends QueryTest
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r90899108
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/DateFunctionsSuite.scala ---
@@ -351,34 +351,81 @@ class DateFunctionsSuite extends QueryTest
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r95071447
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/DateExpressionsSuite.scala
---
@@ -389,6 +389,20 @@ class
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r95071721
--- Diff: python/pyspark/sql/functions.py ---
@@ -143,6 +143,12 @@ def _():
'measured in radians.',
}
+_functions_2_2
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r95071450
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/DateExpressionsSuite.scala
---
@@ -389,6 +389,20 @@ class
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r95071556
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
---
@@ -1047,6 +1048,60 @@ case class ToDate
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r95071452
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/FunctionRegistry.scala
---
@@ -342,7 +342,8 @@ object FunctionRegistry
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/16138
I now see why my previous implementation did not work.
My implementation originally looked like this:
```scala
case class ParseToTimestamp(left: Expression, format
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r95214266
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
---
@@ -1047,6 +1048,60 @@ case class ToDate
GitHub user anabranch opened a pull request:
https://github.com/apache/spark/pull/16504
[SPARK-19126][Docs] Update Join Documentation Across Languages
## What changes were proposed in this pull request?
- [X] Make sure all join types are clearly mentioned
- [X] Make
GitHub user anabranch opened a pull request:
https://github.com/apache/spark/pull/16505
[SPARK-19127][DOCS] Update Rank Function Documentation
## What changes were proposed in this pull request?
- [X] Fix inconsistencies in function reference for dense rank and dense
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/16504
jenkins test
---
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16504#discussion_r95092780
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -730,8 +730,9 @@ def join(self, other, on=None, how=None):
a join expression (Column
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16505#discussion_r95092215
--- Diff: R/pkg/R/functions.R ---
@@ -3324,7 +3325,8 @@ setMethod("percent_rank",
#' The difference between rank and denseRank is that
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16504#discussion_r95092726
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -730,8 +730,9 @@ def join(self, other, on=None, how=None):
a join expression (Column
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16504#discussion_r95092148
--- Diff: R/pkg/R/DataFrame.R ---
@@ -2313,9 +2313,9 @@ setMethod("dropDuplicates",
#' @param joinExpr (Optional) The expression used
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16504#discussion_r95092135
--- Diff: R/pkg/R/DataFrame.R ---
@@ -2313,9 +2313,9 @@ setMethod("dropDuplicates",
#' @param joinExpr (Optional) The expression used
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/16138
Now that my outputs are correct (in format), there's a new problem. The
types are *still* wrong.
```
scala> /// DETAILS
scala> // Schema
scala> spar
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r91172169
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
---
@@ -1047,6 +1047,53 @@ case class ToDate
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/16138#discussion_r91173386
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
---
@@ -1047,6 +1047,53 @@ case class ToDate
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/16138
And I found the error: I shouldn't be overriding the `DataType`.
---
GitHub user anabranch opened a pull request:
https://github.com/apache/spark/pull/17695
[SPARK-20400][DOCS] Remove References to 3rd Party Vendor Tools
## What changes were proposed in this pull request?
Simple documentation change to remove explicit vendor references
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/17695
This should be on hold until a JIRA resolution; I'd like to hear what
others say.
---
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/17695
Thanks for the info @srowen - this should be better now.
---