Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/17358
Jenkins, retest this please.
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/17308
@tcondie and @zsxwing, any comments on this patch? I would be happy if this
bug is fixed before 2.2 is released.
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/17358
Actually, thank you for your comments. I missed one file in the swarm of
errors from the generated files.
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/17358
@HyukjinKwon, fixing those errors requires fixing the unidoc plugin's Java
code generation. IMO, that has a broader scope. What do you think?
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/17357
@markhamstra, thanks for pointing that out. Do you think the new title is okay?
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/17357
Jenkins, test this please.
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/17358#discussion_r106875992
--- Diff:
common/network-common/src/main/java/org/apache/spark/network/crypto/ClientChallenge.java
---
@@ -28,7 +28,7 @@
/**
* The client
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/17358
Hi @HyukjinKwon! Thanks for looking at this. I did one pass through all
those errors; they seem to be inside the generated code. Please notice
`/target` in the path.
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/17358
[SPARK-20027][DOCS] Compilation fix in java docs.
## What changes were proposed in this pull request?
During `build/sbt publish-local`, the build breaks due to javadoc errors. This
patch
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/17357
[SPARK-20025][CORE] Fix spark's driver failover mechanism.
## What changes were proposed in this pull request?
In a bare-metal system with no DNS setup, Spark may be configured
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/17308
Please take a look, @tcondie @zsxwing!
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/17308#discussion_r106344532
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaWriteTask.scala
---
@@ -32,7 +31,7 @@ import
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/17308
[SPARK-19968][SS] Use a cached instance of `KafkaProducer` instead of
creating one every batch.
## What changes were proposed in this pull request?
Changes include a new API for doing
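The description above is cut off; as a hedged sketch of the caching idea named in the title (not the actual patch; the cache shape, key type, and byte-array serialization are assumptions), a producer can be created once per distinct configuration and reused across batches:
```scala
import java.{util => ju}
import org.apache.kafka.clients.producer.KafkaProducer

object CachedKafkaProducer {
  // One producer per distinct configuration, shared across batches, so a
  // streaming sink does not pay connection setup on every trigger.
  private val cache =
    new ju.concurrent.ConcurrentHashMap[ju.Properties, KafkaProducer[Array[Byte], Array[Byte]]]()

  def getOrCreate(params: ju.Properties): KafkaProducer[Array[Byte], Array[Byte]] =
    cache.computeIfAbsent(params,
      new ju.function.Function[ju.Properties, KafkaProducer[Array[Byte], Array[Byte]]] {
        override def apply(p: ju.Properties) = new KafkaProducer[Array[Byte], Array[Byte]](p)
      })
}
```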
Github user ScrapCodes closed the pull request at:
https://github.com/apache/spark/pull/15962
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/15962
You are right, @marmbrus; I am sorry for not actually doing the proper
comparison.
Only after seeing the code did I realize that we use `consumer.poll(0)` not
for pulling records
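To make that point concrete, a minimal standalone sketch (broker, group, and topic are placeholders) of the pattern in question: `poll(0)` returns immediately and is used to force partition assignment and metadata updates rather than to wait for records.
```scala
import java.util.Properties
import scala.collection.JavaConverters._
import org.apache.kafka.clients.consumer.KafkaConsumer

val props = new Properties()
props.put("bootstrap.servers", "localhost:9092") // placeholder broker
props.put("group.id", "example-group")           // placeholder group
props.put("key.deserializer",
  "org.apache.kafka.common.serialization.ByteArrayDeserializer")
props.put("value.deserializer",
  "org.apache.kafka.common.serialization.ByteArrayDeserializer")

val consumer = new KafkaConsumer[Array[Byte], Array[Byte]](props)
consumer.subscribe(Seq("events").asJava)
consumer.poll(0)                  // returns at once: triggers assignment
val parts = consumer.assignment() // populated after the first poll
```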
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/15962
[SPARK-18526][SQL][KAFKA] Allow users to configure max.poll.records.
## What changes were proposed in this pull request?
In Kafka source for structured streaming the value
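A hypothetical usage of the knob this proposes, assuming an active SparkSession `spark` and that the setting is surfaced like other `kafka.`-prefixed consumer options on the source (the truncated description does not confirm the exact spelling):
```scala
val stream = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092") // placeholder broker
  .option("subscribe", "events")                       // placeholder topic
  .option("kafka.max.poll.records", "500")             // the proposed knob
  .load()
```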
Github user ScrapCodes closed the pull request at:
https://github.com/apache/spark/pull/15262
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/15262
I was going to close this for now.
@srowen, those deps should not have changed; I have not added anything to
the compile scope. I have not analyzed the workings of the deps generation
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/15262
This is just a testing artifact at the moment.
At some point, I had these tests failing with the Spark nightly build.
Something somewhere changed (which I did not pursue; what
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/14087#discussion_r82371339
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/FileStreamSourceSuite.scala
---
@@ -378,6 +378,24 @@ class FileStreamSourceSuite
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/15258
Jenkins, retest this please
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/15258
Retest this please.
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/14087#discussion_r81689922
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamReader.scala
---
@@ -311,6 +311,37 @@ final class DataStreamReader
private
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/14087#discussion_r81689547
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamReader.scala
---
@@ -21,13 +21,13 @@ import scala.collection.JavaConverters
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/14151#discussion_r81292587
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/text/TextFileFormat.scala
---
@@ -99,8 +100,22 @@ class TextFileFormat
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/15258
As we suspected, the _SUCCESS file does not appear on copying a file into
HDFS, so it cannot be a trusted way to know that the input directory is not
the partial output of a failed job. The problem is that Spark
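For context: the `_SUCCESS` marker is written by Hadoop output committers, not by HDFS itself, so a plain copy never creates one. A small sketch (placeholder path) of the check that therefore cannot be trusted for copied-in input:
```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path

val dir = new Path("hdfs:///data/input")         // placeholder directory
val fs  = dir.getFileSystem(new Configuration())
// Present only when a committer-backed job finished here; absent after a
// plain `hdfs dfs -put`, so absence does not imply a failed or partial job.
val hasMarker = fs.exists(new Path(dir, "_SUCCESS"))
```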
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/15258
@frreiss That is correct; I will look into it. Moving the PR to WIP, as at
the least I would like to document these situations.
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/15262
[SPARK-17690][STREAMING][SQL] Add mini-dfs cluster based tests for
FileStreamSourceSuite.
## What changes were proposed in this pull request?
Added a few HDFS-based tests for some
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/15258
[SPARK-17689][SQL][STREAMING][WIP] Added an excludeFiles option for the file
source.
## What changes were proposed in this pull request?
Added excludeFiles, especially (but not limited
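A hypothetical sketch of how the proposed option might be used once finished; only the option's name and rough intent are taken from the title, and everything else (format, glob semantics, paths, and the session `spark`) is assumed:
```scala
val df = spark.readStream
  .format("text")
  .option("excludeFiles", "*.tmp") // proposed (WIP) option: skip matching
  .load("/data/landing")           // paths, e.g. files still being written
```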
Github user ScrapCodes closed the pull request at:
https://github.com/apache/spark/pull/13943
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/14151
Hey @rxin, do you have further comments?
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/14553
I have tested the PR with my MQTT connector. It looks like I do not have
sufficient privileges to command Jenkins.
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/14151
Thanks, @gatorsmile. I was actually wondering where I can document this
option.
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/14553
retest this please
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/14151
@rxin Ping!
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/14151#discussion_r74903349
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -533,6 +533,12 @@ object SQLConf {
.timeConf
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/14151#discussion_r74902549
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/text/TextSuite.scala
---
@@ -39,6 +39,11 @@ class TextSuite extends
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/14087
@tdas Ping!
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/14151
@rxin Ping!
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/14151
@rxin Do you think it looks okay now?
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/14087
@marmbrus Do you think this is useful?
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/14151
I have a question: should we keep a column with the filenames? In the
current approach we ignore the key column.
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/14151#discussion_r70564698
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/HadoopFileWholeTextReader.scala
---
@@ -0,0 +1,53
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/14151
Actually, what you said sounds like a nice idea. I was considering whether
it is possible to propagate this as an option to all the other formats, like
CSV and JSON, too.
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/14151#discussion_r70564514
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/HadoopFileWholeTextReader.scala
---
@@ -0,0 +1,53
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/14151
[SPARK-16496][SQL] Add wholetext as data source for SQL.
## What changes were proposed in this pull request?
In multiple text analysis problems, it is often not desirable for the rows
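A minimal sketch of the usage the proposal implies, assuming an active SparkSession `spark` and that the switch is exposed on the text source under the name in the title:
```scala
// Each input file becomes a single row instead of one row per line, so a
// document can be analyzed as a whole (placeholder path).
val docs = spark.read
  .option("wholetext", "true")
  .text("/data/corpus")
```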
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/14087#discussion_r70045502
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamReader.scala
---
@@ -281,6 +281,31 @@ final class DataStreamReader
private
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/14087#discussion_r70044030
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/FileStreamSourceSuite.scala
---
@@ -331,6 +331,24 @@ class FileStreamSourceSuite
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/14087#discussion_r70043723
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamReader.scala
---
@@ -281,6 +281,31 @@ final class DataStreamReader
private
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/14087#discussion_r70043361
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamReader.scala
---
@@ -281,6 +281,31 @@ final class DataStreamReader
private
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/14087#discussion_r70024634
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamReader.scala
---
@@ -281,6 +281,31 @@ final class DataStreamReader
private
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/14087#discussion_r70024651
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamReader.scala
---
@@ -281,6 +281,31 @@ final class DataStreamReader
private
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/14087
[SPARK-16411][SQL][STREAMING] Add textFile to Structured Streaming.
## What changes were proposed in this pull request?
Adds the textFile API which exists in DataFrameReader and serves
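Assuming the API lands with the same shape as `DataFrameReader.textFile`, usage would look roughly like this (placeholder path, active SparkSession `spark`; a sketch, not the merged code):
```scala
import org.apache.spark.sql.Dataset

// A streaming Dataset[String]: one element per line of each new file that
// appears in the directory.
val lines: Dataset[String] = spark.readStream.textFile("/data/incoming")
```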
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/13978
Looks good!
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/13945#discussion_r68922926
--- Diff: docs/structured-streaming-programming-guide.md ---
@@ -0,0 +1,1156 @@
+---
+layout: global
+displayTitle: Structured Streaming
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/13945#discussion_r68922852
--- Diff: docs/structured-streaming-programming-guide.md ---
@@ -0,0 +1,1156 @@
+---
+layout: global
+displayTitle: Structured Streaming
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/13945#discussion_r68921553
--- Diff: docs/structured-streaming-programming-guide.md ---
@@ -0,0 +1,1156 @@
+---
+layout: global
+displayTitle: Structured Streaming
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/13945#discussion_r68920269
--- Diff: docs/structured-streaming-programming-guide.md ---
@@ -0,0 +1,1156 @@
+---
+layout: global
+displayTitle: Structured Streaming
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/13839
Merged in master.
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/13943
[SPARK-16251][CORE] Fix Flaky test - LocalCheckpointSuite's - missing
checkpoint block…
## What changes were proposed in this pull request?
On running LocalCheckpointSuite repeatedly
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/13839
retest this please
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/13839
Retest this please.
Failed tests (may be flaky):
```
- missing checkpoint block fails with informative message *** FAILED ***
(41 milliseconds)
[info] Collect should have
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/13839
@rxin and @shivaram Please take a look.
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/13839#discussion_r68487475
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -251,7 +253,11 @@ class Dataset[T] private[sql](
case seq: Seq
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/13839#discussion_r68479735
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -282,8 +284,18 @@ def show(self, n=20, truncate=True):
| 2|Alice|
| 5| Bob
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/13839#discussion_r68356736
--- Diff: R/pkg/R/DataFrame.R ---
@@ -177,8 +177,8 @@ setMethod("isLocal",
#' @param x A SparkDataFrame
#' @param numRows The numb
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/13839#discussion_r68184305
--- Diff: R/pkg/R/DataFrame.R ---
@@ -194,7 +195,13 @@ setMethod("isLocal",
setMethod("showDF",
signature
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/13839#discussion_r68179336
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -267,11 +267,13 @@ def isStreaming(self):
return self._jdf.isStreaming
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/13839
retest this please
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/13839#discussion_r68036752
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -251,7 +253,11 @@ class Dataset[T] private[sql](
case seq: Seq
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/13839#discussion_r68034224
--- Diff: .gitignore ---
@@ -77,3 +77,4 @@ spark-warehouse/
# For R session data
.RData
.RHistory
+.Rhistory
--- End diff
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/13839
[SPARK-16128][SQL] Add truncateTo parameter to Dataset.show function.
## What changes were proposed in this pull request?
Allowing truncation to a specific number of characters
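A hypothetical call shape for the proposed parameter (the exact signature is not visible in the truncated description; `df` is any DataFrame):
```scala
// Show 20 rows, truncating each cell to 10 characters instead of the
// fixed default width.
df.show(20, 10)
```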
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/13661
@srowen and @zsxwing Please take a look!
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/13661
[SPARK-15942][REPL] Unblock `:reset` command in REPL.
## What changes were proposed in this pull
(Paste from JIRA issue.)
As a follow-up to SPARK-15697, I have the following semantics
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/13575#discussion_r66433407
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -246,7 +247,12 @@ case class DataSource
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/13574
@zsxwing Please take a look!
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/13437
I have decided to leave `reset` blocked, as we can discuss the semantics of
having it in another issue.
@zsxwing Please take a look!
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/13574
[SPARK-15841] REPLSuite has incorrect env set for a couple of tests.
## What changes were proposed in this pull request?
Description from JIRA.
In ReplSuite, for a test that can
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/13564
Thanks for doing this. It will definitely save time, and some CPU
cycles (energy), for all first-time builds.
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/13437
@hvanhovell No; fixing this in 2.10 would require more work and is not worth
the effort in terms of maintenance.
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/13437
Reinitializing the SparkSession is pretty trivial. The consequence would be
that on `reset` all the executors will also be reset, that is, a complete
wipe of all the data. I think that is expected
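For illustration, a sketch of the trivial re-creation step mentioned here (the REPL wiring around it is what the PR actually changes):
```scala
import org.apache.spark.sql.SparkSession

// After :reset wipes the REPL state, a fresh session can be obtained on
// demand; getOrCreate builds a new one if none is active.
val spark = SparkSession.builder().appName("repl").getOrCreate()
```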
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/13437
For `reset` that holds true, but the others were simply not functional in
our customized version of the REPL. I will see if something can be done
about `reset` too.
Would you like
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/13437
[SPARK-15697] [REPL] Unblock some of the useful repl commands.
## What changes were proposed in this pull request?
Unblock some of the useful REPL commands, like "impl
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/12358#issuecomment-218112396
Please remove all the unrelated code changes.
Github user ScrapCodes closed the pull request at:
https://github.com/apache/spark/pull/5
GitHub user ScrapCodes reopened a pull request:
https://github.com/apache/spark/pull/5
[SPARK-13231] Make count failed values a user facing API.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/ScrapCodes/spark 13231
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/12616#issuecomment-214622436
This is also supported by Hadoop; for example, we can pass globs in
sc.textFile("/dir/*/*"). I was wondering if it will be implemented again.
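For reference, the Hadoop-style glob expansion mentioned here, given a SparkContext `sc` (placeholder paths):
```scala
// Globs expand at input-split time, so one call can cover a whole
// directory tree of the matching shape, e.g. /dir/2016-01/part-00000.
val lines = sc.textFile("/dir/*/*")
```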
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/12358#issuecomment-214176468
Jenkins, test this please.
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/6848#issuecomment-212859860
Jenkins, retest this please.
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/6848#issuecomment-211739456
IMO, this is useful in that the Hadoop configuration need not be global
state. We can have a default set of configurations that we use everywhere
as a default
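A sketch of the non-global pattern being argued for, given a SparkContext `sc`: start from shared defaults and override per use instead of mutating one global configuration (the overridden key is only an example):
```scala
import org.apache.hadoop.conf.Configuration

val defaults = sc.hadoopConfiguration       // shared, process-wide defaults
val perUse   = new Configuration(defaults)  // independent copy
perUse.set("io.file.buffer.size", "131072") // affects only this copy
```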
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/12433#discussion_r60173470
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -260,7 +260,12 @@ private[spark] class BlockManager(
def
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/5#issuecomment-210324204
@andrewor14 As far as I knew, it worked fine. There is a check in
Task.collectAccumulatorUpdates; do you mean that check is not sufficient?
Ah, looking at the JIRA, I
Github user ScrapCodes closed the pull request at:
https://github.com/apache/spark/pull/5
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/11178#discussion_r55652534
--- Diff: project/SparkBuild.scala ---
@@ -384,18 +384,19 @@ object OldDeps {
lazy val project = Project("oldDeps", file("
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/5#issuecomment-193112524
@andrewor14 Can you take a look?
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/11410#discussion_r54555457
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/encoders/OuterScopes.scala
---
@@ -39,4 +41,35 @@ object OuterScopes {
def
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/11410#discussion_r54401926
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/encoders/OuterScopes.scala
---
@@ -39,4 +41,35 @@ object OuterScopes {
def
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/11178#issuecomment-185137373
Not sure what is causing this:
```
git fetch --tags --progress https://github.com/apache/spark.git
+refs/pull/11178/*:refs/remotes/origin/pr/11178
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/11178#issuecomment-185015502
Looks good! I have taken a quick look but did not actually run it. Hoping
the tests will ensure that.
Github user ScrapCodes commented on a diff in the pull request:
https://github.com/apache/spark/pull/11178#discussion_r53119771
--- Diff: dev/run-tests.py ---
@@ -336,7 +336,6 @@ def build_spark_sbt(hadoop_version):
# Enable all of the profiles for the build