GitHub user jodersky opened a pull request:
https://github.com/apache/spark/pull/9708
[MINOR] [Build] Ignore ensime cache
Using ENSIME, I often have `.ensime_cache` polluting my source tree. This
PR simply adds the cache directory to `.gitignore`
You can merge this pull request
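For context, the change is presumably a one-line addition to the repository's `.gitignore` along these lines (the exact entry is an assumption, not quoted from the diff):
```
# ENSIME's per-project cache directory
.ensime_cache/
```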
Github user jodersky commented on a diff in the pull request:
https://github.com/apache/spark/pull/9126#discussion_r42328222
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveContext.scala ---
@@ -89,9 +89,9 @@ private[hive] case class CurrentDatabase(ctx
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/9110#issuecomment-148141041
It works fine on my machine; however, someone else should still test
it. (You can just replace `isSnapshot` with `true` in case you don't want to
change
Github user jodersky commented on a diff in the pull request:
https://github.com/apache/spark/pull/9111#discussion_r42028440
--- Diff: dev/run-tests.py ---
@@ -176,8 +176,9 @@ def determine_java_version(java_exe):
# find raw version string, eg 'java version "1.8
GitHub user jodersky opened a pull request:
https://github.com/apache/spark/pull/9110
[SPARK-11092] [Docs] Add source links to scaladoc generation
Modify the SBT build script to include GitHub source links for generated
Scaladocs, on releases only (no snapshots).
You can merge
GitHub user jodersky opened a pull request:
https://github.com/apache/spark/pull/9111
[SPARK-11094] Strip extra strings from Java version in test runner
Removes any extra strings from the Java version, fixing subsequent integer
parsing.
This is required since some OpenJDK
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/9111#issuecomment-148156305
Ok, I updated it to use a regex
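The real fix is Python code in `dev/run-tests.py`; as a hedged illustration of the approach in Scala (the sample version string is made up to mimic an OpenJDK build with a vendor suffix):
```scala
// A regex keeps only the numeric version prefix, so suffixes such as
// "-internal" on some OpenJDK builds no longer break integer parsing.
val VersionPattern = """(\d+)\.(\d+)\.(\d+).*""".r
"1.8.0_66-internal" match {
  case VersionPattern(major, minor, patch) =>
    println(s"major=$major minor=$minor patch=$patch") // major=1 minor=8 patch=0
}
```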
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/9128#issuecomment-148438185
In general I agree with you; however, since in this case some warnings (such
as deprecations) are not treated as fatal, the user gets a huge number of
messages
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/9128#issuecomment-148557456
Yeah that's it ;)
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/9126#issuecomment-148570048
It's strange that the Python and R tests fail; I didn't touch any related
code.
Please note that I pushed a fix during the first build; maybe that somehow
corrupted
Github user jodersky commented on a diff in the pull request:
https://github.com/apache/spark/pull/9126#discussion_r42431709
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveContext.scala ---
@@ -89,9 +89,9 @@ private[hive] case class CurrentDatabase(ctx
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/9126#issuecomment-148246451
Update: scrolling through the error logs, I found that the `@transient`
errors were also printed as warnings. Going through the sbt build script, I
noticed that warnings
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/9126#issuecomment-148236328
That makes sense; it would also explain why the compiler didn't shout in
2.10. Could it be related to this
[bug](https://issues.scala-lang.org/browse/SI-8813
GitHub user jodersky opened a pull request:
https://github.com/apache/spark/pull/9126
[SPARK-0] [Build] Remove transient annotation for parameters.
`@transient` annotations on class parameters (not case class parameters or
vals) cause compilation errors
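To illustrate the warning in question (a made-up example, not Spark code):
```scala
// In Scala 2.11, an annotation on a plain constructor parameter (not a val,
// var, or case class field) has no valid target and triggers a
// "no valid targets for annotation" warning; with -Xfatal-warnings this
// aborts the build.
class Holder(@transient conf: Map[String, String])          // warns in 2.11

// Making the parameter a field gives the annotation a real target:
class FieldHolder(@transient val conf: Map[String, String]) // compiles cleanly
```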
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/9126#issuecomment-148236611
Ok, I'll update the PR; I'll also check with the Scala compiler people.
GitHub user jodersky opened a pull request:
https://github.com/apache/spark/pull/9128
[SPARK-11122] Add [warn] tag to fatal warnings
Shows that an error is actually due to a fatal warning.
You can merge this pull request into a Git repository by running:
$ git pull https
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/9110#issuecomment-148135705
The € is not a typo; I don't know why it was chosen, but `€{FILE_PATH}`
gets replaced by scaladoc with the corresponding source file.
Btw, I factored out
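As a sketch of the mechanism (assumed settings, not the exact lines from the PR), scaladoc's `-doc-source-url` option takes `€{FILE_PATH}` as a substitution variable:
```scala
// sbt (0.13-era) fragment: point each generated page at the matching GitHub
// source. Scaladoc substitutes €{FILE_PATH} with the documented file's path.
scalacOptions in (Compile, doc) ++= Seq(
  "-doc-source-url",
  s"https://github.com/apache/spark/tree/v${version.value}€{FILE_PATH}.scala"
)
```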
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/9126#issuecomment-148240049
Hmm, actually I saw that some errors were due to a "transient parameter
forwarding" of an existing field in an inherited class. For example
`HiveCo
Github user jodersky commented on a diff in the pull request:
https://github.com/apache/spark/pull/9126#discussion_r42157953
--- Diff: core/src/main/scala/org/apache/spark/rpc/netty/NettyRpcEnv.scala
---
@@ -305,7 +305,7 @@ private[netty] class NettyRpcEnvFactory extends
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/9126#issuecomment-148473495
Please hold
Github user jodersky commented on a diff in the pull request:
https://github.com/apache/spark/pull/9126#discussion_r42157349
--- Diff: core/src/main/scala/org/apache/spark/rpc/netty/NettyRpcEnv.scala
---
@@ -305,7 +305,7 @@ private[netty] class NettyRpcEnvFactory extends
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/9126#issuecomment-148476176
You were right about NettyRpcEndpointRef; `conf` is actually not defined as a
field in the base class.
Sorry I missed this; it is, however, the case for the listener
Github user jodersky commented on a diff in the pull request:
https://github.com/apache/spark/pull/9126#discussion_r42158956
--- Diff: core/src/main/scala/org/apache/spark/rpc/netty/NettyRpcEnv.scala
---
@@ -305,7 +305,7 @@ private[netty] class NettyRpcEnvFactory extends
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/9708#issuecomment-157841344
Yeah, it was a matter of minutes ;)
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/9708#issuecomment-157838855
Hey, I accidentally created this PR from my master branch and just pushed.
Please be careful when merging this.
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/9824#issuecomment-163148987
+1 for handling the errors. How do you usually deal with overlapping pull
requests? Should I just copy your error-handling code manually and mention it?
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/9824#issuecomment-163349739
@dragos, should be good now
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/9925#issuecomment-162618258
@rxin, just wanted to check: is this PR acceptable?
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/9824#issuecomment-162619174
This PR is pretty low priority but still a neat thing to have when
developing.
@vanzin, is this PR acceptable?
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/9925#issuecomment-163708919
I agree that it's not pretty; however, the only other fix I see is to remove
"$" for columns instead
Github user jodersky commented on a diff in the pull request:
https://github.com/apache/spark/pull/10231#discussion_r47274795
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/tree/impl/RandomForest.scala ---
@@ -842,60 +842,63 @@ private[ml] object RandomForest extends Logging
Github user jodersky commented on a diff in the pull request:
https://github.com/apache/spark/pull/10247#discussion_r47417301
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/Row.scala ---
@@ -325,6 +341,14 @@ trait Row extends Serializable {
def getAs[T](i: Int
Github user jodersky commented on a diff in the pull request:
https://github.com/apache/spark/pull/10247#discussion_r47417343
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/rows.scala
---
@@ -211,6 +211,18 @@ class GenericRowWithSchema(values
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/10337#issuecomment-165266841
Unfortunately I now get another, similar error message in spark-shell
```
15/12/16 14:00:48 ERROR NettyRpcEnv: Error downloading stream
/classes/org/apache
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/10337#issuecomment-165284577
Sorry to bother you again; there is one more error message that appears.
This can be fixed by changing
`repl/src/main/scala/org/apache/spark/repl
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/10337#issuecomment-165292856
I'm running the code snippet from the original JIRA:
```scala
import org.apache.spark.ml.feature.VectorAssembler
val df = sc.parallelize(List((1,2), (3,4
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/10247#issuecomment-164596063
Great! Unfortunately I can't help with the tests though.
Check out this
[page](https://cwiki.apache.org/confluence/display/SPARK/Committers#Committers
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/10287#issuecomment-164597600
duplicate of #10286
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/10337#issuecomment-165570393
Solved for me too now
Github user jodersky closed the pull request at:
https://github.com/apache/spark/pull/9925
Github user jodersky commented on a diff in the pull request:
https://github.com/apache/spark/pull/9925#discussion_r45775846
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/dsl/package.scala ---
@@ -83,7 +83,10 @@ package object dsl {
def >= (ot
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/9925#issuecomment-159367936
It has the same issue as !==
Github user jodersky commented on a diff in the pull request:
https://github.com/apache/spark/pull/9925#discussion_r45783799
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/dsl/package.scala ---
@@ -83,7 +83,10 @@ package object dsl {
def >= (ot
GitHub user jodersky opened a pull request:
https://github.com/apache/spark/pull/9824
[SPARK-11832] [Core] Process arguments in spark-shell for Scala 2.11
Process arguments passed to the spark-shell. Fixes running the spark-shell
from within a build environment.
You can merge
Github user jodersky commented on a diff in the pull request:
https://github.com/apache/spark/pull/9824#discussion_r45379122
--- Diff: repl/scala-2.11/src/main/scala/org/apache/spark/repl/Main.scala
---
@@ -43,10 +39,20 @@ object Main extends Logging {
def main(args
Github user jodersky commented on a diff in the pull request:
https://github.com/apache/spark/pull/9925#discussion_r45688714
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/dsl/package.scala ---
@@ -83,7 +83,10 @@ package object dsl {
def >= (ot
Github user jodersky commented on a diff in the pull request:
https://github.com/apache/spark/pull/9925#discussion_r45690718
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/dsl/package.scala ---
@@ -83,7 +83,10 @@ package object dsl {
def >= (ot
GitHub user jodersky opened a pull request:
https://github.com/apache/spark/pull/9925
[SPARK-7286] [SQL] Deprecate !== in favour of =!=
Fixes subtle issues related to operator precedence, as discussed in
SPARK-7286.
I'm not entirely sure this is the right thing to do
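The precedence pitfall can be seen with a small (hypothetical) Column expression:
```scala
import org.apache.spark.sql.functions.col

// "!==" ends in '=' without starting with '=', so the Scala parser classifies
// it as an assignment operator with the lowest possible precedence:
val surprising = col("a") !== col("b") || col("c")
// parses as: col("a") !== (col("b") || col("c"))

// "=!=" starts with '=', so it keeps ordinary comparison precedence:
val expected = col("a") =!= col("b") || col("c")
// parses as: (col("a") =!= col("b")) || col("c")
```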
GitHub user jodersky opened a pull request:
https://github.com/apache/spark/pull/10711
[SPARK-12761] [CORE] Remove duplicated code
Removes some duplicated code that was reintroduced during a merge.
You can merge this pull request into a Git repository by running:
$ git pull
Github user jodersky commented on the issue:
https://github.com/apache/spark/pull/13061
Last message was a minute too late, so LGTM then.
jenkins, test this please
Github user jodersky commented on the issue:
https://github.com/apache/spark/pull/13061
In the short term we should definitely make the REPL welcome message
consistent. Could you consolidate it with the conflict resolution? I don't
think many changes are required. However, I'm
Github user jodersky commented on the issue:
https://github.com/apache/spark/pull/13061
jenkins, ok to test
Github user jodersky commented on a diff in the pull request:
https://github.com/apache/spark/pull/13061#discussion_r64942108
--- Diff: project/SparkBuild.scala ---
@@ -436,7 +439,19 @@ object SparkBuild extends PomBuild {
else x.settings(Seq[Setting
Github user jodersky commented on a diff in the pull request:
https://github.com/apache/spark/pull/13061#discussion_r65464743
--- Diff: core/src/main/scala/org/apache/spark/package.scala ---
@@ -41,7 +41,53 @@ package org.apache
* level interfaces. These are subject
Github user jodersky commented on a diff in the pull request:
https://github.com/apache/spark/pull/13061#discussion_r65464868
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -103,6 +104,9 @@ object SparkSubmit
Github user jodersky commented on the issue:
https://github.com/apache/spark/pull/13061
Sorry to bother you again after such a long delay; however, I just found
another minor style bug
Github user jodersky commented on a diff in the pull request:
https://github.com/apache/spark/pull/13061#discussion_r65465157
--- Diff: core/src/main/scala/org/apache/spark/package.scala ---
@@ -41,7 +41,53 @@ package org.apache
* level interfaces. These are subject
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/13061#issuecomment-222380888
What I understand from the conversation is that maven assumes the
target/extra-resources directory:
> Unfortunately I'm not an maven expert but I wonder if we
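A minimal sketch of what an equivalent sbt-side generator could look like (names and file contents are assumptions, not the merged code):
```scala
// Generate a properties file into managed resources, mirroring the file that
// the maven build writes under target/extra-resources.
resourceGenerators in Compile += Def.task {
  val out = (resourceManaged in Compile).value / "spark-version-info.properties"
  IO.write(out, s"version=${version.value}\n")
  Seq(out)
}.taskValue
```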
GitHub user jodersky opened a pull request:
https://github.com/apache/spark/pull/10749
[SPARK-12816][SQL] De-alias type when generating schemas
Call `dealias` on local types to fix schema generation for abstract type
members, such as
```scala
type KeyValue = (Int
GitHub user jodersky opened a pull request:
https://github.com/apache/spark/pull/10760
[SPARK-10570][Core] Add version info to json api
Add a new API endpoint `/api/v1/version` to retrieve various version info.
This PR only adds support for finding the current Spark version
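Spark's status API is built on JAX-RS, so the new endpoint presumably looks roughly like this sketch (class and member names are assumptions):
```scala
import javax.ws.rs.{GET, Path, Produces}
import javax.ws.rs.core.MediaType

// Serves GET /api/v1/version as a small JSON object, e.g. {"spark": "1.6.0"}.
private[spark] class VersionInfo(val spark: String)

@Path("/v1/version")
@Produces(Array(MediaType.APPLICATION_JSON))
private[spark] class VersionResource {
  @GET
  def version: VersionInfo = new VersionInfo(org.apache.spark.SPARK_VERSION)
}
```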
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/10760#issuecomment-171832994
cc @JoshRosen (you mentioned this in the JIRA comment)
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/10749#issuecomment-171761227
Replaced it with `normalize`. This method is deprecated in Scala 2.11 but
is also available in 2.10. Please retest.
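The difference is easy to see with runtime reflection (the alias is a made-up completion of the PR's example; `dealias` is the 2.11 name, `normalize` the 2.10-compatible one):
```scala
import scala.reflect.runtime.universe._

type KeyValue = (Int, String)  // hypothetical alias, as in the PR description
val tpe = typeOf[KeyValue]
println(tpe)           // KeyValue: the alias symbol, which breaks schema derivation
println(tpe.dealias)   // (Int, String) in Scala 2.11
println(tpe.normalize) // same result; deprecated in 2.11 but also present in 2.10
```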
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/10749#issuecomment-171751294
Oops, `dealias` does not exist in Scala 2.10. I'll see if I can find an
alternative
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/11085#issuecomment-180604528
I rebased on master and ran local tests successfully
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/11085#issuecomment-180604643
retest this please
Github user jodersky commented on a diff in the pull request:
https://github.com/apache/spark/pull/11098#discussion_r52355246
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -516,7 +517,7 @@ private[spark] object Utils extends Logging
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/11098#issuecomment-180843740
I can look for more uses of files. This PR was mainly to fix the
deprecation warning caused by the use of the process API.
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/11085#issuecomment-180845685
Someone else also reported that problem. Check out the discussion on the
JIRA https://issues.apache.org/jira/browse/SPARK-13171
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/11098#issuecomment-181114099
Ack. Do you want me to reword my commits too?
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/11098#issuecomment-181118232
Also, I did a quick search for "process", "lines", and "streams" in the
build output, with no results. It seems that the depreca
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/11098#issuecomment-181117691
SPARK-13176 actually raises two issues:
1. The process API changes and adds deprecations between Scala 2.10 and
2.11, the latter not being available
Github user jodersky commented on a diff in the pull request:
https://github.com/apache/spark/pull/3#discussion_r52230832
--- Diff:
examples/src/main/scala/org/apache/spark/examples/streaming/ActorWordCount.scala
---
@@ -63,11 +63,11 @@ class FeederActor extends Actor
Github user jodersky commented on a diff in the pull request:
https://github.com/apache/spark/pull/3#discussion_r52230982
--- Diff:
examples/src/main/scala/org/apache/spark/examples/streaming/ActorWordCount.scala
---
@@ -63,11 +63,11 @@ class FeederActor extends Actor
Github user jodersky commented on a diff in the pull request:
https://github.com/apache/spark/pull/3#discussion_r52214076
--- Diff:
examples/src/main/scala/org/apache/spark/examples/streaming/ActorWordCount.scala
---
@@ -63,11 +63,11 @@ class FeederActor extends Actor
Github user jodersky commented on a diff in the pull request:
https://github.com/apache/spark/pull/3#discussion_r52214655
--- Diff:
examples/src/main/scala/org/apache/spark/examples/streaming/ActorWordCount.scala
---
@@ -39,7 +39,7 @@ case class UnsubscribeReceiver
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/1#issuecomment-181538552
Agreed, I also just compared it with the SynchronizedQueue sources, and the
behaviour should be identical. Looks good
Github user jodersky commented on a diff in the pull request:
https://github.com/apache/spark/pull/3#discussion_r52214846
--- Diff:
examples/src/main/scala/org/apache/spark/examples/streaming/ActorWordCount.scala
---
@@ -63,11 +63,11 @@ class FeederActor extends Actor
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/10984#issuecomment-181524146
Looks good to me.
Github user jodersky commented on a diff in the pull request:
https://github.com/apache/spark/pull/3#discussion_r52213895
--- Diff:
examples/src/main/scala/org/apache/spark/examples/streaming/ActorWordCount.scala
---
@@ -18,7 +18,7 @@
// scalastyle:off println
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/11085#issuecomment-180225221
Seems to have been caused by something unrelated; retest please
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/11085#issuecomment-180228521
thanks, I didn't know that :)
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/11089#issuecomment-180173250
jenkins, test this please
GitHub user jodersky opened a pull request:
https://github.com/apache/spark/pull/11085
[SPARK-13171] [Core] Replace future calls with Future
Trivial search-and-replace to eliminate deprecation warnings in Scala 2.11.
Also works with 2.10
You can merge this pull request
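The deprecation in question (illustrative snippet, not one of the actual call sites):
```scala
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

// scala.concurrent.future { ... } is deprecated in Scala 2.11;
// Future { ... } (i.e. Future.apply) is the drop-in replacement and
// already exists in 2.10.
val answer: Future[Int] = Future { 40 + 2 }
```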
GitHub user jodersky opened a pull request:
https://github.com/apache/spark/pull/11089
Replace use of Pairs with Tuple2s
Another trivial deprecation fix for Scala 2.11
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/jodersky
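For illustration (not a quote from the diff), the deprecated alias and its replacements:
```scala
// scala.Pair is a deprecated alias for Tuple2 in Scala 2.11:
//   val p = Pair(1, "one")     // deprecation warning
// The replacements compile warning-free on 2.10 and 2.11 alike:
val p1 = Tuple2(1, "one")
val p2 = (1, "one")
```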
GitHub user jodersky opened a pull request:
https://github.com/apache/spark/pull/11098
[SPARK-13176] [Core] Use java.nio.Files where possible
Since Spark requires at least JRE 1.7, it is safe to use built-in
java.nio.Files.
You can merge this pull request into a Git repository
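A hedged sketch of the kind of call site this PR targets (the path is made up):
```scala
import java.nio.charset.StandardCharsets
import java.nio.file.{Files, Paths}

// Instead of deprecated Scala process/IO helpers, read the file with the
// JDK 7+ java.nio.file API:
val bytes = Files.readAllBytes(Paths.get("/tmp/example.txt"))
val text  = new String(bytes, StandardCharsets.UTF_8)
```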
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/11098#issuecomment-180620539
jenkins, test this please
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/11085#issuecomment-180556862
Yeah, 3 failures in a row; I'll take a look at it
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/11085#issuecomment-180173846
retest please
Github user jodersky commented on a diff in the pull request:
https://github.com/apache/spark/pull/3#discussion_r52237857
--- Diff:
examples/src/main/scala/org/apache/spark/examples/streaming/ActorWordCount.scala
---
@@ -63,11 +63,11 @@ class FeederActor extends Actor
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/11085#issuecomment-180637276
My local tests didn't include the Hive profiles, which is where the error
originates.
Retest this please
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/3#issuecomment-181612945
lgtm
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/11001#issuecomment-178239232
Looks good.
Related question @JoshRosen: what's the reason for overriding sbt's
default resolvers rather than appending to them?
GitHub user jodersky opened a pull request:
https://github.com/apache/spark/pull/11379
[SPARK-11011][SQL] Narrow type of UDT serialization
## What changes were proposed in this pull request?
Narrow down the parameter type of `UserDefinedType#serialize()`. Currently
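The narrowing presumably changes the signature along these lines (a simplified sketch of `UserDefinedType`, not the full class):
```scala
// Before: serialize accepted Any, forcing each UDT to cast its argument.
//   def serialize(obj: Any): Any
// After: the parameter is the user type itself, so the cast disappears.
abstract class UserDefinedType[UserType] extends Serializable {
  def sqlType: org.apache.spark.sql.types.DataType
  def serialize(obj: UserType): Any
  def deserialize(datum: Any): UserType
}
```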
Github user jodersky commented on a diff in the pull request:
https://github.com/apache/spark/pull/11379#discussion_r54185547
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/types/UserDefinedType.scala ---
@@ -50,11 +50,8 @@ abstract class UserDefinedType[UserType
Github user jodersky commented on a diff in the pull request:
https://github.com/apache/spark/pull/11379#discussion_r54185320
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/types/UserDefinedType.scala ---
@@ -50,11 +50,8 @@ abstract class UserDefinedType[UserType
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/10760#issuecomment-189043337
cc @JoshRosen
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/9925#issuecomment-189042474
Is this still under consideration? I have no problem closing the issue if
you think it's not a good enough solution.
GitHub user jodersky reopened a pull request:
https://github.com/apache/spark/pull/9925
[SPARK-7286] [SQL] Deprecate !== in favour of =!=
Fixes subtle issues related to operator precedence, as discussed in
SPARK-7286.
I'm not entirely sure this is the right thing to do
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/9925#issuecomment-172996578
Since the next version of Spark is probably going to be a major release,
would now not be a good time to consider renaming the operator so as to
achieve correct behavior
GitHub user jodersky opened a pull request:
https://github.com/apache/spark/pull/10903
Fix fatal warnings due to unnecessary @transient annotations
A recent merge reintroduced some unnecessary @transient annotations, thus
generating fatal warnings in the Scala 2.11 build