Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/22282#discussion_r214073971
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaWriteTask.scala
---
@@ -131,9 +158,25 @@ private[kafka010] abstract
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/21955
Thanks for the follow-up.
lgtm
---
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/21488
I used the following command and the test passed:
mvn test -Phadoop-2.6 -Pyarn -Phive -Dtest=KafkaMicroBatchSourceSuite -rf
external/kafka-0-10-sql
Please take a look at the '
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/21488
@zsxwing
Is there anything I should do for this PR ?
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/21488
retest this please
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/21488
```
22:36:05.028 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0
in stage 16314.0 (TID 39181, localhost, executor driver):
java.io.FileNotFoundException: File
file:/home/jenkins
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/21488
Ryan:
Thanks for the close follow-up.
Once Kafka 2.0.0 is released, I will incorporate the above.
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/21488
@zsxwing
Is there anything that needs to be done from my side ?
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/21488
Test failure was in Hive test, not related to this PR.
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/21488
Thanks for the reminder, @ijuma
Updated pom.xml and title accordingly.
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/21488
Ryan:
Thanks for the reminder.
I have disabled that test.
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/21488
Pulled in your commits.
Will look at test failures.
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/21488
Not sure what to do with the following build error which is not caused by
the PR:
```
[ERROR]
/spark/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/21488#discussion_r203106522
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaOffsetReader.scala
---
@@ -115,7 +116,7 @@ private[kafka010] class
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/21488
@ijuma
Sorry for the late response. 9 days ago I was in China where access to
gmail is intermittent.
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/21488
w.r.t. stable Kafka release, it seems 2.0.0 RC2 would pass:
http://search-hadoop.com/m/Kafka/uyzND1ClBEezundG1?subj=Re+VOTE+2+0+0+RC2
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/21700
Please publish the above results to the thread where you requested review
from committers.
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/21700#discussion_r200248889
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/HDFSBackedStateStoreProvider.scala
---
@@ -240,7 +244,11 @@ private[state
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/21700#discussion_r200242174
--- Diff:
sql/core/src/main/java/org/apache/spark/sql/streaming/state/BoundedSortedMap.java
---
@@ -0,0 +1,79 @@
+/*
+ * Licensed to the Apache
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/21700#discussion_r200242876
--- Diff:
sql/core/src/main/java/org/apache/spark/sql/streaming/state/BoundedSortedMap.java
---
@@ -0,0 +1,79 @@
+/*
+ * Licensed to the Apache
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/21651
retest this please
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/21651
cc @tdas
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/21488
Located the test output:
```
-rw-r--r-- 1 hbase hadoop 35335485506 Jun 13 20:36 target/unit-tests.log
```
Still need to find the cause of the assertion failure.
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/21488
Made some progress in testing.
Now facing:
```
- assign from latest offsets (failOnDataLoss: true) *** FAILED ***
java.lang.IllegalArgumentException: requirement failed
at
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/21488
There is only
target/surefire-reports/TEST-org.apache.spark.sql.kafka010.KafkaMicroBatchV2SourceSuite.xml
under target/surefire-reports
That file doesn't contain test o
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/21488
I tried the following change but didn't seem to get more output from Kafka:
```
diff --git a/external/kafka-0-10-sql/src/test/resources/log4j.properties
b/external/kafka-0-10-sql/src
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/21488
Currently I am trying to get test suite pass first.
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/21488#discussion_r192601997
--- Diff:
external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaTestUtils.scala
---
@@ -96,10 +101,13 @@ class KafkaTestUtils
GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/21488
SPARK-18057 Update structured streaming kafka from 0.10.0.1 to 2.0.0
## What changes were proposed in this pull request?
This PR upgrades to the Kafka 2.0.0 release where KIP-266 is
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/20490#discussion_r184736836
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/WriteToDataSourceV2.scala
---
@@ -116,21 +118,44 @@ object
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/21124
+1
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/21109
retest this please
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/20767
Interesting.
https://commons.apache.org/proper/commons-pool/apidocs/org/apache/commons/pool2/impl/BaseGenericObjectPool.html#getBorrowedCount
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/20767
I did a quick search for 'apache commons pool metrics' which didn't turn up
directly related links.
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/20767
@tdas
Do you think a follow-on JIRA can be logged for adding metrics for the
cache operations ?
Thanks
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/20767#discussion_r174984237
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaDataConsumer.scala
---
@@ -467,44 +435,58 @@ private[kafka010] object
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/20767#discussion_r173636109
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaDataConsumer.scala
---
@@ -342,80 +415,103 @@ private[kafka010] object
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/20767#discussion_r173636002
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaDataConsumer.scala
---
@@ -342,80 +415,103 @@ private[kafka010] object
Github user tedyu closed the pull request at:
https://github.com/apache/spark/pull/14568
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/14568
I don't think so.
Using (id & 8589934591) would obtain the numbers 99 and 199 in my example.
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/14568
Can you elaborate ?
1st run: Id's 1 to 99 are generated.
2nd run: poll Id column and obtain 99. Specify 100 as offset for
monotonically_increasing_id(). Id's 100 to 199 are
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/14568
As Herman commented above, obtaining the lower 33 bits of the id column would
allow ids generated from two (or more) executions to form a contiguous range.
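For context, the bit layout that makes the lower-33-bits trick work can be sketched in plain Python, without Spark. This is only an illustration of monotonically_increasing_id()'s documented layout (partition index in the upper 31 bits, per-partition record number in the lower 33 bits); the helper names here are hypothetical, and 8589934591 is simply 2**33 - 1.

```python
# Illustration (not Spark code): monotonically_increasing_id() packs the
# partition index into the upper 31 bits and the within-partition row
# counter into the lower 33 bits of a 64-bit value.
LOWER_33_MASK = (1 << 33) - 1  # == 8589934591, the constant quoted above

def pack_id(partition_index: int, row_in_partition: int) -> int:
    """Hypothetical helper mirroring the documented bit layout."""
    return (partition_index << 33) | row_in_partition

def lower_bits(spark_id: int) -> int:
    """Recover the per-partition counter from a generated id."""
    return spark_id & LOWER_33_MASK

# Masking strips the partition component, so ids from different
# partitions (or runs) reduce to their within-partition positions:
assert lower_bits(pack_id(0, 99)) == 99
assert lower_bits(pack_id(2, 41)) == 41
```

Because the mask removes the partition offset, values recovered this way can be made contiguous across executions by adding an external offset, which is the behavior the comment above describes.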
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/14568
@hvanhovell
Let me know if there is more I should do for this enhancement.
Thanks
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/14568#discussion_r75151883
--- Diff: python/pyspark/sql/functions.py ---
@@ -426,6 +426,29 @@ def monotonically_increasing_id():
return Column(sc
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/14568
The addition of offset support allows users to concatenate rows from
different datasets.
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/14568
With:
spark.range(0, 9, 1, 3).select(monotonically_increasing_id()).show
I got:
```
+-+
|monotonically_increasing_id
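The archived output above is truncated, but the ids that spark.range(0, 9, 1, 3) would produce can be reconstructed in plain Python under the assumption of an even 3-way split of rows 0..8 (a sketch, not Spark itself):

```python
# Hypothetical reconstruction of monotonically_increasing_id() over
# spark.range(0, 9, 1, 3): 9 rows split evenly across 3 partitions,
# each id = (partition_index << 33) | row_position_within_partition.
def monotonic_ids(num_rows: int, num_partitions: int) -> list:
    rows_per_part = num_rows // num_partitions  # assumes an even split
    return [
        (part << 33) | row
        for part in range(num_partitions)
        for row in range(rows_per_part)
    ]

ids = monotonic_ids(9, 3)
# The first partition counts from 0; each later partition jumps by 2**33.
assert ids[:3] == [0, 1, 2]
assert ids[3:6] == [8589934592, 8589934593, 8589934594]
assert ids[6:] == [17179869184, 17179869185, 17179869186]
```

The 2**33 jump between partitions is exactly the gap that the offset discussion in this thread is about.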
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/14568
@hvanhovell
What do you think of the above reply ?
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/14568
@hvanhovell
As Martin said in JIRA:
* Add the index column to A' - this time starting at 200, as there are
already entries with id's from 0 to 199 (here, monotonicallyInreas
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/14568
@rxin
Can you take a look at the python API one more time ?
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/14568
```
/home/jenkins/workspace/SparkPullRequestBuilder/dev/mima: line 37: 40498
Aborted (core dumped) java -XX:MaxPermSize=1g -Xmx2g -cp
"$TOOLS_CLASSPATH:$OLD_DEPS_CLAS
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/14568
Jenkins, test this please.
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/14568#discussion_r74710318
--- Diff: python/pyspark/sql/functions.py ---
@@ -426,6 +426,29 @@ def monotonically_increasing_id():
return Column(sc
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/14568#discussion_r74710145
--- Diff: python/pyspark/sql/functions.py ---
@@ -426,6 +426,29 @@ def monotonically_increasing_id():
return Column(sc
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/14568
@hvanhovell @rxin :
Is there any other comment I should address ?
Thanks
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/14568#discussion_r74484517
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/MonotonicallyIncreasingID.scala
---
@@ -81,3 +93,12 @@ case class
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/14568#discussion_r74460557
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/MonotonicallyIncreasingID.scala
---
@@ -81,3 +93,12 @@ case class
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/14568
```
16/08/10 15:35:12 DEBUG HiveSessionState$$anon$1:
=== Result of Batch Resolution ===
!'DeserializeToObject
unresolveddeserializer(createexternalrow(getcolumnbyordinal(0, Lon
Github user tedyu commented on the issue:
https://github.com/apache/spark/pull/14568
```
[info] - monotonically_increasing_id_with_offset *** FAILED *** (14
milliseconds)
[info] org.apache.spark.sql.AnalysisException: Invalid number of
arguments for function
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/14568#discussion_r74138850
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/MonotonicallyIncreasingID.scala
---
@@ -40,13 +40,14 @@ import
GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/14568
SPARK-10868 monotonicallyIncreasingId() supports offset for indexing
## What changes were proposed in this pull request?
This PR adds offset to monotonicallyIncreasingId()
## How
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/13983#discussion_r69842071
--- Diff:
common/unsafe/src/test/java/org/apache/spark/unsafe/PlatformUtilSuite.java ---
@@ -58,4 +61,17 @@ public void overlappingCopyMemory
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/13829#discussion_r69213184
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/codegen/BufferHolderSuite.scala
---
@@ -0,0 +1,39 @@
+/*
+ * Licensed
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/13718#discussion_r67971887
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/FileStreamSource.scala
---
@@ -120,7 +120,13 @@ class FileStreamSource
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/13718#discussion_r67970641
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/FileStreamSource.scala
---
@@ -120,7 +120,13 @@ class FileStreamSource
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/13473#discussion_r65649617
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -403,6 +403,17 @@ private[spark] class BlockManager
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/13473#discussion_r65648116
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -403,6 +403,17 @@ private[spark] class BlockManager
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/13473#discussion_r65648040
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -403,6 +403,17 @@ private[spark] class BlockManager
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/13283#discussion_r65629823
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/rules.scala
---
@@ -38,6 +40,16 @@ private[sql] class ResolveDataSource
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/13283#discussion_r65622737
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/rules.scala
---
@@ -38,6 +40,16 @@ private[sql] class ResolveDataSource
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/13160#discussion_r64677095
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala
---
@@ -771,7 +777,11 @@ object SparkSession {
val sparkConf
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/13212#discussion_r64405314
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -774,13 +774,42 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10125#issuecomment-220772184
When would the addendum be checked in ?
For people using Java 7, it is inconvenient because they have to modify
Decimal.scala otherwise the compilation would
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/13057#issuecomment-220734121
@srowen :
Is this ready to go in ?
Thanks
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10125#issuecomment-220720146
See #13233
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10125#issuecomment-220716220
Looks like bigintval.longValue() should have been used.
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10125#issuecomment-22071
This seems to have broken build for Java 7:
```
sql/catalyst/src/main/scala/org/apache/spark/sql/types/Decimal.scala:137:
value longValueExact is not a member of
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/13057#issuecomment-220588258
@srowen
Gentle ping.
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/13057#issuecomment-220256566
@srowen
See if I have addressed all your comments.
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/13057#discussion_r63563275
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/CommandBuilderUtils.java ---
@@ -334,6 +334,18 @@ static void addPermGenSizeOpt(List cmd
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/13057#discussion_r63560321
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/CommandBuilderUtils.java ---
@@ -334,6 +334,18 @@ static void addPermGenSizeOpt(List cmd
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/13057#discussion_r63550916
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/CommandBuilderUtils.java ---
@@ -334,6 +334,18 @@ static void addPermGenSizeOpt(List cmd
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/12952#discussion_r63439615
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -163,15 +164,17 @@ object EliminateSerialization extends
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/13057#issuecomment-219532924
@srowen
Pardon for the ping.
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/13057#issuecomment-219308231
@srowen
Gentle ping.
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/13057#issuecomment-219050166
@srowen
I think I have addressed your comments.
Cheers
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/13057#discussion_r63156623
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/CommandBuilderUtils.java ---
@@ -334,6 +334,18 @@ static void addPermGenSizeOpt(List cmd
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/13057#issuecomment-218991503
@srowen
Mind taking another look ?
Thanks
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/13057#discussion_r63044248
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/CommandBuilderUtils.java ---
@@ -334,6 +334,18 @@ static void addPermGenSizeOpt(List cmd
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/13057#issuecomment-218781776
@srowen
Can you take another look ?
Thanks
GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/13057
YarnSparkHadoopUtil#getOutOfMemoryErrorArgument should respect
OnOutOfMemoryError parameter given by user
## What changes were proposed in this pull request?
As Nirav reported in this
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/12777#discussion_r62393964
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/orc/OrcFilters.scala ---
@@ -56,29 +55,35 @@ import org.apache.spark.sql.sources._
* known
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/10995#issuecomment-217514735
@zsxwing @JoshRosen @srowen
Mind taking another look ?
Thanks
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/12915#discussion_r62265808
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -330,19 +334,21 @@ class SessionCatalog
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/12830#discussion_r61987501
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala
---
@@ -635,6 +642,122 @@ class SparkSession private(
object SparkSession
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/12865#discussion_r61946906
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala ---
@@ -394,7 +394,7 @@ private[spark] class TaskSchedulerImpl
Github user tedyu closed the pull request at:
https://github.com/apache/spark/pull/12814
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/12830#discussion_r61821150
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala
---
@@ -635,6 +642,122 @@ class SparkSession private(
object SparkSession
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/12814#issuecomment-215974354
```
sbt.ForkMain$ForkError: java.lang.AssertionError:
expected:<0.9986422261219262> but was:<0.9986422261219272>
at org.junit.Assert.fail(As
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/12814#discussion_r61670775
--- Diff:
sql/catalyst/src/main/java/org/apache/spark/sql/catalyst/expressions/UnsafeArrayData.java
---
@@ -338,9 +338,10 @@ public UnsafeArrayData copy
GitHub user tedyu opened a pull request:
https://github.com/apache/spark/pull/12814
[SPARK-14850] Show limit for array size when array is too big
## What changes were proposed in this pull request?
This PR shows the size of array and the limit when array is too big