This is an automated email from the ASF dual-hosted git repository.
tdas pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/branch-3.2 by this push:
new 0d60cb5 [SPARK-36132][SS][SQL] Support
tdas pushed a commit to branch master
The following commit(s) were added to refs/heads/master by this push:
new efcce23 [SPARK-36132][SS][SQL] Support initial
tdas pushed a commit to branch master
The following commit(s) were added to refs/heads/master by this push:
new dfd7b02 [SPARK-35800][SS] Improving GroupState
tdas pushed a commit to branch master
The following commit(s) were added to refs/heads/master by this push:
new e62d247 [SPARK-32585][SQL] Support scala
tdas pushed a commit to branch branch-2.4
The following commit(s) were added to refs/heads/branch-2.4 by this push:
new c82b6e4 [SPARK-32794][SS] Fixed rare
tdas pushed a commit to branch branch-3.0
The following commit(s) were added to refs/heads/branch-3.0 by this push:
new e632e7c [SPARK-32794][SS] Fixed rare
tdas pushed a commit to branch master
The following commit(s) were added to refs/heads/master by this push:
new e4237bb [SPARK-32794][SS] Fixed rare corner case
tdas pushed a commit to branch master
The following commit(s) were added to refs/heads/master by this push:
new cbb714f [SPARK-29438][SS] Use partition ID
tdas pushed a commit to branch master
The following commit(s) were added to refs/heads/master by this push:
new d2bca8f [SPARK-30609] Allow default merge command
tdas pushed a commit to branch branch-2.4
The following commit(s) were added to refs/heads/branch-2.4 by this push:
new df9a506 [SPARK-27453] Pass partitionBy
tdas pushed a commit to branch master
The following commit(s) were added to refs/heads/master by this push:
new 26ed65f [SPARK-27453] Pass partitionBy as options
tch
- Multiple watermark policy
- The semantics of what changes are allowed to a streaming query between restarts.
## How was this patch tested?
No tests
Closes #22627 from tdas/SPARK-25639.
Authored-by: Tathagata Das
Signed-off-by: Tathagata Das
(cherry picked from com
ple watermark policy
- The semantics of what changes are allowed to a streaming query between restarts.
## How was this patch tested?
No tests
Closes #22627 from tdas/SPARK-25639.
Authored-by: Tathagata Das
Signed-off-by: Tathagata Das
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Repository: spark
Updated Branches:
refs/heads/branch-2.4 3a6ef8b7e -> 0dbf1450f
[SPARK-25399][SS] Continuous processing state should not affect microbatch
execution jobs
## What changes were proposed in this pull request?
The leftover state from running a continuous processing streaming
refs/heads/master 97d4afaa1 -> 9f5c5b4cc
[SPARK-25399][SS] Continuous processing state should not affect microbatch
execution jobs
## What changes were proposed in this pull request?
The leftover state from running a continuous processing streaming job
refs/heads/master a9aacdf1c -> 8ed044928
[SPARK-25204][SS] Fix race in rate source test.
## What changes were proposed in this pull request?
Fix a race in the rate source tests. We need a better way of testing restart
behavior.
## How was this patch
n is also
avoided.
## How was this patch tested?
Ran locally many times.
Closes #22182 from tdas/SPARK-25184.
Authored-by: Tathagata Das
Signed-off-by: Tathagata Das
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/31063249
[error] one error found
[error] Compile failed at Aug 21, 2018 4:04:35 PM [12.827s]
```
## How was this patch tested?
It compiles!
Closes #22175 from tdas/fix-build.
Authored-by: Tathagata Das
Signed-off-by: Tathagata Das
refs/heads/master ac0174e55 -> 42035a4fe
[SPARK-24441][SS] Expose total estimated size of states in
HDFSBackedStateStoreProvider
## What changes were proposed in this pull request?
This patch exposes the estimation of size of cache (loadedMaps) in
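The metric described above is, conceptually, the summed estimated size of every state-version map the provider keeps in memory. A minimal Python sketch of that idea, assuming a hypothetical `loaded_maps` cache of version-to-map entries (Spark's real implementation is Scala and uses its own size estimator):

```python
import sys

def estimated_total_size(loaded_maps):
    # Rough stand-in for size estimation: sum the shallow sizes of every
    # key and value across all retained state-store versions.
    total = 0
    for version_map in loaded_maps.values():
        for key, value in version_map.items():
            total += sys.getsizeof(key) + sys.getsizeof(value)
    return total

# Two retained versions of a tiny state store.
cache = {1: {"a": b"x" * 10}, 2: {"a": b"x" * 10, "b": b"y" * 20}}
print(estimated_total_size(cache) > 0)  # True
```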
refs/heads/master 72ecfd095 -> 6c5cb8585
[SPARK-24763][SS] Remove redundant key data from value in streaming aggregation
## What changes were proposed in this pull request?
This patch proposes a new flag option for stateful aggregation: remove
redundant
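The flag described here avoids storing the grouping key twice: once as the state key and again inside the stored row. A toy Python sketch of the split-and-reassemble idea, with hypothetical helper names (the real change lives in Spark's Scala state-format code):

```python
def split_row(row, key_cols):
    # Keep the grouping columns only in the state key; drop them from the value.
    key = tuple(row[c] for c in key_cols)
    value = {c: v for c, v in row.items() if c not in key_cols}
    return key, value

def reassemble(key, value, key_cols):
    # Rebuild the full row by re-attaching key columns to the stored value.
    row = dict(zip(key_cols, key))
    row.update(value)
    return row

state = {}
row = {"user": "alice", "window": 5, "count": 3}
k, v = split_row(row, ("user", "window"))
state[k] = v
assert reassemble(k, state[k], ("user", "window")) == row
```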
red-by: Tathagata Das
Co-authored-by: c-horn
Author: Tathagata Das
Closes #21746 from tdas/SPARK-24699.
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/61f0ca4f
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/61f0ca4f
citly testing
obj-to-row conversion for both state formats.
Author: Tathagata Das
Closes #21739 from tdas/SPARK-22187-1.
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/b3d88ac0
refs/heads/master d05a926e7 -> 8b7d4f842
[SPARK-24717][SS] Split out max retain version of state for memory in
HDFSBackedStateStoreProvider
## What changes were proposed in this pull request?
This patch proposes breaking down configuration of retaining
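The configuration being split out bounds how many recent state versions stay cached in memory; older versions would be reloaded from checkpoint files instead. An illustrative Python sketch with a hypothetical `VersionedCache` (Spark's actual cache is the Scala `loadedMaps` in HDFSBackedStateStoreProvider):

```python
from collections import OrderedDict

class VersionedCache:
    """Keeps at most `max_versions` most recent state versions in memory;
    anything older would be re-read from checkpoint files (not shown)."""
    def __init__(self, max_versions):
        self.max_versions = max_versions
        self.maps = OrderedDict()

    def put(self, version, state_map):
        self.maps[version] = state_map
        while len(self.maps) > self.max_versions:
            self.maps.popitem(last=False)  # evict the oldest retained version

cache = VersionedCache(max_versions=2)
for v in range(5):
    cache.put(v, {"count": v})
print(sorted(cache.maps))  # [3, 4]
```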
ing the `committedOffsets`.
## How was this patch tested?
Added new tests
Author: Tathagata Das
Closes #21744 from tdas/SPARK-24697.
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/ff7f6ef7
Add a test for recovery from existing checkpoints.
## How was this patch tested?
New unit test
Author: Tathagata Das
Closes #21701 from tdas/SPARK-24730.
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/6078b891
refs/heads/master e0559f238 -> 32cb50835
[SPARK-24662][SQL][SS] Support limit in structured streaming
## What changes were proposed in this pull request?
Support the LIMIT operator in structured streaming.
For streams in append or complete output mode, a
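A global LIMIT over a stream has to remember how many rows it has already emitted across micro-batches. A minimal Python sketch of that stateful counter, with hypothetical names (not Spark's actual limit operator):

```python
class StreamingLimit:
    """Global limit across micro-batches: the state is the total row count
    emitted so far, carried from one batch to the next."""
    def __init__(self, limit):
        self.limit = limit
        self.seen = 0  # would live in a state store in a real engine

    def process_batch(self, rows):
        remaining = self.limit - self.seen
        out = rows[:max(remaining, 0)]
        self.seen += len(out)
        return out

limit = StreamingLimit(5)
print(limit.process_batch([1, 2, 3]))     # [1, 2, 3]
print(limit.process_batch([4, 5, 6, 7]))  # [4, 5]
print(limit.process_batch([8]))           # []
```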
refs/heads/master 2224861f2 -> f6e6899a8
[SPARK-24386][SS] coalesce(1) aggregates in continuous processing
## What changes were proposed in this pull request?
Provide a continuous processing implementation of coalesce(1), as well as
allowing aggregates on
Das
Closes #21477 from tdas/SPARK-24396.
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/b5ccf0d3
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/b5ccf0d3
lag appropriately.
## How was this patch tested?
new unit test
Author: Tathagata Das
Closes #21491 from tdas/SPARK-24453.
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/2c2a86b5
ext. It mirrors the Java TaskContext API of returning a string value if
the key exists, or None if the key does not exist.
## How was this patch tested?
New test added.
Author: Tathagata Das
Closes #21437 from tdas/SPARK-24397.
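The contract above (return the property's string value if the key exists, or None if it does not) maps naturally onto a dictionary lookup. A toy Python stand-in with hypothetical field names:

```python
class TaskContext:
    """Toy stand-in for the described contract: return the local property's
    string value when the key exists, or None when it does not."""
    def __init__(self, local_properties):
        self._local_properties = dict(local_properties)

    def getLocalProperty(self, key):
        return self._local_properties.get(key)  # None when the key is absent

ctx = TaskContext({"spark.job.description": "nightly ETL"})
print(ctx.getLocalProperty("spark.job.description"))  # nightly ETL
print(ctx.getLocalProperty("missing.key"))            # None
```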
refs/heads/master 53c06ddab -> 0fd68cb72
[SPARK-24234][SS] Support multiple row writers in continuous processing shuffle
reader.
## What changes were proposed in this pull request?
refs/heads/master b7a036b75 -> f45793329
[SPARK-23416][SS] Add a specific stop method for ContinuousExecution.
## What changes were proposed in this pull request?
Add a specific stop method for ContinuousExecution. The previous
StreamExecution.stop()
refs/heads/master 03e90f65b -> a33dcf4a0
[SPARK-24234][SS] Reader for continuous processing shuffle
## What changes were proposed in this pull request?
Read RDD for continuous processing shuffle, as well as the initial RPC-based
row receiver.
refs/heads/master 710e4e81a -> 434d74e33
[SPARK-23503][SS] Enforce sequencing of committed epochs for Continuous
Execution
## What changes were proposed in this pull request?
Made changes to EpochCoordinator so that it enforces a commit order. In case a
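Enforcing a commit order means an epoch that finishes early must wait until every earlier epoch has been committed. A small Python sketch of that buffering logic, with hypothetical names (the real EpochCoordinator is a Scala RPC endpoint):

```python
class EpochSequencer:
    """Applies epoch commits strictly in order, buffering any that arrive early."""
    def __init__(self):
        self.committed = []
        self._pending = set()
        self._next = 0

    def commit(self, epoch):
        self._pending.add(epoch)
        # Drain every epoch that has now become unblocked, in order.
        while self._next in self._pending:
            self._pending.remove(self._next)
            self.committed.append(self._next)
            self._next += 1

seq = EpochSequencer()
for epoch in (1, 0, 3, 2):  # commits arriving out of order
    seq.commit(epoch)
print(seq.committed)  # [0, 1, 2, 3]
```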
nup. This PR enables it for stream-stream joins.
## How was this patch tested?
- Updated join tests. Additionally, updated them to not use `CheckLastBatch`
anywhere to set good precedence for future.
Author: Tathagata Das <tathagata.das1...@gmail.com>
Closes #21253 from tdas/SPARK-24158.
ation
Author: Tathagata Das <tathagata.das1...@gmail.com>
Closes #21220 from tdas/SPARK-24157.
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/47b5b685
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/47b5b685
refs/heads/master d04806a23 -> af4dc5028
[SPARK-24039][SS] Do continuous processing writes with multiple compute() calls
## What changes were proposed in this pull request?
Do continuous processing writes with multiple compute() calls.
The current
ion is running. Great for
debugging production issues.
## How was this patch tested?
Not necessary.
Author: Tathagata Das <tathagata.das1...@gmail.com>
Closes #21160 from tdas/SPARK-24094.
ath where the
execution-metrics-to-source mapping is done directly. Otherwise we fall back to
existing mapping logic.
## How was this patch tested?
- New unit tests using V2 memory source
- Existing unit tests using V1 source
Author: Tathagata Das <tathagata.das1...@gmail.com>
Closes #21126
refs/heads/master 7b1e6523a -> d6c26d1c9
[SPARK-24038][SS] Refactor continuous writing to its own class
## What changes were proposed in this pull request?
Refactor continuous writing to its own class.
See WIP https://github.com/jose-torres/spark/pull/13
ing the KafkaOffsetReader.
## How was this patch tested?
Existing unit tests
Author: Tathagata Das <tathagata.das1...@gmail.com>
Closes #21134 from tdas/SPARK-24056.
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/7b1e6523
<tathagata.das1...@gmail.com>
Closes #21124 from tdas/SPARK-23004.
(cherry picked from commit 770add81c3474e754867d7105031a5eaf27159bd)
Signed-off-by: Tathagata Das <tathagata.das1...@gmail.com>
<tathagata.das1...@gmail.com>
Closes #21124 from tdas/SPARK-23004.
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/770add81
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/770add81
refs/heads/master 1cc66a072 -> 05ae74778
[SPARK-23747][STRUCTURED STREAMING] Add EpochCoordinator unit tests
## What changes were proposed in this pull request?
Unit tests for EpochCoordinator that test correct sequencing of committed
epochs. Several
refs/heads/master 14844a62c -> 1cc66a072
[SPARK-23687][SS] Add a memory source for continuous processing.
## What changes were proposed in this pull request?
Add a memory source for continuous processing.
Note that only one of the ContinuousSuite tests is
all `close()` to successfully write the
file, or `cancel()` in case of an error.
## How was this patch tested?
New tests in `CheckpointFileManagerSuite` and slightly modified existing tests.
Author: Tathagata Das <tathagata.das1...@gmail.com>
Closes #21048 from tdas/SPARK-23966.
refs/heads/branch-2.3 908c681c6 -> 2995b79d6
[SPARK-23748][SS] Fix SS continuous process doesn't support SubqueryAlias issue
## What changes were proposed in this pull request?
Currently, SS continuous processing doesn't support processing on a temp table or
refs/heads/master 682002b6d -> 14291b061
[SPARK-23748][SS] Fix SS continuous process doesn't support SubqueryAlias issue
## What changes were proposed in this pull request?
Currently, SS continuous processing doesn't support processing on a temp table or
`df.as("xxx")`,
refs/heads/master 7cf9fab33 -> 66a3a5a2d
[SPARK-23099][SS] Migrate foreach sink to DataSourceV2
## What changes were proposed in this pull request?
Migrate foreach sink to DataSourceV2.
Since the previous attempt at this PR #20552, we've changed and
dant.
Author: Tathagata Das <tathagata.das1...@gmail.com>
Closes #20941 from tdas/SPARK-23827.
(cherry picked from commit 15298b99ac8944e781328423289586176cf824d7)
Signed-off-by: Tathagata Das <tathagata.das1...@gmail.com>
Project: http://git-wip-us.apache.org/repos/asf/spark/re
refs/heads/master 35997b59f -> c68ec4e6a
[SPARK-23096][SS] Migrate rate source to V2
## What changes were proposed in this pull request?
This PR migrates the micro-batch rate source to the V2 API and rewrites the UTs
to suit V2 testing.
## How was this patch tested?
Relation [value#66]
+- Project [(value#12 % 5) AS key#9, value#12]
+- Project [value#66 AS value#12]// solution: project with aliases
+- LocalRelation [value#66]
```
## How was this patch tested?
New unit test
Author: Tathagata Das <tathagata.das1...@gmail.com>
Closes #207
refs/heads/master ba622f45c -> b0f422c38
[SPARK-23559][SS] Add epoch ID to DataWriterFactory.
## What changes were proposed in this pull request?
Add an epoch ID argument to DataWriterFactory for use in streaming. As a side
effect of passing in this
refs/heads/master 3a4d15e5d -> 707e6506d
[SPARK-23097][SQL][SS] Migrate text socket source to V2
## What changes were proposed in this pull request?
This PR moves structured streaming text socket source to V2.
Questions: do we need to remove old "socket"
refs/heads/master 185f5bc7d -> 7ec83658f
[SPARK-23491][SS] Remove explicit job cancellation from ContinuousExecution
reconfiguring
## What changes were proposed in this pull request?
Remove queryExecutionThread.interrupt() from ContinuousExecution. As
ing sure that the flaky tests are deterministic.
## How was this patch tested?
Modified test cases in `Streaming*JoinSuites` where there are consecutive
`AddData` actions.
Author: Tathagata Das <tathagata.das1...@gmail.com>
Closes #20650 from tdas/SPARK-23408.
rom multiple threads - the query thread at the time of reader factory
creation, and the epoch tracking thread at the time of `needsReconfiguration`.
## How was this patch tested?
Existing tests.
Author: Tathagata Das <tathagata.das1...@gmail.com>
Closes #20655 from tdas/SPARK-23484.
ove this)
Please review http://spark.apache.org/contributing.html before opening a pull
request.
Author: Tathagata Das <tathagata.das1...@gmail.com>
Closes #20631 from tdas/SPARK-23454.
ove this)
Please review http://spark.apache.org/contributing.html before opening a pull
request.
Author: Tathagata Das <tathagata.das1...@gmail.com>
Closes #20631 from tdas/SPARK-23454.
(cherry picked from commit 601d653bff9160db8477f86d961e609fc2190237)
Signed-off-by: Tathagata Das <ta
refs/heads/master c5857e496 -> 0a73aa31f
http://git-wip-us.apache.org/repos/asf/spark/blob/0a73aa31/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaSourceSuite.scala
throughput of V1
and V2 using 20M records in a single partition. They were comparable.
## How was this patch tested?
Existing tests, few modified to be better tests than the existing ones.
Author: Tathagata Das <tathagata.das1...@gmail.com>
Closes #20554 from tdas/SPARK-23362.
key#9, value#12]
+- Project [value#66 AS value#12]// solution: project with aliases
+- LocalRelation [value#66]
```
## How was this patch tested?
New unit test
Author: Tathagata Das <tathagata.das1...@gmail.com>
Closes #20598 from tdas/SPARK-23406.
sts.
Author: Tathagata Das <tathagata.das1...@gmail.com>
Author: Burak Yavuz <brk...@gmail.com>
Closes #20445 from tdas/SPARK-23092.
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/30295bf5
How was this patch tested?
N/A
Author: Tathagata Das <tathagata.das1...@gmail.com>
Closes #20494 from tdas/SPARK-23064-2.
(cherry picked from commit eaf35de2471fac4337dd2920026836d52b1ec847)
Signed-off-by: Tathagata Das <tathagata.das1...@gmail.com>
tch tested?
N/A
Author: Tathagata Das <tathagata.das1...@gmail.com>
Closes #20494 from tdas/SPARK-23064-2.
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/eaf35de2
tch tested?
Multiple rounds of tests.
Author: Tathagata Das <tathagata.das1...@gmail.com>
Closes #20371 from tdas/SPARK-23197.
(cherry picked from commit 15adcc8273e73352e5e1c3fc9915c0b004ec4836)
Signed-off-by: Tathagata Das <tathagata.das1...@gmail.com>
ted?
Multiple rounds of tests.
Author: Tathagata Das <tathagata.das1...@gmail.com>
Closes #20371 from tdas/SPARK-23197.
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/15adcc82
fde-75992cc517bd.png)
## How was this patch tested?
N/A
Author: Tathagata Das <tathagata.das1...@gmail.com>
Closes #20308 from tdas/SPARK-23142.
(cherry picked from commit 4cd2ecc0c7222fef1337e04f1948333296c3be86)
Signed-off-by: Tathagata Das <tathagata.das1...@gmail.com>
fde-75992cc517bd.png)
## How was this patch tested?
N/A
Author: Tathagata Das <tathagata.das1...@gmail.com>
Closes #20308 from tdas/SPARK-23142.
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/4cd2ecc0
How was this patch tested?
new unit test
Author: Tathagata Das <tathagata.das1...@gmail.com>
Closes #20311 from tdas/SPARK-23144.
(cherry picked from commit bf34d665b9c865e00fac7001500bf6d521c2dff9)
Signed-off-by: Tathagata Das <tathagata.das1...@gmail.com>
How was this patch tested?
new unit test
Author: Tathagata Das <tathagata.das1...@gmail.com>
Closes #20311 from tdas/SPARK-23144.
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/bf34d665
Das <tathagata.das1...@gmail.com>
Closes #20309 from tdas/SPARK-23143.
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/2d41f040
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/2d41f040
refs/heads/branch-2.3 3a80cc59b -> 2a87c3a77
[SPARK-23052][SS] Migrate ConsoleSink to data source V2 api.
## What changes were proposed in this pull request?
Migrate ConsoleSink to data source V2 api.
Note that this includes a missing piece in
refs/heads/master 39d244d92 -> 1c76a91e5
[SPARK-23052][SS] Migrate ConsoleSink to data source V2 api.
## What changes were proposed in this pull request?
Migrate ConsoleSink to data source V2 api.
Note that this includes a missing piece in
refs/heads/branch-2.3 1a6dfaf25 -> dbd2a5566
[SPARK-23033][SS] Don't use task level retry for continuous processing
## What changes were proposed in this pull request?
Continuous processing tasks will fail on any attempt number greater than 0.
refs/heads/master c132538a1 -> 86a845031
[SPARK-23033][SS] Don't use task level retry for continuous processing
## What changes were proposed in this pull request?
Continuous processing tasks will fail on any attempt number greater than 0.
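Because continuous-processing tasks fail on any attempt number greater than 0, the safe behavior is to reject retries up front rather than resume with stale offset-tracking state. An illustrative Python sketch (hypothetical names; Spark's actual check is in Scala and raises a dedicated exception):

```python
class ContinuousTaskRetryException(Exception):
    pass

def run_continuous_task(attempt_number, body):
    """Continuous tasks track progress via a long-lived epoch protocol, so a
    silently restarted task could corrupt it; fail fast on any retry."""
    if attempt_number > 0:
        raise ContinuousTaskRetryException(
            "Continuous execution does not support task retry "
            "(attempt %d)" % attempt_number)
    return body()

print(run_continuous_task(0, lambda: "ok"))  # ok
try:
    run_continuous_task(1, lambda: "ok")
except ContinuousTaskRetryException:
    print("retry rejected")
```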
[SPARK-22908][SS] Roll forward continuous processing Kafka support with fix to
continuous Kafka data reader
## What changes were proposed in this pull request?
The Kafka reader is now interruptible and can close itself.
## How was this patch tested?
I locally ran one of the
refs/heads/branch-2.3 08252bb38 -> 0a441d2ed
http://git-wip-us.apache.org/repos/asf/spark/blob/0a441d2e/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaSourceSuite.scala
[SPARK-22908][SS] Roll forward continuous processing Kafka support with fix to
continuous Kafka data reader
## What changes were proposed in this pull request?
The Kafka reader is now interruptible and can close itself.
## How was this patch tested?
I locally ran one of the
refs/heads/master a9b845ebb -> 166705785
http://git-wip-us.apache.org/repos/asf/spark/blob/16670578/external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaSourceSuite.scala
refs/heads/branch-2.3 b94debd2b -> f891ee324
http://git-wip-us.apache.org/repos/asf/spark/blob/f891ee32/sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala
[SPARK-22908] Add kafka source and sink for continuous processing.
## What changes were proposed in this pull request?
Add kafka source and sink for continuous processing. This involves two small
changes to the execution engine:
* Bring data reader close() into the normal data reader thread to
[SPARK-22908] Add kafka source and sink for continuous processing.
## What changes were proposed in this pull request?
Add kafka source and sink for continuous processing. This involves two small
changes to the execution engine:
* Bring data reader close() into the normal data reader thread to
refs/heads/master 0b2eefb67 -> 6f7aaed80
http://git-wip-us.apache.org/repos/asf/spark/blob/6f7aaed8/sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala
refs/heads/branch-2.3 be5991902 -> 44763d93c
[SPARK-22912] v2 data source support in MicroBatchExecution
## What changes were proposed in this pull request?
Support for v2 data sources in microbatch streaming.
## How was this patch tested?
A very basic
refs/heads/master eed82a0b2 -> 4f7e75883
[SPARK-22912] v2 data source support in MicroBatchExecution
## What changes were proposed in this pull request?
Support for v2 data sources in microbatch streaming.
## How was this patch tested?
A very basic new
Das <tathagata.das1...@gmail.com>
Closes #19495 from tdas/SPARK-22278.
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/f3137fee
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/f3137fee
refs/heads/master e1960c3d6 -> 75d666b95
[SPARK-22136][SS] Evaluate one-sided conditions early in stream-stream joins.
## What changes were proposed in this pull request?
Evaluate one-sided conditions early in stream-stream joins.
This is in addition to
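Evaluating one-sided conditions early means conjuncts that reference only one side's columns are applied as filters before rows enter the join state. A toy Python sketch of that predicate split, with hypothetical names and a made-up condition:

```python
def split_condition(preds, left_cols, right_cols):
    """Partition conjuncts into left-only, right-only, and joint predicates.
    A conjunct is represented as (columns_it_references, predicate_fn)."""
    left_only, right_only, joint = [], [], []
    for cols, fn in preds:
        if set(cols) <= set(left_cols):
            left_only.append(fn)    # can filter the left side pre-join
        elif set(cols) <= set(right_cols):
            right_only.append(fn)   # can filter the right side pre-join
        else:
            joint.append(fn)        # must be evaluated on joined rows
    return left_only, right_only, joint

# Hypothetical condition: l.a > 1 AND r.b < 9 AND l.a == r.b
preds = [(["l.a"], lambda row: row["l.a"] > 1),
         (["r.b"], lambda row: row["r.b"] < 9),
         (["l.a", "r.b"], lambda row: row["l.a"] == row["r.b"])]
left_only, right_only, joint = split_condition(preds, ["l.a"], ["r.b"])
print(len(left_only), len(right_only), len(joint))  # 1 1 1
```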
refs/heads/master 014dc8471 -> e8547ffb4
[SPARK-22238] Fix plan resolution bug caused by EnsureStatefulOpPartitioning
## What changes were proposed in this pull request?
In EnsureStatefulOpPartitioning, we check that the inputRDD to a SparkPlan has
the
ll, and avoid these
confusing corner cases.
## How was this patch tested?
Refactored tests.
Author: Tathagata Das <tathagata.das1...@gmail.com>
Closes #19416 from tdas/SPARK-22187.
refs/heads/master 5f6943345 -> 3099c574c
http://git-wip-us.apache.org/repos/asf/spark/blob/3099c574/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingJoinSuite.scala
[SPARK-22136][SS] Implement stream-stream outer joins.
## What changes were proposed in this pull request?
Allow one-sided outer joins between two streams when a watermark is defined.
## How was this patch tested?
new unit tests
Author: Jose Torres
Closes #19327 from
evented stream-stream join on an empty batch dataframe to be collapsed by
the optimizer
## How was this patch tested?
- New tests in StreamingJoinSuite
- Updated tests UnsupportedOperationSuite
Author: Tathagata Das <tathagata.das1...@gmail.com>
Closes #19271 from tdas/SPARK-22053.
Proje
refs/heads/master a8a5cd24e -> f32a84250
http://git-wip-us.apache.org/repos/asf/spark/blob/f32a8425/sql/core/src/test/scala/org/apache/spark/sql/streaming/StateStoreMetricsTest.scala
refs/heads/master c7307acda -> 0bad10d3e
[SPARK-22017] Take minimum of all watermark execs in StreamExecution.
## What changes were proposed in this pull request?
Take the minimum of all watermark exec nodes as the "real" watermark in
StreamExecution,
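Taking the minimum across all watermark exec nodes, and never letting the result move backwards, can be sketched in a few lines of Python (illustrative only; hypothetical names):

```python
def advance_watermark(current, per_operator_watermarks):
    """Query-wide watermark: the minimum across all watermark operators,
    clamped so it never regresses below the current value."""
    candidate = min(per_operator_watermarks)
    return max(current, candidate)

print(advance_watermark(0, [10_000, 7_500, 12_000]))  # 7500
print(advance_watermark(7_500, [9_000, 6_000]))       # 7500 (no regression)
```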
ing the metadata of top-level aliases.
## How was this patch tested?
New unit test
Author: Tathagata Das <tathagata.das1...@gmail.com>
Closes #19240 from tdas/SPARK-22018.
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/8866
refs/heads/master 36b48ee6e -> acdf45fb5
[SPARK-21765] Check that optimization doesn't affect isStreaming bit.
## What changes were proposed in this pull request?
Add an assert in logical plan optimization that the isStreaming bit stays the
same, and fix
refs/heads/branch-2.2 fb1b5f08a -> 1f7c4869b
[SPARK-21925] Update trigger interval documentation in docs with behavior
change in Spark 2.2
Forgot to update docs with behavior change.
Author: Burak Yavuz
Closes #19138 from
refs/heads/master 2974406d1 -> 8c954d2cd
[SPARK-21925] Update trigger interval documentation in docs with behavior
change in Spark 2.2
Forgot to update docs with behavior change.
Author: Burak Yavuz
Closes #19138 from
1 - 100 of 847 matches