Repository: spark
Updated Branches:
refs/heads/master e537b33c6 - c6889d2cb
[HOTFIX][Streaming] Handle port collisions in flume polling test
This is failing my tests in #1777. @tdas
Author: Andrew Or andrewo...@gmail.com
Closes #1803 from andrewor14/fix-flaky-streaming-test and squashes
Repository: spark
Updated Branches:
refs/heads/master bfa09b01d - 99243288b
[SPARK-1981] updated streaming-kinesis.md
fixed markup, separated out sections more clearly, added more thorough explanations
Author: Chris Fregly ch...@fregly.com
Closes #1757 from cfregly/master and squashes the
Repository: spark
Updated Branches:
refs/heads/branch-1.1 4f776dfab - 826356725
[SPARK-1981] updated streaming-kinesis.md
fixed markup, separated out sections more clearly, added more thorough explanations
Author: Chris Fregly ch...@fregly.com
Closes #1757 from cfregly/master and squashes the
Repository: spark
Updated Branches:
refs/heads/master 99243288b - 95470a03a
[HOTFIX][STREAMING] Allow the JVM/Netty to decide which port to bind to in
Flume Polling Tests.
Author: Hari Shreedharan harishreedha...@gmail.com
Closes #1820 from harishreedharan/use-free-ports and squashes the
``streaming``
- Rewrote accuracy and convergence tests to use ``setupStreams`` and
``runStreams``
- Added new test for the accuracy of predictions generated by ``predictOnValue``
These tests should run faster, be easier to extend/maintain, and provide a
reference for new tests.
mengxr tdas
Author
Repository: spark
Updated Branches:
refs/heads/branch-1.1 5b22ebf68 - 9b2909955
[SPARK-3054][STREAMING] Add unit tests for Spark Sink.
This patch adds unit tests for Spark Sink.
It also removes the private[flume] for Spark Sink,
since the sink is instantiated from Flume configuration (looks
Repository: spark
Updated Branches:
refs/heads/branch-1.1 eba399b3c - 44856654c
[HOTFIX][Streaming] Handle port collisions in flume polling test
This is failing my tests in #1777. @tdas
Author: Andrew Or andrewo...@gmail.com
Closes #1803 from andrewor14/fix-flaky-streaming-test and squashes
Repository: spark
Updated Branches:
refs/heads/master 220f41368 - cd30db566
SPARK-2798 [BUILD] Correct several small errors in Flume module pom.xml files
(EDIT) Since the scalatest issue was resolved, this is now about a few
small problems in the Flume Sink `pom.xml`
- `scalatest` is
Repository: spark
Updated Branches:
refs/heads/master aa7de128c - e9bb12bea
[SPARK-1981][Streaming][Hotfix] Fixed docs related to kinesis
- Include kinesis in the unidocs
- Hide non-public classes from docs
Author: Tathagata Das tathagata.das1...@gmail.com
Closes #2239 from tdas/kinesis-doc
Repository: spark
Updated Branches:
refs/heads/branch-1.1 9b0cff2d4 - 0c8183cb3
[SPARK-1981][Streaming][Hotfix] Fixed docs related to kinesis
- Include kinesis in the unidocs
- Hide non-public classes from docs
Author: Tathagata Das tathagata.das1...@gmail.com
Closes #2239 from tdas/kinesis
tathagata.das1...@gmail.com
Author: Jacek Laskowski ja...@japila.pl
Closes #2254 from tdas/streaming-doc-fix and squashes the following commits:
e45c6d7 [Jacek Laskowski] More fixes from an old PR
5125316 [Tathagata Das] Fixed links
dc02f26 [Tathagata Das] Refactored streaming kinesis guide and made
: Tathagata Das tathagata.das1...@gmail.com
Author: Chris Fregly ch...@fregly.com
Closes #2307 from tdas/streaming-doc-fix1 and squashes the following commits:
ec40b5d [Tathagata Das] Updated figure with kinesis
fdb9c5e [Tathagata Das] Fixed style issues with kinesis guide
036d219 [Chris Fregly] updated
Repository: spark
Updated Branches:
refs/heads/master b8487713d - c3f2a8588
SPARK-2932 [STREAMING] Move MasterFailureTest out of main source directory
(HT @vanzin) Whatever the reason was for having this test class in `main`, if
there is one, it appears to be moot. This may have been a result of
double check
58591d2 [Ken Takagiwa] reduceByKey is working
0df7111 [Ken Takagiwa] delete old file
f485b1d [Ken Takagiwa] fixed input of socketTextDStream
dd6de81 [Ken Takagiwa] initial commit for socketTextStream
247fd74 [Ken Takagiwa] modified the code base on comment in
https://github.com/tdas
batches.
Author: Tathagata Das tathagata.das1...@gmail.com
Closes #2773 from tdas/flume-test-fix and squashes the following commits:
93cd7f6 [Tathagata Das] Reimplemented FlumeStreamSuite to be more robust.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip
HDFS mini cluster.
Author: Hari Shreedharan hshreedha...@apache.org
Author: Tathagata Das tathagata.das1...@gmail.com
Closes #2882 from tdas/driver-ha-wal and squashes the following commits:
e4bee20 [Tathagata Das] Removed synchronized, Path.getFileSystem is threadsafe
55514e2 [Tathagata Das
as a write ahead log.
Author: Tathagata Das tathagata.das1...@gmail.com
Closes #2940 from tdas/driver-ha-rbh and squashes the following commits:
78a4aaa [Tathagata Das] Fixed bug causing test failures.
f192f47 [Tathagata Das] Fixed import order.
df5f320 [Tathagata Das] Updated code to use
hshreedha...@apache.org
Closes #2931 from tdas/driver-ha-rdd and squashes the following commits:
209e49c [Tathagata Das] Better fix to style issue.
4a5866f [Tathagata Das] Addressed one more comment.
ed5fbf0 [Tathagata Das] Minor updates.
b0a18b1 [Tathagata Das] Fixed import order.
20aa7c6 [Tathagata
that tests the driver recovery, by
killing and restarting the streaming context, and verifying all the input data
gets processed. This has been implemented but not included in this PR yet. A
sneak peek of that DriverFailureSuite can be found in this PR (on my personal
repo): https://github.com/tdas
Repository: spark
Updated Branches:
refs/heads/master d6e555244 - 7c9ec529a
Update JavaCustomReceiver.java
Array index out of bounds
Author: xiao321 1042460...@qq.com
Closes #3153 from xiao321/patch-1 and squashes the following commits:
0ed17b5 [xiao321] Update JavaCustomReceiver.java
Project:
Repository: spark
Updated Branches:
refs/heads/branch-1.2 47bd8f302 - 8cefb63c1
Update JavaCustomReceiver.java
Array index out of bounds
Author: xiao321 1042460...@qq.com
Closes #3153 from xiao321/patch-1 and squashes the following commits:
0ed17b5 [xiao321] Update JavaCustomReceiver.java
Repository: spark
Updated Branches:
refs/heads/branch-1.1 0a40eac25 - 4fb26df87
Update JavaCustomReceiver.java
Array index out of bounds
Author: xiao321 1042460...@qq.com
Closes #3153 from xiao321/patch-1 and squashes the following commits:
0ed17b5 [xiao321] Update JavaCustomReceiver.java
Repository: spark
Updated Branches:
refs/heads/branch-1.0 76c20cac9 - 18c8c3833
Update JavaCustomReceiver.java
Array index out of bounds
Author: xiao321 1042460...@qq.com
Closes #3153 from xiao321/patch-1 and squashes the following commits:
0ed17b5 [xiao321] Update JavaCustomReceiver.java
Repository: spark
Updated Branches:
refs/heads/master 4af5c7e24 - 7b41b17f3
[SPARK-4301] StreamingContext should not allow start() to be called after
calling stop()
In Spark 1.0.0+, calling `stop()` on a StreamingContext that has not been
started is a no-op which has no side-effects. This
Repository: spark
Updated Branches:
refs/heads/branch-1.2 05bffcc02 - 21b9ac062
[SPARK-4301] StreamingContext should not allow start() to be called after
calling stop()
In Spark 1.0.0+, calling `stop()` on a StreamingContext that has not been
started is a no-op which has no side-effects.
Repository: spark
Updated Branches:
refs/heads/branch-1.1 4895f6544 - 78cd3ab88
[SPARK-4301] StreamingContext should not allow start() to be called after
calling stop()
In Spark 1.0.0+, calling `stop()` on a StreamingContext that has not been
started is a no-op which has no side-effects.
Repository: spark
Updated Branches:
refs/heads/branch-1.0 d4aed266d - 395656c8e
[SPARK-4301] StreamingContext should not allow start() to be called after
calling stop()
In Spark 1.0.0+, calling `stop()` on a StreamingContext that has not been
started is a no-op which has no side-effects.
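The guarded lifecycle described in SPARK-4301 can be sketched as follows; the class, state names, and messages here are illustrative assumptions, not the actual StreamingContext internals:

```scala
// Illustrative sketch, not the real StreamingContext: start() after stop()
// fails fast instead of silently restarting the context.
class GuardedContext {
  private var state: String = "initialized"

  def start(): Unit = state match {
    case "initialized" => state = "started"
    case "started"     => throw new IllegalStateException("context already started")
    case "stopped"     => throw new IllegalStateException("cannot start a stopped context")
  }

  // stop() on a context that was never started stays a harmless no-op,
  // but it still forbids any later start().
  def stop(): Unit = state = "stopped"
}
```

The design choice is to make the illegal transition loud: a context that has been stopped (even if never started) rejects `start()` with an exception rather than entering an undefined state.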
Repository: spark
Updated Branches:
refs/heads/master ed8bf1eac - 3a02d416c
SPARK-2548 [STREAMING] JavaRecoverableWordCount is missing
Here's my attempt to re-port `RecoverableNetworkWordCount` to Java, following
the example of its Scala and Java siblings. I fixed a few minor doc/formatting
Repository: spark
Updated Branches:
refs/heads/branch-1.1 dc38defd2 - cdcf5467a
SPARK-2548 [STREAMING] JavaRecoverableWordCount is missing
Here's my attempt to re-port `RecoverableNetworkWordCount` to Java, following
the example of its Scala and Java siblings. I fixed a few minor
Repository: spark
Updated Branches:
refs/heads/master 3a02d416c - 0340c56a9
Update RecoverableNetworkWordCount.scala
Trying this example, I missed the moment when the checkpoint was initiated
Author: comcmipi pito...@fns.uniba.sk
Closes #2735 from comcmipi/patch-1 and squashes the following
Repository: spark
Updated Branches:
refs/heads/branch-1.1 cdcf5467a - 254b13570
Update RecoverableNetworkWordCount.scala
Trying this example, I missed the moment when the checkpoint was initiated
Author: comcmipi pito...@fns.uniba.sk
Closes #2735 from comcmipi/patch-1 and squashes the
Repository: spark
Updated Branches:
refs/heads/branch-1.2 f0eb0a79c - 07ba50f7e
[SPARK-3954][Streaming] Optimization to FileInputDStream
When converting files to RDDs, there are 3 loops over the files sequence in the Spark
source.
Loops over the files sequence:
1. files.map(...)
2. files.zip(fileRDDs)
Repository: spark
Updated Branches:
refs/heads/master a1fc059b6 - ce6ed2abd
[SPARK-3954][Streaming] Optimization to FileInputDStream
When converting files to RDDs, there are 3 loops over the files sequence in the Spark
source.
Loops over the files sequence:
1. files.map(...)
2. files.zip(fileRDDs)
Repository: spark
Updated Branches:
refs/heads/branch-1.1 64945f868 - 3d889dfc1
[SPARK-3954][Streaming] Optimization to FileInputDStream
When converting files to RDDs, there are 3 loops over the files sequence in the Spark
source.
Loops over the files sequence:
1. files.map(...)
2. files.zip(fileRDDs)
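The optimization amounts to fusing repeated traversals of the files sequence into a single pass; a minimal sketch, with `loadAsRdd` as a hypothetical stand-in for the real per-file RDD creation (only two of the three loops are shown):

```scala
// Hypothetical stand-in for building an RDD from one file path.
def loadAsRdd(path: String): Seq[Int] = Seq(path.length)

// Separate passes: one map to build the RDDs, then a zip back with the files.
def multiPass(files: Seq[String]): Seq[(String, Seq[Int])] = {
  val fileRDDs = files.map(loadAsRdd)
  files.zip(fileRDDs)
}

// A single pass produces the same (file, RDD) pairs.
def singlePass(files: Seq[String]): Seq[(String, Seq[Int])] =
  files.map(f => (f, loadAsRdd(f)))
```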
of NioBlockTransferService, which required slight modification to unit tests.
Other than that the code is exactly same as in the original PR. Please refer to
discussion in the original PR if you have any thoughts.
Author: Tathagata Das tathagata.das1...@gmail.com
Closes #3191 from tdas/replication-fix-branch
Repository: spark
Updated Branches:
refs/heads/master ef29a9a9a - f8811a569
[SPARK-4295][External]Fix exception in SparkSinkSuite
Handle exception in SparkSinkSuite, please refer to [SPARK-4295]
Author: maji2014 ma...@asiainfo.com
Closes #3177 from maji2014/spark-4295 and squashes the
will introduce an issue, as mentioned in
[SPARK-2383](https://issues.apache.org/jira/browse/SPARK-2383).
So here we change it to offer the user an API to explicitly reset the offset before
creating the Kafka stream, while in the meantime keeping the same behavior as Kafka 0.8
for the parameter `auto.offset.reset`.
@tdas
Repository: spark
Updated Branches:
refs/heads/master c8850a3d6 - 6e03de304
[Streaming][Minor]Replace some 'if-else' in Clock
Replace some 'if-else' statements with math.min and math.max in Clock.scala
Author: huangzhaowei carlmartin...@gmail.com
Closes #3088 from SaintBacchus/StreamingClock
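As a rough illustration of this kind of change (function and parameter names here are hypothetical, not the actual Clock.scala code):

```scala
// Before: a branch to avoid returning a negative wait duration.
def waitTimeBranching(targetTime: Long, currentTime: Long): Long =
  if (targetTime - currentTime < 0) 0L else targetTime - currentTime

// After: the same result expressed with math.max, in the PR's style.
def waitTimeMinMax(targetTime: Long, currentTime: Long): Long =
  math.max(targetTime - currentTime, 0L)
```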
Repository: spark
Updated Branches:
refs/heads/master 4b736dbab - 36ddeb7bf
[SPARK-3660][STREAMING] Initial RDD for updateStateByKey transformation
SPARK-3660 : Initial RDD for updateStateByKey transformation
I have added a sample StatefulNetworkWordCountWithInitial inspired by
5461f1c [Saisai Shao] Merge pull request #8 from tdas/kafka-refactor3
eae4ad6 [Tathagata Das] Refactored KafkaStreamSuiteBased to eliminate
KafkaTestUtils and made Java more robust.
fab14c7 [Tathagata Das] minor update.
149948b [Tathagata Das] Fixed mistake
14630aa [Tathagata Das] Minor updates.
d9a452c
should be enabled only if
the WAL is enabled in the Spark configuration.
Author: Tathagata Das tathagata.das1...@gmail.com
Closes #3358 from tdas/SPARK-4482 and squashes the following commits:
b740136 [Tathagata Das] Fixed bug in ReceivedBlockTracker
Project: http://git-wip-us.apache.org/repos
, as the WAL should be enabled only if
the WAL is enabled in the Spark configuration.
Author: Tathagata Das tathagata.das1...@gmail.com
Closes #3358 from tdas/SPARK-4482 and squashes the following commits:
b740136 [Tathagata Das] Fixed bug in ReceivedBlockTracker
(cherry picked from commit
Repository: spark
Updated Branches:
refs/heads/master 22fc4e751 - 3bf7ceebb
[SPARK-4481][Streaming][Doc] Fix the wrong description of updateFunc
Removed "If `this` function returns None, then the corresponding state key-value
pair will be eliminated." from the description of `updateFunc:
Repository: spark
Updated Branches:
refs/heads/master 1c938413b - 9b7bbcef8
[DOC][PySpark][Streaming] Fix docstring for sphinx
This commit should be merged for 1.2 release.
cc tdas
Author: Ken Takagiwa ugw.gi.wo...@gmail.com
Closes #3311 from giwa/patch-3 and squashes the following commits
Repository: spark
Updated Branches:
refs/heads/branch-1.2 8ecabf4b7 - c4abb2eb4
[DOC][PySpark][Streaming] Fix docstring for sphinx
This commit should be merged for 1.2 release.
cc tdas
Author: Ken Takagiwa ugw.gi.wo...@gmail.com
Closes #3311 from giwa/patch-3 and squashes the following
Repository: spark
Updated Branches:
refs/heads/branch-1.2 c4abb2eb4 - a250ca369
[SPARK-4294][Streaming] UnionDStream stream should express the requirements in
the same way as TransformedDStream
In class TransformedDStream:
```scala
require(parents.length > 0, "List of DStreams to transform is
Repository: spark
Updated Branches:
refs/heads/master 73c8ea84a - c3002c4a6
[SPARK-4294][Streaming] UnionDStream stream should express the requirements in
the same way as TransformedDStream
In class TransformedDStream:
```scala
require(parents.length > 0, "List of DStreams to transform is
Repository: spark
Updated Branches:
refs/heads/branch-1.2 e958132a8 - b676d9ad3
[SPARK-4481][Streaming][Doc] Fix the wrong description of updateFunc (backport
for branch-1.2)
backport for branch-1.2 as per #3356
Author: zsxwing zsxw...@gmail.com
Closes #3376 from
Repository: spark
Updated Branches:
refs/heads/filestream-fix1 [deleted] 6b8d85b2b
integration tests,
so I cannot say for sure whether this has indeed solved the issue. You could do
a first pass on this in the meantime.
Author: Tathagata Das tathagata.das1...@gmail.com
Closes #3419 from tdas/filestream-fix2 and squashes the following commits:
c19dd8a [Tathagata Das] Addressed PR
Repository: spark
Updated Branches:
refs/heads/master f515f9432 - a51118a34
[SPARK-4535][Streaming] Fix the error in comments
change `NetworkInputDStream` to `ReceiverInputDStream`
change `ReceiverInputTracker` to `ReceiverTracker`
Author: q00251598 qiyad...@huawei.com
Closes #3400 from
Repository: spark
Updated Branches:
refs/heads/branch-1.2 d117f8fa4 - 42b9d0d31
[SPARK-4535][Streaming] Fix the error in comments
change `NetworkInputDStream` to `ReceiverInputDStream`
change `ReceiverInputTracker` to `ReceiverTracker`
Author: q00251598 qiyad...@huawei.com
Closes #3400 from
...@gmail.com
Closes #3455 from tdas/streaming-callsite-fix and squashes the following
commits:
69fc26f [Tathagata Das] Set correct call site for streaming jobs so that it is
displayed correctly on the Spark UI
(cherry picked from commit 69cd53eae205eb10d52eaf38466db58a23b6ae81)
Signed-off
for
checkpoints
- Makes the default configuration object used by saveAsNewAPIHadoopFiles be Spark's
Hadoop configuration
Author: Tathagata Das tathagata.das1...@gmail.com
Closes #3457 from tdas/savefiles-fix and squashes the following commits:
bb4729a [Tathagata Das] Same treatment
Repository: spark
Updated Branches:
refs/heads/branch-1.2 a9944c809 - a2c01ae5e
[HOTFIX] Fixing broken build due to missing imports.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/a2c01ae5
Tree:
Repository: spark
Updated Branches:
refs/heads/branch-1.1 7aa592c74 - 1a7f4144e
[HOTFIX] Fixing broken build due to missing imports.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/1a7f4144
Tree:
Repository: spark
Updated Branches:
refs/heads/branch-1.2 6a46cc3c8 - 01adf45a9
[SPARK-4802] [streaming] Remove receiverInfo once receiver is de-registered
Once the streaming receiver is de-registered at the executor, the
`ReceiverTrackerActor` needs to
remove the corresponding receiverInfo
Repository: spark
Updated Branches:
refs/heads/master 96281cd0c - 10d69e9cb
[SPARK-4802] [streaming] Remove receiverInfo once receiver is de-registered
Once the streaming receiver is de-registered at the executor, the
`ReceiverTrackerActor` needs to
remove the corresponding receiverInfo from
Repository: spark
Updated Branches:
refs/heads/branch-1.1 3bce43f67 - b1de461a7
[SPARK-4802] [streaming] Remove receiverInfo once receiver is de-registered
Once the streaming receiver is de-registered at the executor, the
`ReceiverTrackerActor` needs to
remove the corresponding receiverInfo
is needless and will
hurt the throughput of the streaming application.
Hi tdas, as discussed about this issue, I fixed it with this implementation. I'm not
sure if this is the way you want it; would you mind taking a look? Thanks a
lot.
Author: jerryshao saisai.s...@intel.com
Closes #3534 from jerryshao
Repository: spark
Updated Branches:
refs/heads/master 29fabb1b5 - b4d0db80a
[SPARK-4873][Streaming] Use `Future.zip` instead of `Future.flatMap`(for-loop)
in WriteAheadLogBasedBlockHandler
Use `Future.zip` instead of `Future.flatMap`(for-loop). `zip` implies these two
Futures will run
Repository: spark
Updated Branches:
refs/heads/branch-1.2 1a4e2ba73 - 17d6f547b
[SPARK-4873][Streaming] Use `Future.zip` instead of `Future.flatMap`(for-loop)
in WriteAheadLogBasedBlockHandler
Use `Future.zip` instead of `Future.flatMap`(for-loop). `zip` implies these two
Futures will run
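The distinction can be sketched with plain Scala futures; the values and sleep durations are illustrative only:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// With flatMap via a for-comprehension, the second future is not created
// until the first completes, so the two sleeps run back to back.
def sequentialPair(): (Int, Int) = {
  val r = for {
    a <- Future { Thread.sleep(50); 1 }
    b <- Future { Thread.sleep(50); 2 } // created only after `a` finishes
  } yield (a, b)
  Await.result(r, 2.seconds)
}

// With zip, both futures are created up front and run concurrently.
def concurrentPair(): (Int, Int) = {
  val f1 = Future { Thread.sleep(50); 1 }
  val f2 = Future { Thread.sleep(50); 2 }
  Await.result(f1.zip(f2), 2.seconds)
}
```

Both return the same pair; the difference is purely in scheduling, which is why `zip` fits two independent writes better than a for-comprehension.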
Repository: spark
Updated Branches:
refs/heads/master ac8278593 - f205fe477
[SPARK-4537][Streaming] Expand StreamingSource to add more metrics
Add `processingDelay`, `schedulingDelay` and `totalDelay` for the last
completed batch. Add `lastReceivedBatchRecords` and
Repository: spark
Updated Branches:
refs/heads/branch-1.2 475ab6ec7 - acf5c6328
[SPARK-4537][Streaming] Expand StreamingSource to add more metrics
Add `processingDelay`, `schedulingDelay` and `totalDelay` for the last
completed batch. Add `lastReceivedBatchRecords` and
Repository: spark
Updated Branches:
refs/heads/branch-1.1 dd0287cca - d21347dbb
[SPARK-4537][Streaming] Expand StreamingSource to add more metrics
Add `processingDelay`, `schedulingDelay` and `totalDelay` for the last
completed batch. Add `lastReceivedBatchRecords` and
Repository: spark
Updated Branches:
refs/heads/branch-1.2 7a245412f - edc96d81d
[SPARK-4813][Streaming] Fix the issue that ContextWaiter didn't handle
'spurious wakeup'
Used `Condition` to rewrite `ContextWaiter` because it provides a convenient
API `awaitNanos` for timeout.
Author:
Repository: spark
Updated Branches:
refs/heads/master 0f31992c6 - 6a8978294
[SPARK-4813][Streaming] Fix the issue that ContextWaiter didn't handle
'spurious wakeup'
Used `Condition` to rewrite `ContextWaiter` because it provides a convenient
API `awaitNanos` for timeout.
Author: zsxwing
Repository: spark
Updated Branches:
refs/heads/branch-1.0 78157d494 - f47e162b9
[SPARK-4813][Streaming] Fix the issue that ContextWaiter didn't handle
'spurious wakeup'
Used `Condition` to rewrite `ContextWaiter` because it provides a convenient
API `awaitNanos` for timeout.
Author:
Repository: spark
Updated Branches:
refs/heads/branch-1.1 d6b8d2c03 - eac740e9a
[SPARK-4813][Streaming] Fix the issue that ContextWaiter didn't handle
'spurious wakeup'
Used `Condition` to rewrite `ContextWaiter` because it provides a convenient
API `awaitNanos` for timeout.
Author:
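The `awaitNanos` pattern can be sketched as below; `SimpleWaiter` and its fields are assumptions for illustration, not the actual ContextWaiter code. The key point is that `awaitNanos` is called in a loop that re-checks the predicate, so a spurious wakeup cannot cause an early return:

```scala
import java.util.concurrent.locks.ReentrantLock

class SimpleWaiter {
  private val lock = new ReentrantLock()
  private val condition = lock.newCondition()
  private var stopped = false

  def notifyStop(): Unit = {
    lock.lock()
    try { stopped = true; condition.signalAll() } finally { lock.unlock() }
  }

  /** Returns true if stopped, false if the timeout elapsed first. */
  def waitForStop(timeoutMs: Long): Boolean = {
    lock.lock()
    try {
      var remaining = timeoutMs * 1000000L // nanos
      // A spurious wakeup falls back into the loop and re-checks the
      // predicate; awaitNanos returns the remaining time budget.
      while (!stopped && remaining > 0) {
        remaining = condition.awaitNanos(remaining)
      }
      stopped
    } finally { lock.unlock() }
  }
}
```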
Repository: spark
Updated Branches:
refs/heads/master c88a3d7fc - 3610d3c61
[SPARK-4790][STREAMING] Fix ReceivedBlockTrackerSuite waits for old files to get
deleted before continuing.
Since the deletes are happening asynchronously, the getFileStatus call might
throw an exception in
Repository: spark
Updated Branches:
refs/heads/branch-1.2 076de46f2 - bd70ff99e
[SPARK-4790][STREAMING] Fix ReceivedBlockTrackerSuite waits for old files to get
deleted before continuing.
Since the deletes are happening asynchronously, the getFileStatus call might
throw an exception
Repository: spark
Updated Branches:
refs/heads/master 3610d3c61 - fdc2aa491
[SPARK-5028][Streaming]Add total received and processed records metrics to
Streaming UI
This is a follow-up work of
[SPARK-4537](https://issues.apache.org/jira/browse/SPARK-4537). Adding total
received records and
Repository: spark
Updated Branches:
refs/heads/branch-1.2 bd70ff99e - 14dbd8312
[SPARK-5028][Streaming]Add total received and processed records metrics to
Streaming UI
This is a follow-up work of
[SPARK-4537](https://issues.apache.org/jira/browse/SPARK-4537). Adding total
received records
Repository: spark
Updated Branches:
refs/heads/master c4f0b4f33 - fe6efacc0
[SPARK-5035] [Streaming] ReceiverMessage trait should extend Serializable
Spark Streaming's ReceiverMessage trait should extend Serializable in order to
fix a subtle bug that only occurs when running on a real
Repository: spark
Updated Branches:
refs/heads/branch-1.2 14dbd8312 - 434ea009c
[SPARK-5035] [Streaming] ReceiverMessage trait should extend Serializable
Spark Streaming's ReceiverMessage trait should extend Serializable in order to
fix a subtle bug that only occurs when running on a real
Repository: spark
Updated Branches:
refs/heads/branch-1.1 1034707c7 - 61eb9be4b
[SPARK-5035] [Streaming] ReceiverMessage trait should extend Serializable
Spark Streaming's ReceiverMessage trait should extend Serializable in order to
fix a subtle bug that only occurs when running on a real
Repository: spark
Updated Branches:
refs/heads/branch-1.0 64cd91dca - 5cf94775e
[SPARK-5035] [Streaming] ReceiverMessage trait should extend Serializable
Spark Streaming's ReceiverMessage trait should extend Serializable in order to
fix a subtle bug that only occurs when running on a real
Repository: spark
Updated Branches:
refs/heads/master fe6efacc0 - 4bb12488d
SPARK-2757 [BUILD] [STREAMING] Add Mima test for Spark Sink after 1.1.0 is
released
Re-enable MiMa for Streaming Flume Sink module, now that 1.1.0 is released, per
the JIRA TO-DO. That's pretty much all there is to
in `DStream.print`.
Author: Yadong Qi qiyadong2...@gmail.com
Author: q00251598 qiyad...@huawei.com
Author: Tathagata Das tathagata.das1...@gmail.com
Author: wangfei wangf...@huawei.com
Closes #3865 from tdas/print-num and squashes the following commits:
cd34e9e [Tathagata Das] Fix bug
7c09f16
Repository: spark
Updated Branches:
refs/heads/master bd88b7185 - cdccc263b
Fixed typos in streaming-kafka-integration.md
Changed projrect to project :)
Author: Akhil Das ak...@darktech.ca
Closes #3876 from akhld/patch-1 and squashes the following commits:
e0cf9ef [Akhil Das] Fixed typos
Repository: spark
Updated Branches:
refs/heads/branch-1.2 da9a4b932 - 33f0b14ba
Fixed typos in streaming-kafka-integration.md
Changed projrect to project :)
Author: Akhil Das ak...@darktech.ca
Closes #3876 from akhld/patch-1 and squashes the following commits:
e0cf9ef [Akhil Das] Fixed
PySpark Streaming
- new unit tests in Scala and Python
This required adding an optional Hadoop configuration param to `fileStream` and
`FileInputStream`, but was otherwise straightforward.
tdas davies
Author: freeman the.freeman@gmail.com
Closes #3803 from freeman-lab/streaming-binary-records
Repository: spark
Updated Branches:
refs/heads/master 6aed719e5 - 4cf4cba08
[SPARK-5379][Streaming] Add awaitTerminationOrTimeout
Added `awaitTerminationOrTimeout`, which reports how the wait ended:
* `true` if it's stopped.
* `false` if the waiting time elapsed before returning from the
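The described semantics resemble a latch with a timed await; a minimal analogue (the class and method names here are hypothetical, not the Spark API):

```scala
import java.util.concurrent.{CountDownLatch, TimeUnit}

// Analogue of the new API: await returns true if termination was
// signalled, false if the timeout elapsed first.
class Terminable {
  private val terminated = new CountDownLatch(1)

  def terminate(): Unit = terminated.countDown()

  def awaitTerminationOrTimeout(timeoutMs: Long): Boolean =
    terminated.await(timeoutMs, TimeUnit.MILLISECONDS)
}
```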
Repository: spark
Updated Branches:
refs/heads/master 4cf4cba08 - a74cbbf12
[Minor] Fix incorrect warning log
The warning log looks incorrect. Just fix it.
Author: Liang-Chi Hsieh vii...@gmail.com
Closes #4360 from viirya/fixing_typo and squashes the following commits:
48fbe4f [Liang-Chi
Repository: spark
Updated Branches:
refs/heads/branch-1.3 4d3dbfda3 - 316a4bb54
[Minor] Fix incorrect warning log
The warning log looks incorrect. Just fix it.
Author: Liang-Chi Hsieh vii...@gmail.com
Closes #4360 from viirya/fixing_typo and squashes the following commits:
48fbe4f
koeninger] [SPARK-4964] fix serialization issues for checkpointing
1d50749 [cody koeninger] [SPARK-4964] code cleanup per tdas
8bfd6c0 [cody koeninger] [SPARK-4964] configure rate limiting via
spark.streaming.receiver.maxRate
e09045b [cody koeninger] [SPARK-4964] add foreachPartitionWithIndex
Repository: spark
Updated Branches:
refs/heads/master b0c002195 - f0500f9fa
[SPARK-4707][STREAMING] Reliable Kafka Receiver can lose data if the block
generator fails to store data.
The Reliable Kafka Receiver commits offsets only when events are actually
stored, which ensures that
Repository: spark
Updated Branches:
refs/heads/branch-1.3 a119cae48 - 14c9f32d8
[SPARK-4707][STREAMING] Reliable Kafka Receiver can lose data if the block
generator fails to store data.
The Reliable Kafka Receiver commits offsets only when events are actually
stored, which ensures
Repository: spark
Updated Branches:
refs/heads/master 681f9df47 - 1e8b5394b
[STREAMING] SPARK-4986 Wait for receivers to deregister and receiver job to
terminate
A slow receiver might not have enough time to shutdown cleanly even when
graceful shutdown is used. This PR extends graceful
Repository: spark
Updated Branches:
refs/heads/branch-1.3 d644bd96a - 092d4ba57
[STREAMING] SPARK-4986 Wait for receivers to deregister and receiver job to
terminate
A slow receiver might not have enough time to shutdown cleanly even when
graceful shutdown is used. This PR extends graceful
Repository: spark
Updated Branches:
refs/heads/branch-1.2 36c299430 - 62c758753
[STREAMING] SPARK-4986 Wait for receivers to deregister and receiver job to
terminate
A slow receiver might not have enough time to shutdown cleanly even when
graceful shutdown is used. This PR extends graceful
33730d1 [Davies Liu] Merge branch 'master' of github.com:apache/spark into kafka
adeeb38 [Davies Liu] Merge pull request #3 from tdas/kafka-python-api
aea8953 [Tathagata Das] Kafka-assembly for Python API
eea16a7 [Davies Liu] refactor
f6ce899 [Davies Liu] add example and fix bugs
98c8d17 [Davies Liu] fix