[
https://issues.apache.org/jira/browse/SPARK-7398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Iulian Dragos updated SPARK-7398:
---------------------------------
Description:
Spark Streaming has trouble dealing with situations where
batch processing time > batch interval
i.e. the input data arrives faster than Spark can remove it from the receiver's
queue.
If this throughput is sustained for long enough, the backlog keeps growing and
the situation becomes unstable: the memory of the Receiver's Executor eventually
overflows.
This issue aims at transmitting a back-pressure signal back to the data
ingestion side, so that the ingestion rate can be slowed to match the
processing rate, in a backwards-compatible way.
The original design doc can be found here:
https://docs.google.com/document/d/1ZhiP_yBHcbjifz8nJEyPJpHqxB1FT6s8-Zk7sAfayQw/edit?usp=sharing
The second design doc (which omits the background material and focuses on the
implementation) can be found here:
https://docs.google.com/document/d/1ls_g5fFmfbbSTIfQQpUxH56d0f3OksF567zwA00zK9E/edit?usp=sharing
was:
Spark Streaming has trouble dealing with situations where
batch processing time > batch interval
i.e. the input data arrives faster than Spark can remove it from the receiver's
queue.
If this throughput is sustained for long enough, the backlog keeps growing and
the situation becomes unstable: the memory of the Receiver's Executor eventually
overflows.
This issue aims at transmitting a back-pressure signal back to the data
ingestion side, so that the ingestion rate can be slowed to match the
processing rate, in a backwards-compatible way.
The design doc can be found here:
https://docs.google.com/document/d/1ZhiP_yBHcbjifz8nJEyPJpHqxB1FT6s8-Zk7sAfayQw/edit?usp=sharing
> Add back-pressure to Spark Streaming
> ------------------------------------
>
> Key: SPARK-7398
> URL: https://issues.apache.org/jira/browse/SPARK-7398
> Project: Spark
> Issue Type: Improvement
> Components: Streaming
> Affects Versions: 1.3.1
> Reporter: François Garillot
> Priority: Critical
> Labels: streams
>
> Spark Streaming has trouble dealing with situations where
> batch processing time > batch interval
> i.e. the input data arrives faster than Spark can remove it from the
> receiver's queue.
> If this throughput is sustained for long enough, the backlog keeps growing
> and the situation becomes unstable: the memory of the Receiver's Executor
> eventually overflows.
> This issue aims at transmitting a back-pressure signal back to the data
> ingestion side, so that the ingestion rate can be slowed to match the
> processing rate, in a backwards-compatible way.
> The original design doc can be found here:
> https://docs.google.com/document/d/1ZhiP_yBHcbjifz8nJEyPJpHqxB1FT6s8-Zk7sAfayQw/edit?usp=sharing
> The second design doc (which omits the background material and focuses on
> the implementation) can be found here:
> https://docs.google.com/document/d/1ls_g5fFmfbbSTIfQQpUxH56d0f3OksF567zwA00zK9E/edit?usp=sharing