Github user mateiz commented on a diff in the pull request:

    https://github.com/apache/spark/pull/2964#discussion_r19707122
  
    --- Diff: docs/streaming-programming-guide.md ---
    @@ -586,11 +588,13 @@ Every input DStream (except file stream) is associated with a single [Receiver](
     
     A receiver is run within a Spark worker/executor as a long-running task, so it occupies one of the cores allocated to the Spark Streaming application. Hence, it is important to remember that a Spark Streaming application needs to be allocated enough cores to process the received data as well as to run the receiver(s). Therefore, a few important points to remember are:
     
    -##### Points to remember:
    +##### Points to remember
     {:.no_toc}
     - If the number of cores allocated to the application is less than or equal to the number of input DStreams / receivers, then the system will receive data but will not be able to process it.
    -- When running locally, if you master URL is set to "local", then there is only one core to run tasks.  That is insufficient for programs with even one input DStream (file streams are okay) as the receiver will occupy that core and there will be no core left to process the data.
    -
    +- When running locally, if your master URL is set to "local", then there is only one core to run tasks. That is insufficient for programs with a receiver-based input DStream (file streams are okay). So, a "local" master URL in a streaming app will generally starve the processing of the received data.
    --- End diff --
    
    Probably want to replace "core" -> "thread" here too, since technically you can run local[2] even on a 1-core machine.
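    
    For reference, a minimal local sketch along the lines of the guide's NetworkWordCount example (the host, port, and batch interval below are just placeholders): with the master set to "local[2]" one thread is free to run the receiver and another to process the data, whereas plain "local" leaves no thread for processing.
    
        import org.apache.spark.SparkConf
        import org.apache.spark.streaming.{Seconds, StreamingContext}
        import org.apache.spark.streaming.StreamingContext._  // pair-DStream implicits (not needed on newer Spark)
        
        // "local[2]": one thread runs the socket receiver, the other processes the batches.
        // With "local" the single thread is taken by the receiver and no data gets processed.
        val conf = new SparkConf().setMaster("local[2]").setAppName("NetworkWordCount")
        val ssc = new StreamingContext(conf, Seconds(1))
        
        val lines = ssc.socketTextStream("localhost", 9999)  // placeholder host/port
        val counts = lines.flatMap(_.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
        counts.print()
        
        ssc.start()
        ssc.awaitTermination()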

