Github user srowen commented on the issue:

    https://github.com/apache/spark/pull/19183
  
    I thought this check also existed in the non-streaming code; the theory was 
that if you set a fixed number of executors but also enable dynamic allocation, 
that's probably a configuration error. But given that many people run on 
clusters where dynamic allocation defaults to 'on' globally, the check could be 
confusing or a little inconvenient to work around.
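    As an illustration (a sketch using the standard Spark configuration keys; 
the exact check in the code may differ), the combination the check was meant to 
flag looks like:

    ```properties
    # A fixed executor count requested explicitly by the application...
    spark.executor.instances         10
    # ...while dynamic allocation is also enabled, e.g. by a cluster-wide default
    spark.dynamicAllocation.enabled  true
    ```

    With dynamic allocation enabled, the executor count is meant to be governed 
by `spark.dynamicAllocation.minExecutors`/`maxExecutors`, so an explicitly 
fixed `spark.executor.instances` can conflict with it.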
    
    I don't think that check exists in the non-streaming code anymore, though, 
and I see a test to that effect as well. Therefore I think this change is 
reasonable for consistency.
    
    CC @tdas

