Github user mccheah commented on the issue:

    https://github.com/apache/spark/pull/22608
  
    It depends on how we're getting the Hadoop images. If we're building 
everything from scratch, we could run everything in one container - though 
running more than one process in a single container is uncommon; the usual 
convention is one responsibility / process per container. Containers with 
related responsibilities can, however, be grouped into a single pod, which is 
why we use three containers in one pod here.
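    
    To make the grouping concrete, here is a minimal sketch (not code from 
this PR) of a single pod carrying the KDC and the Hadoop daemons as three 
separate containers, written against the fabric8 kubernetes-client builders 
that the Spark-on-Kubernetes code already uses. The container names, images, 
and commands are hypothetical placeholders.

```scala
import io.fabric8.kubernetes.api.model.PodBuilder

// Sketch only: one pod, three single-process containers.
val kerberizedHadoopPod = new PodBuilder()
  .withNewMetadata()
    .withName("kerberized-hadoop")
  .endMetadata()
  .withNewSpec()
    // KDC process in its own container.
    .addNewContainer()
      .withName("kdc")
      .withImage("example/kerberos-kdc:latest")
    .endContainer()
    // HDFS NameNode in a second container.
    .addNewContainer()
      .withName("hdfs-namenode")
      .withImage("example/hadoop:latest")
      .withCommand("hdfs", "namenode")
    .endContainer()
    // HDFS DataNode in a third container.
    .addNewContainer()
      .withName("hdfs-datanode")
      .withImage("example/hadoop:latest")
      .withCommand("hdfs", "datanode")
    .endContainer()
  .endSpec()
  .build()
```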
    
    If we're pulling Hadoop images from elsewhere - though it sounds like that 
isn't generally done in the Apache ecosystem - then we'd need to build our own 
separate image for the KDC anyway.
    
    Multiple containers in the same pod share the pod's overall resource 
footprint and its limit boundaries.
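    
    On the resource point, each container in the pod can still declare its 
own requests and limits, and the pod is scheduled as one unit whose footprint 
is the sum of its containers. A hedged sketch of attaching limits to the KDC 
container from the example above (the memory figures are arbitrary):

```scala
import io.fabric8.kubernetes.api.model.{ContainerBuilder, Quantity}

// Hypothetical resource settings for the KDC container; the pod as a whole
// is scheduled against the sum of its containers' requests.
val kdcContainer = new ContainerBuilder()
  .withName("kdc")
  .withImage("example/kerberos-kdc:latest")
  .withNewResources()
    .addToRequests("memory", new Quantity("256Mi"))
    .addToLimits("memory", new Quantity("512Mi"))
  .endResources()
  .build()
```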

