Github user kwmonroe commented on a diff in the pull request:
https://github.com/apache/bigtop/pull/197#discussion_r112987956
--- Diff: bigtop-deploy/juju/hadoop-spark/bundle.yaml ---
@@ -29,21 +29,21 @@ services:
- "2"
- "3"
plugin:
- charm: "cs:xenial/hadoop-plugin-13"
+ charm: "cs:xenial/hadoop-plugin-14"
annotations:
gui-x: "1000"
gui-y: "400"
client:
charm: "cs:xenial/hadoop-client-3"
- constraints: "mem=3G"
+ constraints: "mem=7G root-disk=32G"
--- End diff --
@johnsca, this is correct - in this rev of the bundle, spark is colocated
with client. I did this because in yarn-client mode, spark doesn't need a
whole new unit by itself. The driver process can tax the unit a bit more, so I
bumped RAM to 7G. This also benefits users because they can run `spark-submit`
and `hadoop jar` on the same unit. Previously, they'd have to run spark jobs
on the spark unit and hadoop jobs on the client unit.
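
For context, here's roughly what that workflow looks like on the colocated
unit; the unit name and jar paths below are illustrative guesses, not pinned
by the bundle:

```bash
# ssh to the unit that now hosts both the hadoop client and spark
juju ssh client/0

# submit a spark job in yarn-client mode; the driver runs on this unit,
# which is why the RAM bump matters
spark-submit --master yarn --deploy-mode client \
    --class org.apache.spark.examples.SparkPi \
    /usr/lib/spark/examples/jars/spark-examples.jar 10

# run a mapreduce job from the very same unit
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 2 10
```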
Yes, app-level constraints are required in addition to machine constraints.
Without them, the initial deployment would use the machine constraints, but a
subsequent `add-unit` would fall back to the provider default. Ideally, we
could remove the machine constraints and rely solely on app-level constraints;
we'll do that once https://bugs.launchpad.net/juju/+bug/1676986 is fixed.
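
To make the two constraint scopes concrete, here's a sketch of the relevant
bundle pieces (the values shown match the diff above, but the machine
numbering and unit count are illustrative):

```yaml
services:
  client:
    charm: "cs:xenial/hadoop-client-3"
    num_units: 1
    # app-level constraints: a later `juju add-unit client` honors these
    constraints: "mem=7G root-disk=32G"
    to:
      - "0"
machines:
  "0":
    # machine constraints: only the initial deployment uses these; without
    # the app-level constraints above, add-unit falls back to the provider
    # default
    constraints: "mem=7G root-disk=32G"
```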