[ https://issues.apache.org/jira/browse/STORM-898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15062844#comment-15062844 ]
ASF GitHub Bot commented on STORM-898:
--------------------------------------
Github user d2r commented on a diff in the pull request:
https://github.com/apache/storm/pull/921#discussion_r47962606
--- Diff: storm-core/src/jvm/backtype/storm/scheduler/resource/ResourceUtils.java ---
@@ -130,4 +137,57 @@ private static void debugMessage(String memoryType, String Com, Map topologyConf
                 Com,
                 topologyConf.get(Config.TOPOLOGY_COMPONENT_CPU_PCORE_PERCENT));
         }
     }
+
+    /**
+     * Print the current scheduling for debug purposes.
+     * @param cluster cluster state, used to look up assignments and supervisors
+     * @param topologies the topologies to report on
+     */
+    public static String printScheduling(Cluster cluster, Topologies topologies) {
+        StringBuilder str = new StringBuilder();
+        // nodeId -> topologyId -> slot -> executors assigned to that slot
+        Map<String, Map<String, Map<WorkerSlot, Collection<ExecutorDetails>>>> schedulingMap =
+                new HashMap<String, Map<String, Map<WorkerSlot, Collection<ExecutorDetails>>>>();
+        for (TopologyDetails topo : topologies.getTopologies()) {
+            if (cluster.getAssignmentById(topo.getId()) != null) {
+                for (Map.Entry<ExecutorDetails, WorkerSlot> entry : cluster.getAssignmentById(topo.getId()).getExecutorToSlot().entrySet()) {
+                    WorkerSlot slot = entry.getValue();
+                    String nodeId = slot.getNodeId();
+                    ExecutorDetails exec = entry.getKey();
+                    if (!schedulingMap.containsKey(nodeId)) {
+                        schedulingMap.put(nodeId, new HashMap<String, Map<WorkerSlot, Collection<ExecutorDetails>>>());
+                    }
+                    if (!schedulingMap.get(nodeId).containsKey(topo.getId())) {
+                        schedulingMap.get(nodeId).put(topo.getId(), new HashMap<WorkerSlot, Collection<ExecutorDetails>>());
+                    }
+                    if (!schedulingMap.get(nodeId).get(topo.getId()).containsKey(slot)) {
+                        schedulingMap.get(nodeId).get(topo.getId()).put(slot, new LinkedList<ExecutorDetails>());
+                    }
+                    schedulingMap.get(nodeId).get(topo.getId()).get(slot).add(exec);
+                }
+            }
+        }
+
+        for (Map.Entry<String, Map<String, Map<WorkerSlot, Collection<ExecutorDetails>>>> entry : schedulingMap.entrySet()) {
+            if (cluster.getSupervisorById(entry.getKey()) != null) {
+                str.append("/** Node: " + cluster.getSupervisorById(entry.getKey()).getHost() + "-" + entry.getKey() + " **/\n");
+            } else {
+                str.append("/** Node: Unknown (may be dead) - " + entry.getKey() + " **/\n");
+            }
+            for (Map.Entry<String, Map<WorkerSlot, Collection<ExecutorDetails>>> topo_sched : schedulingMap.get(entry.getKey()).entrySet()) {
+                str.append("\t-->Topology: " + topo_sched.getKey() + "\n");
+                for (Map.Entry<WorkerSlot, Collection<ExecutorDetails>> ws : topo_sched.getValue().entrySet()) {
+                    str.append("\t\t->Slot [" + ws.getKey().getPort() + "] -> " + ws.getValue() + "\n");
+                }
+            }
+        }
+        return str.toString();
+    }
+
+    /**
+     * Print the slots in use on each node for debug purposes.
+     */
+    public static String printScheduling(RAS_Nodes nodes) {
+        StringBuilder ret = new StringBuilder();
+        for (RAS_Node node : nodes.getNodes()) {
+            ret.append("Node: " + node.getHostname() + "\n");
+            ret.append("-> " + node.getTopoIdTousedSlots() + "\n");
+        }
+        return ret.toString();
+    }
--- End diff ---
This is not used. Should we keep it?
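For context, a minimal sketch of how the second overload could be wired into scheduler debug logging if we keep it. The call site below is hypothetical (nothing in this PR invokes it, which is the point of the question above); only ResourceUtils.printScheduling itself comes from the diff, and the package for RAS_Nodes is assumed from the patch:

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    import backtype.storm.scheduler.Cluster;
    import backtype.storm.scheduler.Topologies;
    import backtype.storm.scheduler.resource.RAS_Nodes;     // package assumed
    import backtype.storm.scheduler.resource.ResourceUtils;

    /** Hypothetical helper, not part of the patch. */
    public class SchedulingDebug {
        private static final Logger LOG = LoggerFactory.getLogger(SchedulingDebug.class);

        /** Log both debug views after an assignment pass. */
        public static void logAssignments(Cluster cluster, Topologies topologies, RAS_Nodes nodes) {
            if (LOG.isDebugEnabled()) {
                LOG.debug("Cluster scheduling:\n{}", ResourceUtils.printScheduling(cluster, topologies));
                LOG.debug("Per-node used slots:\n{}", ResourceUtils.printScheduling(nodes));
            }
        }
    }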
> Add priorities and per user resource guarantees to Resource Aware Scheduler
> ---------------------------------------------------------------------------
>
> Key: STORM-898
> URL: https://issues.apache.org/jira/browse/STORM-898
> Project: Apache Storm
> Issue Type: New Feature
> Components: storm-core
> Reporter: Robert Joseph Evans
> Assignee: Boyang Jerry Peng
> Attachments: Resource Aware Scheduler for Storm.pdf
>
>
> In a multi-tenant environment we would like to be able to give individual
> users a guarantee of how much CPU/Memory/Network they will be able to use in
> a cluster. We would also like to know which topologies a user feels are the
> most important to keep running if there are not enough resources to run all
> of their topologies.
> Each user should be able to specify whether their topology is production,
> staging, or development. Within each of those categories a user should be
> able to give a topology a priority, 0 to 10 with 10 being the highest
> priority (or something like this).
> If there are not enough resources on a cluster to run a topology, assume this
> topology is running and using resources, and find the user that is most over
> their guaranteed resources. Shoot the lowest-priority topology for that user,
> and repeat until this topology is able to run or this topology would be the
> one shot. Ideally we don't actually shoot anything until we know that we
> would have made enough room.
> If the cluster is over-subscribed, everyone is under their guarantee, and
> this topology would not put the user over their guarantee, shoot the
> lowest-priority topology in this worker's resource pool until there is enough
> room to run the topology or this topology is the one that would be shot. We
> might also want to think about what to do if we are going to shoot a
> production topology in an oversubscribed case; perhaps we can shoot a
> non-production topology instead even if the other user is not over their
> guarantee.
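
To make the knobs in the description concrete, here is one plausible shape for them. Every config key, value, and map layout below is invented for illustration; the JIRA does not define config names:

    import java.util.HashMap;
    import java.util.Map;

    import backtype.storm.Config;

    // Topology-side: a category plus a 0-10 priority within the category.
    // Both key names are hypothetical.
    Config conf = new Config();
    conf.put("topology.scheduling.category", "production"); // production | staging | development
    conf.put("topology.scheduling.priority", 10);           // 0-10, 10 = highest

    // Cluster-side: per-user guarantees for CPU/memory/network, shown as a
    // plain map the scheduler could load from its configuration.
    Map<String, Map<String, Double>> guarantees = new HashMap<String, Map<String, Double>>();
    Map<String, Double> alice = new HashMap<String, Double>();
    alice.put("cpu.pcore.percent", 800.0);   // 8 full cores
    alice.put("memory.mb", 16384.0);
    alice.put("network.mbps", 1000.0);
    guarantees.put("alice", alice);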
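
And a minimal sketch of the first eviction scenario the description walks through: assume the incoming topology is running, evict from the most-over-guarantee user, lowest priority first, and never actually shoot anything until the plan is known to free enough room. All types and methods here (User, resourcesOverGuarantee, lowestPriorityTopology, wouldFit) are hypothetical, mirroring the prose rather than any code in the patch:

    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.Collections;
    import java.util.List;

    /** Hypothetical eviction planner; returns the topologies to shoot, or an empty list. */
    List<TopologyDetails> planEvictions(TopologyDetails incoming, Collection<User> users) {
        List<TopologyDetails> toEvict = new ArrayList<TopologyDetails>();
        // Pretend 'incoming' is already running and consuming resources.
        while (!wouldFit(incoming, toEvict)) {
            // Find the user farthest over their guaranteed CPU/memory/network.
            User mostOver = null;
            for (User u : users) {
                if (mostOver == null || u.resourcesOverGuarantee() > mostOver.resourcesOverGuarantee()) {
                    mostOver = u;
                }
            }
            // Lowest priority = development before staging before production,
            // then the 0-10 priority within the category.
            TopologyDetails victim = (mostOver == null) ? null : mostOver.lowestPriorityTopology();
            if (victim == null || victim.equals(incoming)) {
                // 'incoming' itself would be the one shot: evict nothing.
                return Collections.emptyList();
            }
            toEvict.add(victim);
        }
        // Only now, knowing enough room exists, would anything actually be shot.
        return toEvict;
    }

The over-subscribed, everyone-under-guarantee case in the description would follow the same loop but select victims from the relevant resource pool instead of from the most-over-guarantee user.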
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)