Phil Henshaw wrote:
> I'm trying to compare the use of centrally managed solutions and user
> negotiated solutions in this fairly simple problem, to develop a way of
> discussing the more complicated situations where efficient and fair
> central resource management is not possible. For lots of things, central
> control is going to work well and be naturally more efficient.

The "user negotiated solutions" reduce to a question of shared values. When shared values can be identified among N people, then conceptually we can replace those N people with a single person who plans a larger array of work, and again it becomes a question of scheduling, load balancing, and optimization. Normally, though, people know what they want and are competing to get as much of it as possible.
When there are no shared values, then all that can be done is to dynamically divide the resource into virtual resources, using the strengths of the resource to make up for its weaknesses. It's a design question, whose solution may be central or distributed in nature but still algorithmic. Virtualization can prevent hogging, although each user's share shrinks as more and more users draw upon the resource. In contrast, a political solution requires trust (or at least policing). Without trust, instabilities arise when people pretend to have reached consensus in order to get preferred access, and then defect on one another when it actually comes time to use the thing.

> Without a central operator different users connected to a bus would need
> some way of telling what the load on it in the near future would be, in
> order to be ready to use it when it wasn't busy.

Sure, it's called a scheduler. For schedulers like those in the kernel of an average Windows/Mac OS X/Linux system, one resource is made to look like many, and during the period that a user sees their resource there will tend to be minimal contention, although waste may still occur due to mismatches in latency between the different components in the system (as would also occur in the non-virtualized case). There are some costs to time slicing, in particular that CPU caches have to be invalidated on context switches, but the idea is to run tasks long enough to amortize those costs.

For large-scale compute environments, this is taken further with a high-level scheduler that operates on a time frame of days to months. Jobs run through such a system get the full machine for a long period of time, without any friction. The policies for such a system are to some extent a subject for negotiation. In academic environments, it can be a matter of peer review and politics, i.e. people write grants to get access to the queues.
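The time-slicing trade-off can be sketched as a toy round-robin scheduler. Everything here is illustrative (task names, work units, a fixed per-switch cost), not any real kernel's policy; it just shows one resource being made to look like many, with a larger quantum amortizing the switch cost:

```python
# Minimal round-robin time-slicing sketch: one "CPU" is virtualized
# into several by giving each task a fixed quantum in turn.
# All numbers are illustrative assumptions.
from collections import deque

def round_robin(tasks, quantum, switch_cost):
    """Run (name, work) tasks on one simulated CPU.

    Returns total elapsed time and each task's completion time.
    Time spent on context switches is pure overhead; a larger
    quantum amortizes it over more useful work, at the cost of
    coarser sharing.
    """
    queue = deque(tasks)            # runnable tasks, FIFO
    elapsed = 0
    completions = []
    while queue:
        name, work = queue.popleft()
        elapsed += switch_cost      # stand-in for cache invalidation etc.
        ran = min(quantum, work)
        elapsed += ran              # useful work done this slice
        if work - ran > 0:
            queue.append((name, work - ran))      # not finished: requeue
        else:
            completions.append((name, elapsed))   # record completion time
    return elapsed, completions

total, completions = round_robin([("a", 5), ("b", 3)], quantum=2, switch_cost=1)
# 8 units of useful work plus 5 context switches: total == 13
```

With quantum=8 each task runs to completion in a single slice, so the same workload finishes in 10 units: identical work, fewer switches.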
In commercial environments, scheduling can be market driven, or run at a fixed price per CPU-hour.

> Would there be any way for users to sense that, other than to sense an
> increase or decrease in electrical load on the bus somehow? In open
> systems the usual way to tell if something else is using a common
> resource is finding disturbances around it, and signs of depletion in
> what's available for yourself. Bees might skip flowers that have been
> recently visited, for example.

That's more in the philosophy of Ethernet or software transactional memory: wait for a conflict to occur and then retry.

Marcus

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
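The conflict-then-retry philosophy Marcus mentions can be sketched as a toy optimistic-concurrency loop. This is a sketch under assumed names (a hypothetical `VersionedCell`), not a real STM: each thread reads a snapshot, computes, and commits only if nobody else committed in between; on conflict it simply redoes the work.

```python
# Optimistic "wait for a conflict, then retry" sketch, in the spirit of
# Ethernet collisions and software transactional memory. Illustrative
# toy, not a real STM implementation.
import threading

class VersionedCell:
    """A value guarded by a version counter. Readers take a snapshot;
    a commit succeeds only if the version is unchanged since the read."""
    def __init__(self, value):
        self._value = value
        self._version = 0
        self._lock = threading.Lock()   # guards only the brief commit

    def read(self):
        with self._lock:
            return self._value, self._version

    def try_commit(self, new_value, seen_version):
        with self._lock:
            if self._version != seen_version:
                return False            # conflict: someone committed first
            self._value = new_value
            self._version += 1
            return True

def update(cell, fn):
    """Retry loop: no up-front reservation or load forecast; just detect
    the conflict after the fact and redo the computation."""
    while True:
        value, version = cell.read()
        if cell.try_commit(fn(value), version):
            return

cell = VersionedCell(0)
threads = [threading.Thread(target=update, args=(cell, lambda v: v + 1))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# all 8 increments land despite conflicts: cell.read() == (8, 8)
```

Note the contrast with the scheduler approach: nobody senses future load on the shared resource; contention is discovered only when a commit fails, and the cost is the occasional recomputation rather than coordination up front.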
