Mark S. Miller wrote:
> However, many clients will engage in honest GC to keep their
> requirements on service memory low. Many services will not need to
> cut such clients off because of excessive resource demands.
Perhaps, but they still need to cut off bad clients, and even honest
clients in this kind of system can inadvertently hog server resources
simply by not doing GC for a while (because, for example, there isn't
memory pressure *on the client*). In these cases I'm not sure how the
server can tell which clients to cut off. It seems like it would
require automation of the sort of memory tooling you characterized as
experimental earlier in this thread.
That's fair. Driving distributed GC by observing local GC has exactly
the problem you point out: As we give one vat more memory, reducing
its memory pressure, it observes far fewer finalizations, increasing
the memory pressure on its counter-party vats. Perhaps the reason this
problem hasn't been too pressing in the past is that none of our vats
had enough local memory to cause the problem.
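As a minimal sketch of that finalization-driven scheme (all names here are hypothetical, and it uses `FinalizationRegistry`, an ES2021 API that postdates this thread): the serving vat keeps a reference-counted export table, and the client vat registers each imported proxy so that local finalization sends a `dropRef` back. The failure mode above falls out directly: if the client has ample memory, its collector rarely runs, the registry's callbacks rarely fire, and the server's export table grows no matter how honest the client is.

```javascript
// Server-side: export table mapping object ids to remote refcounts.
class ExportTable {
  constructor() { this.refCounts = new Map(); }
  exportRef(id) {
    this.refCounts.set(id, (this.refCounts.get(id) ?? 0) + 1);
  }
  dropRef(id) {
    // Called when a client reports that its proxy was collected.
    const n = (this.refCounts.get(id) ?? 0) - 1;
    if (n <= 0) this.refCounts.delete(id);   // server may now reclaim
    else this.refCounts.set(id, n);
  }
  get liveExports() { return this.refCounts.size; }
}

// Client-side: tie dropRef notifications to local GC via finalization.
function makeClientVat(server) {
  const registry = new FinalizationRegistry(id => server.dropRef(id));
  return {
    importRef(id) {
      server.exportRef(id);
      const proxy = { remoteId: id };   // method stubs would live here
      registry.register(proxy, id);     // id is reported on finalization
      return proxy;
    },
  };
}
```

Note that `server.dropRef` only runs when (and if) the client's collector gets around to finalizing the proxy, which is exactly the coupling being questioned.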
This seems like an important open question to research, since I doubt we
can constrain vats to small-local-memory _a priori_ given how much
dynamic range web content has (think Ember.js windowing into a large
database, building DOM views on the fly).
Jason's right about "leases" being predominant for distributed resource
management. These expire right in the user's face, too often. But they
are easier to model.
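To illustrate why leases are "easier to model" (a hypothetical sketch, not any particular system's API): the server grants each remote reference a lease with an expiry time; the client must renew before it lapses, and the server reclaims lapsed leases unilaterally. There is no dependence on observing the client's GC at all, but an honest client that is merely slow to renew sees its references expire in its face.

```javascript
// Lease-based resource management: server-side lease table.
class LeaseTable {
  constructor(ttlMs, now = Date.now) {
    this.ttlMs = ttlMs;
    this.now = now;               // injectable clock, for testing
    this.expiries = new Map();    // id -> expiry timestamp
  }
  grant(id) { this.expiries.set(id, this.now() + this.ttlMs); }
  renew(id) {
    // Too late: the lease already lapsed and was reclaimed.
    if (!this.expiries.has(id)) return false;
    this.expiries.set(id, this.now() + this.ttlMs);
    return true;
  }
  sweep() {
    // Server reclaims unilaterally; no client cooperation needed.
    const t = this.now();
    for (const [id, exp] of this.expiries)
      if (exp <= t) this.expiries.delete(id);
  }
  isLive(id) { return this.expiries.has(id); }
}
```

The modeling simplicity is visible here: the server's obligation is bounded by a clock it controls, at the cost of punishing clients that renew late.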
/be
_______________________________________________
es-discuss mailing list
[email protected]
https://mail.mozilla.org/listinfo/es-discuss