Hi,
> (a) the implementation of an automatism is not *quite* what they need/want
> (b) they want to be able to manually select (or more likely override)
> whether a file can be archived
Well, behind the scenes we need a way to move entries to / from cold storage
in any case. But in my view, that's low-level API, and I wouldn't expose it
first; instead, I'd concentrate on implementing an automatic solution that has
no API (except for some config options). If it later turns out the low-level
API is needed, it can still be added. I wouldn't introduce it as public API
right from the start just because we _think_ it _might_ be needed at some
point later, because having to maintain the API is expensive.
What I would introduce right from the start is a way to measure which binaries
were read recently, and how frequently. But even for that, no public API is
needed at first (except maybe for logging some statistics).
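Such read statistics could be collected by a very small internal component,
with no public API at all. A minimal sketch of what I mean (all class and
method names here are invented for illustration; none of this exists yet):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

/**
 * Hypothetical internal helper that counts how often, and how recently,
 * each binary is read. Purely illustrative; names are made up.
 */
class BinaryReadStats {

    private static class Entry {
        final LongAdder count = new LongAdder();
        volatile long lastReadMillis;
    }

    private final Map<String, Entry> stats = new ConcurrentHashMap<>();

    /** Record one read of the binary with the given id. */
    void recordRead(String binaryId) {
        Entry e = stats.computeIfAbsent(binaryId, id -> new Entry());
        e.count.increment();
        e.lastReadMillis = System.currentTimeMillis();
    }

    /** How often the binary was read so far (0 if never). */
    long readCount(String binaryId) {
        Entry e = stats.get(binaryId);
        return e == null ? 0 : e.count.sum();
    }

    /** Whether the binary was read within the last "millis" milliseconds. */
    boolean readWithin(String binaryId, long millis) {
        Entry e = stats.get(binaryId);
        return e != null
                && System.currentTimeMillis() - e.lastReadMillis <= millis;
    }
}
```

An automatic archiving policy could then simply periodically scan those
counters (or a log of them) to decide what is cold, without any of this ever
becoming public API.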
> Thus I suggest to come up with a pluggable "strategy" interface
That is too abstract for me. I think it is very important to have a concrete
behaviour and API; otherwise, discussing it is not possible.
> A much more important and difficult question to answer IMHO is how to deal
> with the slow retrieval of archived content.
My concrete suggestion would be, as I wrote: if the binary is in cold storage,
throw an exception saying so, and start loading the binary into hot storage. A
few minutes later, re-reading will not throw an exception, as the binary is
then in hot storage. So no API change is needed, except for a new exception
class (a subclass of RepositoryException). An application can catch those
exceptions and deal with them in a special way (for example, report that the
binary is not currently available). Possibly the new exception could have a
method "doNotMoveBinary()" for cases where moving is not needed, but by
default the binary should be moved, so that old applications don't have to be
changed at all (backward compatibility).
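To make that concrete, here is a sketch of what such an exception class could
look like. The class name is invented; in the repository it would extend
RepositoryException, but the sketch extends plain Exception only so that it is
self-contained:

```java
/**
 * Hypothetical exception thrown when a binary is currently in cold storage.
 * In the real repository this would extend RepositoryException; plain
 * Exception is used here only to keep the sketch self-contained.
 */
class BinaryInColdStorageException extends Exception {

    private boolean moveBinary = true;

    BinaryInColdStorageException(String binaryId) {
        super("Binary is currently in cold storage: " + binaryId);
    }

    /**
     * Ask the repository not to move the binary to hot storage.
     * By default the binary is moved, so that old applications
     * that don't know this exception need no changes at all.
     */
    void doNotMoveBinary() {
        moveBinary = false;
    }

    /** Whether the repository should move the binary to hot storage. */
    boolean isMoveBinaryRequested() {
        return moveBinary;
    }
}
```

An application that only wants to probe availability would catch the exception
and call doNotMoveBinary(); an old application simply wouldn't catch it, and
the binary would be moved to hot storage as a side effect of the failed read.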
What is your concrete suggestion?
Regards,
Thomas