My 2 cents:

I understand that people want backwards compatibility, so they
can read even 10-year-old data from files.

Now, that could mean a lot of effort.
For instance, if your binary model is spread across 30 classes, then
supporting a second version of it means another 30 classes;
three versions means 90 classes, and so on.


So I am thinking that the least painful way could be to create
migration or "upgrade" scripts which people can use
to update their data from the old format to the new one, and put them
in a separate package, so as not to burden the main development line
with cruft.

It would then work like a patch system.
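
To illustrate, here is a rough sketch (in Python, with made-up names,
not a concrete API) of what a single step shipped by such a package
could look like:

    # Hypothetical shape of one migration step; e.g. the
    # "migrating 1.6 -> 1.7" package would ship one of these.
    class Migration_1_6_to_1_7:
        source_version = "1.6"
        target_version = "1.7"

        def upgrade(self, data):
            # ... rewrite old-format fields into the new format here ...
            data["format_version"] = self.target_version
            return data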

This is how I imagine it:
suppose you have version 2.0 installed in your system, but want
to load version 1.6 data.
You load the additional package(s) titled like:
migrating 1.6 -> 1.7
migrating 1.7 -> 1.8
migrating 1.8 -> 2.0

and then run the migration scripts on your data until you reach the
current version.
Once you are done, you can unload this stuff and be happy.
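
The loader side would then just chain whatever steps are loaded until
the data reaches the installed version. Again a rough sketch with
made-up names, assuming each step has the source_version / upgrade
shape from above:

    # Hypothetical driver: chain loaded migration steps until the
    # data reaches the installed version (a missing step in the
    # chain shows up as a KeyError).
    def migrate(data, installed_version, loaded_steps):
        # Index each step by the version it upgrades from.
        by_source = {s.source_version: s for s in loaded_steps}
        while data["format_version"] != installed_version:
            data = by_source[data["format_version"]].upgrade(data)
        return data

    # e.g. migrate(old_data, "2.0",
    #              [Migration_1_6_to_1_7(), Migration_1_7_to_1_8(), ...])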

And I think it is also cheaper for developers: it takes effort to
write a new migration procedure every time you change the format,
but you don't have to keep everything in one place or bend the model
to be compatible with everything.
The migration scripts do their work, while the core model and
implementation stay clean of cruft.

-- 
Best regards,
Igor Stasenko.
