Phillip J. Eby wrote:
At 03:01 PM 2/10/2006 -0800, Katie Capps Parlante wrote:
(8) There should be some path for extension parcels to have data
migrated along with the ootb schemas, but it can require extra work
from the parcel writers to make it happen.

I presume this means that it's okay for the migration to take place
via the same export/import process, given a suitable format?

Yes, assuming we choose an export/import process. I should say that I
don't want to jump the gun and presume that export/import is the way to
go, or that we should give up on smooth upgrades. I wanted to be clear
about the minimum requirements from the app perspective.

I'm noticing a slight mismatch here between app and platform needs;
export/import makes sense for slow-moving app development, in that
there are milestones at which it's reasonable to dump and reload
your repository.  However, this won't be particularly practical on
"internet time" for extension parcels being rapidly and iteratively
developed.  For that matter, it's not necessarily all that practical
for us while *we're* developing, except that we routinely create new
repositories.  Smooth upgrades would help our development as well, so
it's not as though the goals are in conflict; the platform just has
more stringent technical requirements (i.e. supporting piecemeal
upgrades and rollbacks) than the app as a whole.

Agreed. My list of requirements was meant to frame the app needs, and
didn't quite capture the platform needs. We could set explicit platform
goals.

* How will we ensure (procedurally or otherwise) that each
version of Chandler will successfully upgrade from older
versions?

I'm not sure I fully understand this question. Is this question
about testing?

Yes.  What isn't tested, probably doesn't work.  :)

Ok. Yes, we should have a test plan. Yes, this will require time/work
from more people than just you the driver. Yes, we will allocate time in
the schedule for that work, once we have a proposal/plan. (In other
words, I am committed to making sure that we have resources to support
this, organizationally).

* What support will we provide for parcel developers to ensure
that *their* schemas upgrade safely, as well as the Chandler "out
of the box" schemas?

I think that we need to provide hooks so that parcel developers can
hook into the same UI. If we ask the user to do a manual export/import
step, the parcel developer should be able to get data into that same
exported file (or directory, or whatever). If we have an automated
step at startup, the parcel developer should be able to take advantage
of that same automated step.

I think it reasonable to ask the parcel developer to do some work,
define an external format, etc.
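To make the "same hooks, same exported file" idea concrete, here is a
minimal sketch of what such a registry might look like. Every name here
(register_export_hook, export_all, "example.notes") is invented for
illustration; this is not actual Chandler API, just one way the parcel
hook could plug into a shared export document:

```python
# Hypothetical parcel export/import hook registry; all names are
# invented for illustration, not real Chandler APIs.

EXPORT_HOOKS = {}

def register_export_hook(parcel_name, exporter, importer):
    """Let an extension parcel contribute its data to the shared export."""
    EXPORT_HOOKS[parcel_name] = (exporter, importer)

def export_all(items):
    """Build one export document; each parcel serializes its own slice."""
    doc = {}
    for name, (exporter, _importer) in EXPORT_HOOKS.items():
        doc[name] = exporter(items)
    return doc

def import_all(doc):
    """Replay the export document through each parcel's importer."""
    restored = {}
    for name, payload in doc.items():
        if name in EXPORT_HOOKS:
            _exporter, importer = EXPORT_HOOKS[name]
            restored[name] = importer(payload)
    return restored

# Example: a toy parcel that stores simple titled notes.
register_export_hook(
    "example.notes",
    exporter=lambda items: [{"title": t} for t in items],
    importer=lambda payload: [d["title"] for d in payload],
)
```

The point is only that ootb schemas and extension parcels would go
through the identical registry, so a manual export/import step picks up
extension data for free.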

Me too, but unfortunately it's not that simple.  If upgrading schema
means dumping and reloading, then it means losing anything that
*doesn't* have an external format, as well as being time-consuming if
you've got a lot of data.  That means that for extension parcels, a
dump-and-reload approach doesn't seem practical.  But if it's okay
for it to be like this in 0.7, then I guess we're okay.

Right, I guess I was thinking...

If the parcel developer wants their parcel data to be shared (ultimately
by multiple versions of chandler), there might be some burden on them to
define the external format. This burden should be similar to the burden
we face when developing our ootb schemas. I'm totally handwaving here
because we don't yet have a proposal for how to do this and I don't know
how flexible this can/will be. I'm not imagining that the parcel
developer needs to go define some equivalent of ics for their parcel;
I'm assuming that we come up with some sort of more general scheme.

If we support more seamless upgrade by providing hooks for parcels to
munge data at startup (or whatever), then the extension parcels should
get the same hooks and the parcel developer might be required to do some
work to munge the data. Yes, it would be nicer to do something more
automatic, but that is not required for 0.7. In other words, I don't
think external parcel upgrade is the high-priority goal for 0.7. (We
have more important fish to fry: getting the domain model right,
getting code into the right modules, etc.)

Pulling from a different email in this thread:
I don't know.  I don't know what kind of changes we're going to have.
The biggest question of all is, when does this discipline begin?  If
we don't need to support upgrades before the release of 0.7, then
there's a lot less to be done, and it's not certain that we need to
provide any significant evolution infrastructure until 0.8.  For one
thing, we can try to complete major moves before then, and we can
make an effort to document and prepare for the freezing of key
schemas.

If we can meet the application needs with export/import, then yes, we
probably don't need to support more seamless upgrades before the 0.7
release. It's probably more important to focus on getting the domain
model and other APIs right (complete major moves), and set the
'seamless' goal for the 0.8 timeframe.

This is a bit different from our original thinking so I'm interested in other opinions here. :)

* When an upgrade has to be reverted, what guarantees should we
give the user for being able to revert safely without losing
data?

Is it reasonable to assume that a backup externalization is good
enough?

Only if that process as a whole is good enough for 0.7.  If so, then
yes, we can attempt to make upgrades an all-or-nothing process.
What we aren't necessarily going to be able to do is support
seamless download-and-run extension parcel upgrades that involve
schema changes.  But I can live with that; it just means that the
parcel upgrade process is going to be more complex.

Yes, that is maybe one of the key decisions here. It is probably good
enough to support the end user. If it would help, we could have another
round of email discussion framing platform requirements, keeping in
mind that we have some flexibility on what we slot into 0.7 vs. 0.8.

What I'm thinking I'll do at this point is add support for the easy
stuff now, and then we'll have to come up with some kind of
transactional wrapper to do code and data upgrades at once.  How that
will work (and even how we want it to) is kind of underspecified at
the moment.  :(
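For what it's worth, the all-or-nothing shape of that transactional
wrapper could look something like this sketch: snapshot, attempt the
upgrade, restore on failure. It's deliberately simplified (a dict
standing in for the repository; real code would snapshot on disk):

```python
# Sketch of an all-or-nothing upgrade wrapper: take a backup, run the
# upgrade, and restore the backup if anything fails, so a reverted
# upgrade never loses user data. Illustrative only.

import copy

def transactional_upgrade(store, upgrade_fn):
    backup = copy.deepcopy(store)   # the "backup externalization"
    try:
        upgrade_fn(store)
    except Exception:
        store.clear()
        store.update(backup)        # revert: data is as it was before
        raise
    return store
```

The hard, still-unspecified part is doing this for code and data at
once; the snapshot/restore discipline above only covers the data half.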

Sounds reasonable.

Cheers,
Katie

_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

Open Source Applications Foundation "Dev" mailing list
http://lists.osafoundation.org/mailman/listinfo/dev
