liebana wrote:
> @sharoonthomas
>
> If I understand well, I think you are right and migration can be "easier",
> following the example you provided. But that shows a simple renaming of a
> var (packing renamed into shipment); we have to handle a whole migration
> carefully.
>
> I think we have to study it deeply. I also know that Akretion is working
> on a simple tool to truly know the diffs between a v5 and a v6 instance,
> which would help a lot.
A simple example that shows why Tryton's migration system is not a general solution would be something like this:

- Model M has field F of type float in module 1 in version 5.0, with values ranging from 0.0 to 1.0.
- Module 2 inherits the same field F and turns it into a VARCHAR, also in version 5.0.
- In version 6.0, module 1 turns field F into an integer, and during migration its value must be multiplied by 100 (they decided that a 0 to 100 range is more intuitive than 0.0 to 1.0 and that decimals are no longer needed).

The migration process of module 1 will try to convert the existing column, which is a VARCHAR because module 2 is installed, into an integer, and the migration will fail. The only way to avoid that would be for the inheriting module to override the auto_init() function and reimplement all the migration code. (A rough sketch of the two conflicting declarations is attached at the end of this post.)

The only way I see that a per-module migration process could more or less work would be if each field had its own migration function. This way, the system would not try to migrate fields that are later inherited by another module. I also agree that making schema changes directly is exactly what we want to avoid; otherwise, why do we have an ORM?

**Brainstorming Mode**

I haven't thought much about it, but something comes to mind that could be the start of a discussion towards a solution. Let's say that fields, as mentioned, can optionally have a migration function. Fields that do not have such a function are migrated as-is. The rest of the fields could have something like this:

```python
from osv import osv, fields


class res_partner(osv.osv):
    _inherit = 'res.partner'

    def _migrate_name(self, cr, uid, old_tables, context=None):
        # During the migration process the OpenERP framework would rename all
        # existing (previous version) tables and create the new tables with
        # the correct names. The mapping to the old table names would be
        # passed as a parameter to the migration function of each field.
        # This would also allow content from model A to be moved to model B
        # in a newer version.
        old_table_name = old_tables[self._table]
        cr.execute("SELECT id, name FROM %s" % old_table_name)
        for record in cr.fetchall():
            try:
                name_int = int(record[1])
            except (TypeError, ValueError):
                name_int = False
            self.write(cr, uid, [record[0]], {
                'name': name_int,
            }, context=context)

    _columns = {
        'name': fields.integer('Name', migration=_migrate_name),
    }

res_partner()
```

What do you think? Maybe this could solve many cases.

I'd also vote for having a general '_migrate()' function on the model for some rare cases. Such a general '_migrate()' function would be needed anyway: the standard behaviour of OpenERP during a migration would be to call this _migrate() function for each model. Its implementation in osv.orm would be to create a record in the new table for each record of the old one. If all fields in the model have a migration function, _migrate() would just create the records with only 'id' filled in. This means that all NOT NULL constraints should be applied at the very end of the migration process.

The system would also need a way to know in which order fields should be processed (for example, to migrate 'total' you may need 'base_amount' to be migrated first). The same could be true between models. A rough sketch of such a _migrate() is also attached at the end of this post. And probably more complex needs will arise from the discussion.

More ideas?
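To make the conflict in the example above concrete, here is roughly what the two declarations could look like in the current API. This is only an illustration; the module and model names (module_1, module_2, model.m) are invented.

```python
# module_1: version 5.0 declares F as a float holding a 0.0 - 1.0 ratio.
from osv import osv, fields


class model_m(osv.osv):
    _name = 'model.m'
    _columns = {
        'f': fields.float('F'),
    }

model_m()


# module_2: installed on top of module_1, it redefines the same field as a
# character field, so the actual database column ends up being a VARCHAR.
class model_m_extended(osv.osv):
    _inherit = 'model.m'
    _columns = {
        'f': fields.char('F', size=64),
    }

model_m_extended()
```

When module 1's 6.0 code later declares 'f' as an integer and its migration tries to multiply the column by 100, the column it actually finds is the VARCHAR created by module 2, and the per-module migration breaks.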
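And here is a very rough sketch of what the generic '_migrate()' in osv.orm could look like under this proposal. The 'migration' and 'depends' field attributes and the 'old_tables' argument are invented names used only for this discussion; they are not an existing OpenERP API.

```python
def _migrate(self, cr, uid, old_tables, context=None):
    # Default model-level migration: create one row in the new table for
    # each row of the old one, with only 'id' filled in. NOT NULL
    # constraints would have to be enforced only at the very end of the
    # whole migration process.
    old_table = old_tables[self._table]
    cr.execute("SELECT id FROM %s" % old_table)
    for (old_id,) in cr.fetchall():
        cr.execute("INSERT INTO %s (id) VALUES (%%s)" % self._table,
                   (old_id,))

    # Fields without a migration function would simply be copied as-is
    # (not shown here). Fields with one are processed in dependency order,
    # so that e.g. 'total' runs only after 'base_amount'.
    pending = [(name, column) for name, column in self._columns.items()
               if getattr(column, 'migration', None)]
    done = set()
    while pending:
        progressed = False
        for name, column in list(pending):
            depends = getattr(column, 'depends', ())
            if all(dep in done for dep in depends):
                # Call the per-field function, e.g. _migrate_name() above.
                column.migration(self, cr, uid, old_tables, context)
                done.add(name)
                pending.remove((name, column))
                progressed = True
        if not progressed:
            raise Exception("Circular field dependencies in %s" % self._name)
```

The dependency loop is just one way to express the ordering problem mentioned above; ordering between models would need something similar at a higher level.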
------------------------
Albert Cervera i Areny
http://www.NaN-tic.com
OpenERP Partners
http://twitter.com/albertnan