[snip]

> If we assume that all of our tables are filled up with zeroes for those
> deleted columns, because that's the default, this **wipes the whole table
> clean**.
>
> How do the tests pass? Well, the tests are in test_db_api->ArchiveTestCase,
> and actually, they don't. But they don't fail every time, because the test
> suite here runs with a database that is almost completely empty anyway, so
> the broken archival routine doesn't find many rows to blow away except for
> the rows in "instance_types", which it only finds sometimes: the tests run
> it with only a small number of things to delete, and the order of the
> tables is non-deterministic.
>
> I've posted the bug report at https://bugs.launchpad.net/nova/+bug/1431571,
> where I started out not knowing much about how this worked except that my
> tests were failing, and slowly stumbled my way to this conclusion. A patch
> is at https://review.openstack.org/#/c/164009/, where we look at the
> actual Python-side default. However, I'd recommend that we just hardcode
> the zero here, since that's how our soft-delete columns work.
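To make the failure mode concrete, here is a minimal sketch (not Nova's actual archival code; table and column names are illustrative) of the soft-delete convention described above: live rows keep `deleted = 0` (the column default), and soft-deleted rows get a non-zero `deleted` value. An archive routine that removes rows matching the *default* value of `deleted` therefore deletes every live row instead of the soft-deleted ones:

```python
# Hypothetical reproduction of the bug: archiving rows WHERE deleted = 0
# (the column default) removes live rows; the intended predicate is
# deleted != 0, which matches only soft-deleted rows.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE instance_types "
    "(id INTEGER PRIMARY KEY, name TEXT, deleted INTEGER DEFAULT 0)"
)
# Rows 1 and 2 are live (deleted = 0); row 3 was soft-deleted
# (by convention, deleted is set to the row's id).
conn.executemany(
    "INSERT INTO instance_types (id, name, deleted) VALUES (?, ?, ?)",
    [(1, "tiny", 0), (2, "small", 0), (3, "old", 3)],
)

def archive_rows(conn, match_default):
    """Delete 'archived' rows and return how many were removed.

    match_default=True mimics the broken routine: it matches rows whose
    deleted column equals the default (0), i.e. every LIVE row.
    """
    if match_default:
        cur = conn.execute("DELETE FROM instance_types WHERE deleted = 0")
    else:
        cur = conn.execute("DELETE FROM instance_types WHERE deleted != 0")
    return cur.rowcount

# The intended behavior archives only the soft-deleted row:
removed = archive_rows(conn, match_default=False)       # removes row 3 only
remaining = conn.execute(
    "SELECT COUNT(*) FROM instance_types"
).fetchone()[0]                                         # the two live rows
```

With `match_default=True` on a fresh copy of the table, all live rows would be removed instead, which is the "wipes the whole table clean" behavior quoted above.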
Hi Mike,

Thanks for the investigation. I was wondering when that behavior was
introduced, and it seems that
http://git.openstack.org/cgit/openstack/nova/commit/?id=ecf74d4c0a5a8a4290ecac048fc437dafe3d40fc
is the likely culprit, which would mean that only Kilo is affected. Can you
confirm?

Thanks,
--
Thomas

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: [email protected]?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
