** Description changed:

+ [ Impact ]
+ 
+ If the quota usage recorded in the database differs from the usage
+ actually consumed by users, the UI will show the wrong data.
+ 
+ "cinder-manage quota check" and "cinder-manage quota sync" both report
+ the discrepancy, and sync claims to fix it, but it does not actually
+ update the values in the database.
+ 
+ End users are therefore given incorrect information, suggesting lower
+ or higher quota usage than is real.
+ 
+ [ Test Plan ]
+ 
+ * Deploy an OpenStack cloud
+ * Create several volumes on the cloud
+ * Log in to MySQL, check the usage, and update it to a lower value
+ 
+ mysql> SELECT * FROM cinder.quota_usages where project_id='e5f0212efb334c74bde7d06c1d8ecb39';
+ +---------------------+---------------------+------------+---------+----+----------------------------------+-----------------------+--------+----------+---------------+----------------+
+ | created_at          | updated_at          | deleted_at | deleted | id | project_id                       | resource              | in_use | reserved | until_refresh | race_preventer |
+ +---------------------+---------------------+------------+---------+----+----------------------------------+-----------------------+--------+----------+---------------+----------------+
+ | 2026-05-06 13:03:04 | 2026-05-06 13:14:02 | NULL       |       0 |  2 | e5f0212efb334c74bde7d06c1d8ecb39 | gigabytes             |    340 |        0 |          NULL |              1 |
+ | 2026-05-06 13:03:04 | 2026-05-06 13:14:02 | NULL       |       0 |  4 | e5f0212efb334c74bde7d06c1d8ecb39 | gigabytes___DEFAULT__ |    340 |        0 |          NULL |              1 |
+ | 2026-05-06 13:03:04 | 2026-05-06 13:05:54 | NULL       |       0 |  1 | e5f0212efb334c74bde7d06c1d8ecb39 | volumes               |     68 |        0 |          NULL |              1 |
+ | 2026-05-06 13:03:04 | 2026-05-06 13:05:54 | NULL       |       0 |  3 | e5f0212efb334c74bde7d06c1d8ecb39 | volumes___DEFAULT__   |     68 |        0 |          NULL |              1 |
+ +---------------------+---------------------+------------+---------+----+----------------------------------+-----------------------+--------+----------+---------------+----------------+
+ 4 rows in set (0.00 sec)
+ 
+ mysql> update quota_usages set in_use=240 where project_id='e5f0212efb334c74bde7d06c1d8ecb39' and (resource="gigabytes___DEFAULT__" or resource="gigabytes");
+ Query OK, 2 rows affected (0.01 sec)
+ Rows matched: 2  Changed: 2  Warnings: 0
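To see why check/sync flags this, note that the "actual" usage is recomputed from the volumes table, while quota_usages only caches it and can drift. The comparison can be sketched roughly as below (a simplified, self-contained Python/sqlite3 illustration with made-up table contents, not Cinder's actual query or schema):

```python
import sqlite3

# Illustrative only: "actual" usage comes from the volumes table,
# quota_usages holds a cached copy that can drift out of sync.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE volumes (project_id TEXT, size INT, deleted INT);
CREATE TABLE quota_usages (project_id TEXT, resource TEXT, in_use INT);
INSERT INTO volumes VALUES ('p1', 5, 0), ('p1', 5, 0), ('p1', 10, 1);
-- cached values manually lowered, as in the test plan's UPDATE
INSERT INTO quota_usages VALUES ('p1', 'volumes', 1), ('p1', 'gigabytes', 5);
""")

# Actual usage: count and total size of non-deleted volumes.
actual_volumes, actual_gb = conn.execute(
    "SELECT COUNT(*), COALESCE(SUM(size), 0) FROM volumes "
    "WHERE project_id = 'p1' AND deleted = 0").fetchone()

for resource, actual in (("volumes", actual_volumes), ("gigabytes", actual_gb)):
    saved, = conn.execute(
        "SELECT in_use FROM quota_usages WHERE project_id = 'p1' "
        "AND resource = ?", (resource,)).fetchone()
    if saved != actual:
        # Mirrors the shape of cinder-manage's report, e.g.
        # "volumes: invalid usage saved=1 actual=2"
        print(f"{resource}: invalid usage saved={saved} actual={actual}")
```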
+ 
+ Now run the check and sync commands:
+ 
+ ubuntu@juju-434f5f-jammy-caracal-5:~$ sudo cinder-manage quota check --project-id e5f0212efb334c74bde7d06c1d8ecb39
+ 2026-05-06 13:13:47.204 99548 DEBUG oslo_db.sqlalchemy.engines [None req-d865e987-9790-4146-979e-76b0aa322f5a - - - - - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python3/dist-packages/oslo_db/sqlalchemy/engines.py:342
+ Processing quota usage for project e5f0212efb334c74bde7d06c1d8ecb39
+         gigabytes: invalid usage saved=240 actual=340
+         gigabytes___DEFAULT__: invalid usage saved=240 actual=340
+ Action successfully completed
+ ubuntu@juju-434f5f-jammy-caracal-5:~$ sudo cinder-manage quota sync --project-id e5f0212efb334c74bde7d06c1d8ecb39
+ 2026-05-06 13:14:01.833 99552 DEBUG oslo_db.sqlalchemy.engines [None req-7d7fec19-f9aa-4c69-a214-2cabebcdb1c7 - - - - - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python3/dist-packages/oslo_db/sqlalchemy/engines.py:342
+ Processing quota usage for project e5f0212efb334c74bde7d06c1d8ecb39
+         gigabytes: invalid usage saved=240 actual=340 - fixed
+         gigabytes___DEFAULT__: invalid usage saved=240 actual=340 - fixed
+ Action successfully completed
+ 
+ Despite reporting "fixed", the sync will not actually have changed the
+ values in the database.
+ 
+ Install the updated package and re-run the check and sync; the values
+ will be updated as expected. A final check should then return output
+ like the one below, showing that no changes are needed.
+ 
+ ubuntu@juju-434f5f-jammy-caracal-5:~$ sudo cinder-manage quota check --project-id e5f0212efb334c74bde7d06c1d8ecb39
+ 2026-05-06 13:37:41.891 109215 DEBUG oslo_db.sqlalchemy.engines [None req-3ac289a4-62fc-45b9-8edf-b260a7c66765 - - - - - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python3/dist-packages/oslo_db/sqlalchemy/engines.py:342
+ Processing quota usage for project e5f0212efb334c74bde7d06c1d8ecb39
+ Action successfully completed
+ 
+ 
+ [ Where problems could occur ]
+ 
+ The change could incorrectly allow writes to the database, or another
+ task could race with the sync and modify these values concurrently. I
+ don't anticipate either happening, but both are potential risks.
+ 
+ [ Other Info ]
+ 
  # Environment
  - OSP: zed
  - sample project: lms-test-test-test(76d24d96f319472ab56a76ab70a4f090)
  
  # test
  
  ```sh
  # login
  $ mysql -ucinder -p -hXXX.XXX.XXX
  $ MariaDB > use cinder;
  
  # get project cinder quota_usages 1
  $ MariaDB [cinder]> select * from quota_usages where project_id = '76d24d96f319472ab56a76ab70a4f090';
  +---------------------+---------------------+------------+---------+-----+----------------------------------+-----------------------+--------+----------+---------------+----------------+
  | created_at          | updated_at          | deleted_at | deleted | id  | project_id                       | resource              | in_use | reserved | until_refresh | race_preventer |
  +---------------------+---------------------+------------+---------+-----+----------------------------------+-----------------------+--------+----------+---------------+----------------+
  | 2024-08-07 05:10:13 | 2025-03-26 08:02:12 | NULL       |       0 | 154 | 76d24d96f319472ab56a76ab70a4f090 | volumes               |      1 |        0 |          NULL |              1 |
  | 2024-08-07 05:10:13 | 2025-03-26 08:02:12 | NULL       |       0 | 157 | 76d24d96f319472ab56a76ab70a4f090 | gigabytes             |      2 |        0 |          NULL |              1 |
  | 2024-08-07 05:10:13 | 2025-03-26 08:02:12 | NULL       |       0 | 160 | 76d24d96f319472ab56a76ab70a4f090 | volumes___DEFAULT__   |      1 |        0 |          NULL |              1 |
  | 2024-08-07 05:10:13 | 2025-03-26 08:02:12 | NULL       |       0 | 163 | 76d24d96f319472ab56a76ab70a4f090 | gigabytes___DEFAULT__ |      2 |        0 |          NULL |              1 |
  +---------------------+---------------------+------------+---------+-----+----------------------------------+-----------------------+--------+----------+---------------+----------------+
  
  # insert sample
  $ INSERT INTO volumes (
      created_at, updated_at, deleted_at, deleted, id, ec2_id,
      user_id, project_id, host, size, availability_zone, status,
      attach_status, scheduled_at, launched_at, terminated_at,
      display_name, display_description, provider_location, provider_auth,
      snapshot_id, volume_type_id, source_volid, bootable, provider_geometry,
      _name_id, encryption_key_id, migration_status, replication_status,
      replication_extended_status, replication_driver_data, consistencygroup_id,
      provider_id, multiattach, previous_status, cluster_name, group_id,
      service_uuid, shared_targets, use_quota
  )
  SELECT
      created_at, updated_at, deleted_at, deleted,
      '053164f4-7be3-4001-a481-d1c05a1b399',  -- generate a new id
      ec2_id, user_id
      , project_id
      , host, size, availability_zone, status,
      attach_status, scheduled_at, launched_at, terminated_at,
      display_name, display_description, provider_location, provider_auth,
      snapshot_id, volume_type_id, source_volid, bootable, provider_geometry,
      _name_id, encryption_key_id, migration_status, replication_status,
      replication_extended_status, replication_driver_data, consistencygroup_id,
      provider_id, multiattach, previous_status, cluster_name, group_id,
      service_uuid, shared_targets, use_quota
  FROM volumes
  WHERE deleted = 0 and status = 'available' and project_id = '76d24d96f319472ab56a76ab70a4f090'
  LIMIT 1;
  
  # verify the inserted volume row (quota_usages not changed yet)
  MariaDB [cinder]> select * from volumes where id = '053164f4-7be3-4001-a481-d1c05a1b399';
  +---------------------+---------------------+------------+---------+-------------------------------------+--------+----------------------------------+----------------------------------+--------------------------+------+-------------------+-----------+---------------+---------------------+---------------------+---------------+--------------+---------------------+--------------------------+---------------+-------------+--------------------------------------+--------------+----------+-------------------+----------+-------------------+------------------+--------------------+-----------------------------+-------------------------+---------------------+-------------+-------------+-----------------+--------------+----------+--------------------------------------+----------------+-----------+
  | created_at          | updated_at          | deleted_at | deleted | id                                  | ec2_id | user_id                          | project_id                       | host                     | size | availability_zone | status    | attach_status | scheduled_at        | launched_at         | terminated_at | display_name | display_description | provider_location        | provider_auth | snapshot_id | volume_type_id                       | source_volid | bootable | provider_geometry | _name_id | encryption_key_id | migration_status | replication_status | replication_extended_status | replication_driver_data | consistencygroup_id | provider_id | multiattach | previous_status | cluster_name | group_id | service_uuid                         | shared_targets | use_quota |
  +---------------------+---------------------+------------+---------+-------------------------------------+--------+----------------------------------+----------------------------------+--------------------------+------+-------------------+-----------+---------------+---------------------+---------------------+---------------+--------------+---------------------+--------------------------+---------------+-------------+--------------------------------------+--------------+----------+-------------------+----------+-------------------+------------------+--------------------+-----------------------------+-------------------------+---------------------+-------------+-------------+-----------------+--------------+----------+--------------------------------------+----------------+-----------+
  | 2025-03-26 08:02:12 | 2025-03-26 08:02:15 | NULL       |       0 | 053164f4-7be3-4001-a481-d1c05a1b399 | NULL   | c7a26c5ad639484db777de6905d6a90d | 76d24d96f319472ab56a76ab70a4f090 | hci-01@nfs_test#nfs_test |    2 | nova              | available | detached      | 2025-03-26 08:02:13 | 2025-03-26 08:02:15 | NULL          | test         |                     | 172.17.21.91:/mnt/cinder | NULL          | NULL        | 20ae6451-0f17-4911-aa6d-76b970e9a942 | NULL         |        0 | NULL              | NULL     | NULL              | NULL             | NULL               | NULL                        | NULL                    | NULL                | NULL        |           0 | NULL            | NULL         | NULL     | 12fd13d2-303e-41c5-82e6-96ac32c7ec2c |              0 |         1 |
  +---------------------+---------------------+------------+---------+-------------------------------------+--------+----------------------------------+----------------------------------+--------------------------+------+-------------------+-----------+---------------+---------------------+---------------------+---------------+--------------+---------------------+--------------------------+---------------+-------------+--------------------------------------+--------------+----------+-------------------+----------+-------------------+------------------+--------------------+-----------------------------+-------------------------+---------------------+-------------+-------------+-----------------+--------------+----------+--------------------------------------+----------------+-----------+
  
  # run cinder-manage quota sync (looks successful, but isn't)
  $ ./cinder-manage --debug --log-file test.log quota sync --project-id 76d24d96f319472ab56a76ab70a4f090
  Processing quota usage for project 76d24d96f319472ab56a76ab70a4f090
          volumes: invalid usage saved=1 actual=2 - fixed
          gigabytes: invalid usage saved=1 actual=4 - fixed
          volumes___DEFAULT__: invalid usage saved=1 actual=2 - fixed
          gigabytes___DEFAULT__: invalid usage saved=1 actual=4 - fixed
          volumes_rbd_volumes: invalid usage saved=1 actual=0 - fixed
          gigabytes_rbd_volumes: invalid usage saved=1 actual=0 - fixed
  Action successfully completed
  
  # check again after the sync (still not changed!)
  MariaDB [cinder]> select * from volumes where id = '053164f4-7be3-4001-a481-d1c05a1b399';
  +---------------------+---------------------+------------+---------+-------------------------------------+--------+----------------------------------+----------------------------------+--------------------------+------+-------------------+-----------+---------------+---------------------+---------------------+---------------+--------------+---------------------+--------------------------+---------------+-------------+--------------------------------------+--------------+----------+-------------------+----------+-------------------+------------------+--------------------+-----------------------------+-------------------------+---------------------+-------------+-------------+-----------------+--------------+----------+--------------------------------------+----------------+-----------+
  | created_at          | updated_at          | deleted_at | deleted | id                                  | ec2_id | user_id                          | project_id                       | host                     | size | availability_zone | status    | attach_status | scheduled_at        | launched_at         | terminated_at | display_name | display_description | provider_location        | provider_auth | snapshot_id | volume_type_id                       | source_volid | bootable | provider_geometry | _name_id | encryption_key_id | migration_status | replication_status | replication_extended_status | replication_driver_data | consistencygroup_id | provider_id | multiattach | previous_status | cluster_name | group_id | service_uuid                         | shared_targets | use_quota |
  +---------------------+---------------------+------------+---------+-------------------------------------+--------+----------------------------------+----------------------------------+--------------------------+------+-------------------+-----------+---------------+---------------------+---------------------+---------------+--------------+---------------------+--------------------------+---------------+-------------+--------------------------------------+--------------+----------+-------------------+----------+-------------------+------------------+--------------------+-----------------------------+-------------------------+---------------------+-------------+-------------+-----------------+--------------+----------+--------------------------------------+----------------+-----------+
  | 2025-03-26 08:02:12 | 2025-03-26 08:02:15 | NULL       |       0 | 053164f4-7be3-4001-a481-d1c05a1b399 | NULL   | c7a26c5ad639484db777de6905d6a90d | 76d24d96f319472ab56a76ab70a4f090 | hci-01@nfs_test#nfs_test |    2 | nova              | available | detached      | 2025-03-26 08:02:13 | 2025-03-26 08:02:15 | NULL          | test         |                     | 172.17.21.91:/mnt/cinder | NULL          | NULL        | 20ae6451-0f17-4911-aa6d-76b970e9a942 | NULL         |        0 | NULL              | NULL     | NULL              | NULL             | NULL               | NULL                        | NULL                    | NULL                | NULL        |           0 | NULL            | NULL         | NULL     | 12fd13d2-303e-41c5-82e6-96ac32c7ec2c |              0 |         1 |
  +---------------------+---------------------+------------+---------+-------------------------------------+--------+----------------------------------+----------------------------------+--------------------------+------+-------------------+-----------+---------------+---------------------+---------------------+---------------+--------------+---------------------+--------------------------+---------------+-------------+--------------------------------------+--------------+----------+-------------------+----------+-------------------+------------------+--------------------+-----------------------------+-------------------------+---------------------+-------------+-------------+-----------------+--------------+----------+--------------------------------------+----------------+-----------+
  ```
  
  # find bug
  - [git > cinder > cinder/cmd/manage.py#L512](https://opendev.org/openstack/cinder/src/commit/8d17d5b8b825f171e341c8bff07d8cd0d6162024/cinder/cmd/manage.py#L512)
  - "cinder-manage quota check" and "cinder-manage quota sync" use the "@db_api.main_context_manager.reader" decorator.
  - The "@db_api.main_context_manager.reader" context does not perform a commit, so the corrected values are never written to the database.
  
  # test (fix source and test)
  ```sh
  # fix source
  $ cat /openstack/venvs/cinder-XX.X.X/lib/python3.10/site-packages/cinder/cmd/manage.py
      @db_api.main_context_manager.writer #@db_api.main_context_manager.reader
      def _check_project_sync(self, context, project, do_fix, resources):
  
  # add a __main__ entry point so manage.py can be run directly with python
  if __name__ == '__main__':
      import sys
      sys.argv = [
          'manage.py',
          '--debug',
          '--log-file',
          'test.log',
          'quota',
          'sync',
          '--project-id', '76d24d96f319472ab56a76ab70a4f090'
      ]
  
  $ /openstack/venvs/cinder-XX.X.X/bin# . activate
  (cinder-27.1.0) root@hci-01:/openstack/venvs/cinder-XX.X.X/lib/python3.10/site-packages/cinder/cmd# python3 manage.py
  
  # get project cinder quota_usages 4 (finally changed!)
  MariaDB [cinder]> select * from quota_usages where project_id = '76d24d96f319472ab56a76ab70a4f090';
  +---------------------+---------------------+------------+---------+-----+----------------------------------+-----------------------+--------+----------+---------------+----------------+
  | created_at          | updated_at          | deleted_at | deleted | id  | project_id                       | resource              | in_use | reserved | until_refresh | race_preventer |
  +---------------------+---------------------+------------+---------+-----+----------------------------------+-----------------------+--------+----------+---------------+----------------+
  | 2024-08-07 05:10:13 | 2025-03-26 09:58:45 | NULL       |       0 | 154 | 76d24d96f319472ab56a76ab70a4f090 | volumes               |      2 |        0 |          NULL |              1 |
  | 2024-08-07 05:10:13 | 2025-03-26 09:58:45 | NULL       |       0 | 157 | 76d24d96f319472ab56a76ab70a4f090 | gigabytes             |      4 |        0 |          NULL |              1 |
  | 2024-08-07 05:10:13 | 2025-03-26 09:58:45 | NULL       |       0 | 160 | 76d24d96f319472ab56a76ab70a4f090 | volumes___DEFAULT__   |      2 |        0 |          NULL |              1 |
  | 2024-08-07 05:10:13 | 2025-03-26 09:58:45 | NULL       |       0 | 163 | 76d24d96f319472ab56a76ab70a4f090 | gigabytes___DEFAULT__ |      4 |        0 |          NULL |              1 |
  | 2025-03-26 08:15:39 | 2025-03-26 09:58:45 | NULL       |       0 | 213 | 76d24d96f319472ab56a76ab70a4f090 | volumes_rbd_volumes   |      0 |        0 |          NULL |              1 |
  | 2025-03-26 08:15:39 | 2025-03-26 09:58:45 | NULL       |       0 | 215 | 76d24d96f319472ab56a76ab70a4f090 | gigabytes_rbd_volumes |      0 |        0 |          NULL |              1 |
  +---------------------+---------------------+------------+---------+-----+----------------------------------+-----------------------+--------+----------+---------------+----------------+
  6 rows in set (0.001 sec)
  
  ```

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2104322

Title:
  cinder-manager quota sync is not working

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/2104322/+subscriptions

