Hello again,

Just in case someone runs into a similar issue in the future: we
solved it by creating a second resource group 'vmstore' (using the same
storage pool), creating a volume group for 'vmstore', and then changing
the resource group of all resource definitions to 'vmstore'.  Finally,
we had to adjust the 'resourcegroup' entry in /etc/pve/storage.cfg to
'vmstore'.

In terms of commands:

# linstor resource-group create --storage-pool drbdpool --place-count 2 vmstore
# linstor volume-group create vmstore
# linstor resource-group list
# linstor rd modify --resource-group vmstore vm-100-disk-0
# # (repeat for any other resource definitions)
# linstor resource-definition list
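For illustration, the consistency check that made the original spawn fail (and that the new volume group satisfies) can be sketched roughly like this — a Python sketch with invented names, not LINSTOR's actual code:

```python
def check_spawn_sizes(volume_group_count: int, sizes: list, partial: bool = False) -> None:
    """Sketch of the size/volume-group check a resource-group spawn enforces:
    the number of provided volume sizes must match the group's volume groups,
    unless the 'partial' option is used. (Invented names, illustrative only.)"""
    if not partial and len(sizes) != volume_group_count:
        raise ValueError(
            f"Invalid count of volume sizes: the resource group has "
            f"{volume_group_count} volume groups, but {len(sizes)} sizes were provided"
        )

# 'DfltRscGrp' had 0 volume groups, so spawning with 1 size failed;
# 'vmstore' now has 1 volume group, so the same request passes:
check_spawn_sizes(1, [33554432])
```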

Then we adjusted /etc/pve/storage.cfg to contain:
...
drbd: drbdstor-1
        content rootdir,images
        controller 10.0.7.32
        controllervm 100
        resourcegroup vmstore
        preferlocal yes
        nodes pmbase1,pmbase2
...
and restarted the pvedaemon.service.
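If you have several storage entries to touch, the storage.cfg edit can also be scripted. Here is a small, hypothetical Python helper (not part of Proxmox or LINSTOR) that swaps the 'resourcegroup' value inside one 'drbd:' section of a storage.cfg-style text:

```python
def set_resource_group(cfg_text: str, storage_id: str, new_group: str) -> str:
    """Replace the 'resourcegroup' value inside the 'drbd: <storage_id>'
    section of a Proxmox storage.cfg-style text. Hypothetical helper."""
    out, in_section = [], False
    for line in cfg_text.splitlines():
        stripped = line.strip()
        # Section headers start at column 0; properties are indented.
        if stripped and not line[0].isspace():
            in_section = (stripped == f"drbd: {storage_id}")
        elif in_section and stripped.startswith("resourcegroup "):
            indent = line[:len(line) - len(line.lstrip())]
            line = f"{indent}resourcegroup {new_group}"
        out.append(line)
    return "\n".join(out)

cfg = (
    "drbd: drbdstor-1\n"
    "        content rootdir,images\n"
    "        resourcegroup DfltRscGrp\n"
)
print(set_resource_group(cfg, "drbdstor-1", "vmstore"))
```

Editing /etc/pve/storage.cfg by hand and restarting pvedaemon, as described above, is of course sufficient for a single entry.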

Now it is possible to create and delete hard disk volumes again using
the proxmox web interface with linstor-proxmox.

Cheers,

w.w.

On Wed, 15 Sep 2021, Wolfgang Walkowiak wrote:


Hello,

we recently ran into an issue that prevents us from creating
storage volumes for Proxmox VMs using the linstor-proxmox plugin.
We had been able to create them in the past, but it has now stopped working.
(We upgrade both Proxmox and the LINSTOR/DRBD software regularly.)

It appears to be related to volume groups:

Message: Could not create resource definition vm-101-disk-1 from resource group DfltRscGrp, because: [{"ret_code":-4611686018406154012,"message":"Invalid count of volume sizes to spawn resource group 'DfltRscGrp'","correction":"Either provide the correct count of volume sizes or use the 'partial' option","details":"The resource group 'DfltRscGrp' has 0 Volume groups, but only 1 sizes were provided.
....
[See below for the full message.]

The entry we have in /etc/pve/storage.cfg:

drbd: drbdstor-1
       content rootdir,images
       controller 10.0.7.32
       controllervm 100
       resourcegroup DfltRscGrp
       preferlocal yes
       nodes pmbase1,pmbase2

There is already one resource defined (created more than a year ago):
+------------------------------------------------------------------------+
| ResourceName  | Node    | Port | Usage  | Conns |    State | CreatedOn |
|========================================================================|
| vm-100-disk-0 | pmbase1 | 7000 | Unused | Ok    | UpToDate |           |
| vm-100-disk-0 | pmbase2 | 7000 | InUse  | Ok    | UpToDate |           |
+------------------------------------------------------------------------+

Today, I deleted some VMs and their volumes and just wanted to create
new ones.

Software versions:

pve-manager/6.4-13/9f411e79 (running kernel: 5.4.140-1-pve)

linstor-client                       1.10.1-1
linstor-common                       1.14.0-1
linstor-controller                   1.14.0-1
linstor-proxmox                      5.2.1-1
linstor-satellite                    1.14.0-1
python-linstor                       1.10.1-1

Please advise how we may correct the situation.

Thank you very much,

w.w.

Issue as it appears in syslog:
==============================
Sep 15 14:16:03 pmbase1 pvedaemon[2577]: unable to create VM 101 - API Return-Code: 500. Message: Could not create resource definition vm-101-disk-1 from resource group DfltRscGrp, because: [{"ret_code":-4611686018406154012,"message":"Invalid count of volume sizes to spawn resource group 'DfltRscGrp'","correction":"Either provide the correct count of volume sizes or use the 'partial' option","details":"The resource group 'DfltRscGrp' has 0 Volume groups, but only 1 sizes were provided.\nResource group: DfltRscGrp","obj_refs":{"RscDfn":"vm-101-disk-1","RscGrp":"DfltRscGrp"}}] at /usr/share/perl5/PVE/Storage/Custom/LINSTORPlugin.pm line 363. #011PVE::Storage::Custom::LINSTORPlugin::alloc_image("PVE::Storage::Custom::LINSTORPlugin", "drbdstor-1", HASH(0x55e84ee08b60), 101, "raw", undef, 33554432) called at /usr/share/perl5/PVE/Storage.pm line 896 #011eval {...} called at /usr/share/perl5/PVE/Storage.pm line 896 #011PVE::Storage::__ANON__() called at /usr/share/perl5/PVE/Cluster.pm line 621 #011eval {...} called at /usr/share/perl5/PVE/Cluster.pm line 587 #011PVE::Cluster::__ANON__("storage-drbdstor-1", undef, CODE(0x55e84e53ca18)) called at /usr/share/perl5/PVE/Cluster.pm line 666 #011PVE::Cluster::cfs_lock_storage("drbdstor-1", undef, CODE(0x55e84e53ca18)) called at /usr/share/perl5/PVE/Storage/Plugin.pm line 478 #011PVE::Storage::Plugin::cluster_lock_storage("PVE::Storage::Custom::LINSTORPlugin", "drbdstor-1", 1, undef, CODE(0x55e84e53ca18)) called at /usr/share/perl5/PVE/Storage.pm line 901 #011PVE::Storage::vdisk_alloc(HASH(0x55e84edfd200), "drbdstor-1", 101, "raw", undef, 33554432) called at /usr/share/perl5/PVE/API2/Qemu.pm line 188 #011PVE::API2::Qemu::__ANON__("scsi0", HASH(0x55e84ed8b148)) called at /usr/share/perl5/PVE/AbstractConfig.pm line 475 #011PVE::AbstractConfig::foreach_volume_full("PVE::QemuConfig", HASH(0x55e84ee34928), undef, CODE(0x55e84ef6e628)) called at /usr/share/perl5/PVE/AbstractConfig.pm line 484
#011PVE::AbstractConfig::foreach_volume("PVE::QemuConfig", HASH(0x55e84ee34928), CODE(0x55e84ef6e628)) called at /usr/share/perl5/PVE/API2/Qemu.pm line 221 #011eval {...} called at /usr/share/perl5/PVE/API2/Qemu.pm line 221 #011PVE::API2::Qemu::__ANON__(PVE::RPCEnvironment=HASH(0x55e84ed7c868), "walkowia\@pam", HASH(0x55e84ee34928), "x86_64", HASH(0x55e84edfd200), 101, undef, HASH(0x55e84ee34928), ...) called at /usr/share/perl5/PVE/API2/Qemu.pm line 707 #011eval {...} called at /usr/share/perl5/PVE/API2/Qemu.pm line 706 #011PVE::API2::Qemu::__ANON__() called at /usr/share/perl5/PVE/AbstractConfig.pm line 299 #011PVE::AbstractConfig::__ANON__() called at /usr/share/perl5/PVE/Tools.pm line 220 #011eval {...} called at /usr/share/perl5/PVE/Tools.pm line 220 #011PVE::Tools::lock_file_full("/var/lock/qemu-server/lock-101.conf", 1, 0, CODE(0x55e847afcb70)) called at /usr/share/perl5/PVE/AbstractConfig.pm line 302 #011PVE::AbstractConfig::__ANON__("PVE::QemuConfig", 101, 1, 0, CODE(0x55e848524908)) called at /usr/share/perl5/PVE/AbstractConfig.pm line 322 #011PVE::AbstractConfig::lock_config_full("PVE::QemuConfig", 101, 1, CODE(0x55e848524908)) called at /usr/share/perl5/PVE/API2/Qemu.pm line 747 #011PVE::API2::Qemu::__ANON__() called at /usr/share/perl5/PVE/API2/Qemu.pm line 777 #011eval {...} called at /usr/share/perl5/PVE/API2/Qemu.pm line 777 #011PVE::API2::Qemu::__ANON__("UPID:pmbase1:00000A11:007609F3:6141E402:qmcreate:101:walkowia"...)
called at /usr/share/perl5/PVE/RESTEnvironment.pm line 615 #011eval {...} called at /usr/share/perl5/PVE/RESTEnvironment.pm line 606 #011PVE::RESTEnvironment::fork_worker(PVE::RPCEnvironment=HASH(0x55e84ed7c868), "qmcreate", 101, "walkowia\@pam", CODE(0x55e84ee25d88)) called at /usr/share/perl5/PVE/API2/Qemu.pm line 789 #011PVE::API2::Qemu::__ANON__(HASH(0x55e84ee34928)) called at /usr/share/perl5/PVE/RESTHandler.pm line 452 #011PVE::RESTHandler::handle("PVE::API2::Qemu", HASH(0x55e84c9664a8), HASH(0x55e84ee34928)) called at /usr/share/perl5/PVE/HTTPServer.pm line 178 #011eval {...} called at /usr/share/perl5/PVE/HTTPServer.pm line 139 #011PVE::HTTPServer::rest_handler(PVE::HTTPServer=HASH(0x55e84ed7c9a0), "::ffff:127.0.0.1", "POST", "/nodes/pmbase1/qemu", HASH(0x55e84ee31338), HASH(0x55e84ef6e8c8), "extjs") called at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 877 #011eval {...} called at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 851 #011PVE::APIServer::AnyEvent::handle_api2_request(PVE::HTTPServer=HASH(0x55e84ed7c9a0), HASH(0x55e84eebd3b0), HASH(0x55e84ee31338), "POST", "/api2/extjs/nodes/pmbase1/qemu") called at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1101 #011eval {...} called at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1093 #011PVE::APIServer::AnyEvent::handle_request(PVE::HTTPServer=HASH(0x55e84ed7c9a0), HASH(0x55e84eebd3b0), HASH(0x55e84ee31338), "POST", "/api2/extjs/nodes/pmbase1/qemu") called at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1500 #011PVE::APIServer::AnyEvent::__ANON__(AnyEvent::Handle=HASH(0x55e84ee096d0), "ide2=nfsmoroni6%3Aiso%2Fdebian-11.0.0-amd64-netinst.iso%2Cmed"...)
called at /usr/lib/x86_64-linux-gnu/perl5/5.28/AnyEvent/Handle.pm line 1505 #011AnyEvent::Handle::__ANON__(AnyEvent::Handle=HASH(0x55e84ee096d0)) called at /usr/lib/x86_64-linux-gnu/perl5/5.28/AnyEvent/Handle.pm line 1315 #011AnyEvent::Handle::_drain_rbuf(AnyEvent::Handle=HASH(0x55e84ee096d0)) called at /usr/lib/x86_64-linux-gnu/perl5/5.28/AnyEvent/Handle.pm line 2015 #011AnyEvent::Handle::__ANON__(EV::IO=SCALAR(0x55e84ef81168), 1) called at /usr/lib/x86_64-linux-gnu/perl5/5.28/AnyEvent/Impl/EV.pm line 88 #011eval {...} called at /usr/lib/x86_64-linux-gnu/perl5/5.28/AnyEvent/Impl/EV.pm line 88 #011AnyEvent::CondVar::Base::_wait(AnyEvent::CondVar=HASH(0x55e84e549418)) called at /usr/lib/x86_64-linux-gnu/perl5/5.28/AnyEvent.pm line 2026 #011AnyEvent::CondVar::Base::recv(AnyEvent::CondVar=HASH(0x55e84e549418)) called at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1814 #011PVE::APIServer::AnyEvent::run(PVE::HTTPServer=HASH(0x55e84ed7c9a0)) called at /usr/share/perl5/PVE/Service/pvedaemon.pm line 52 #011PVE::Service::pvedaemon::run(PVE::Service::pvedaemon=HASH(0x55e84e46edf0)) called at /usr/share/perl5/PVE/Daemon.pm line 171 #011eval {...} called at /usr/share/perl5/PVE/Daemon.pm line 171 #011PVE::Daemon::__ANON__(PVE::Service::pvedaemon=HASH(0x55e84e46edf0)) called at /usr/share/perl5/PVE/Daemon.pm line 391 #011eval {...} called at /usr/share/perl5/PVE/Daemon.pm line 380 #011PVE::Daemon::__ANON__(PVE::Service::pvedaemon=HASH(0x55e84e46edf0), undef) called at /usr/share/perl5/PVE/Daemon.pm line 552 #011eval {...} called at /usr/share/perl5/PVE/Daemon.pm line 550 #011PVE::Daemon::start(PVE::Service::pvedaemon=HASH(0x55e84e46edf0), undef) called at /usr/share/perl5/PVE/Daemon.pm line 661 #011PVE::Daemon::__ANON__(HASH(0x55e847af1fd0)) called at /usr/share/perl5/PVE/RESTHandler.pm line 452 #011PVE::RESTHandler::handle("PVE::Service::pvedaemon", HASH(0x55e84e46f138), HASH(0x55e847af1fd0), 1) called at /usr/share/perl5/PVE/RESTHandler.pm line 864 #011eval {...}
called at /usr/share/perl5/PVE/RESTHandler.pm line 847 #011PVE::RESTHandler::cli_handler("PVE::Service::pvedaemon", "pvedaemon start", "start", ARRAY(0x55e847e28d78), ARRAY(0x55e84ed73eb8), undef, undef, undef) called at /usr/share/perl5/PVE/CLIHandler.pm line 591 #011PVE::CLIHandler::__ANON__(ARRAY(0x55e847af21f8), CODE(0x55e847e6fab8), undef) called at /usr/share/perl5/PVE/CLIHandler.pm line 668 #011PVE::CLIHandler::run_cli_handler("PVE::Service::pvedaemon", "prepare", CODE(0x55e847e6fab8)) called at /usr/bin/pvedaemon line 27


==========================================================2021-09-15==14:42==

 Dr. Wolfgang Walkowiak     Phone: +49-271-740-3889
 Fakultaet IV / Physik      Fax  : +49-271-740-3886
 Emmy Noether Campus
 Universitaet Siegen    --> wolfgang.walkow...@hep.physik.uni-siegen.de
 Walter-Flex-Str. 3         wolfgang.walkow...@gmx.net
 57068 Siegen
 Germany

=============================================================================
_______________________________________________
Star us on GITHUB: https://github.com/LINBIT
drbd-user mailing list
drbd-user@lists.linbit.com
https://lists.linbit.com/mailman/listinfo/drbd-user



==========================================================2021-09-16==11:17==

  Dr. Wolfgang Walkowiak     Phone: +49-271-740-3889
  Fakultaet IV / Physik      Fax  : +49-271-740-3886
  Emmy Noether Campus
  Universitaet Siegen    --> wolfgang.walkow...@hep.physik.uni-siegen.de
  Walter-Flex-Str. 3         wolfgang.walkow...@gmx.net
  57068 Siegen
  Germany

=============================================================================