On 1/4/21 1:42 PM, Andrija Panic wrote:
> Side question, has anyone (Wido, Gabriel) ever tested Ceph 15.x to work
> with any CloudStack version so far?
> 

Yes. Running it in production on Ubuntu 18.04 hypervisors and Ceph servers.

This is with CloudStack 4.13.1

Wido

> 
> On Mon, 4 Jan 2021 at 13:13, Wido den Hollander <w...@widodh.nl> wrote:
> 
>>
>>
>> On 1/4/21 12:25 PM, li jerry wrote:
>>> Hi Rohit and Wido
>>>
>>>
>>> Following the documentation, I re-tested adding RBD primary storage with
>>> monitor: 10.100.250.14:6789
>>>
>>> (createStoragePool API:
>>>   url: rbd://hyperx:AQAywfFf8jCiIxAAbnDBjX1QQAO9Sj22kUBh7g==@10.100.250.14:6789/rbd
>>> )
>>> The primary storage is added successfully.
>>>
>>> But now there are new problems.
>>>
>>> An error occurs when copying the template from secondary to primary
>>> storage (RBD); this happens while creating the system VMs (SSVM/CPVM).
>>>
>>> Here is the error message:
>>> 2021-01-04 11:20:26,302 DEBUG [kvm.storage.LibvirtStorageAdaptor]
>> (agentRequest-Handler-2:null) (logid:587f5b34) copyPhysicalDisk: disk
>> size:(356.96 MB) 374304768, virtualsize:(2.44 GB) 2621440000 format:qcow2
>>> 2021-01-04 11:20:26,302 DEBUG [kvm.storage.LibvirtStorageAdaptor]
>> (agentRequest-Handler-2:null) (logid:587f5b34) The source image is not RBD,
>> but the destination is. We will convert into RBD format 2
>>> 2021-01-04 11:20:26,302 DEBUG [kvm.storage.LibvirtStorageAdaptor]
>> (agentRequest-Handler-2:null) (logid:587f5b34) Starting copy from source
>> image
>> /mnt/466f03d4-9dfe-3af4-a042-33a00dae0e97/40165b83-896c-4693-abe7-9fd96b40ce9a.qcow2
>> to RBD image rbd/40165b83-896c-4693-abe7-9fd96b40ce9a
>>> 2021-01-04 11:20:26,302 DEBUG [utils.script.Script]
>> (agentRequest-Handler-2:null) (logid:587f5b34) Executing: qemu-img convert
>> -O raw
>> /mnt/466f03d4-9dfe-3af4-a042-33a00dae0e97/40165b83-896c-4693-abe7-9fd96b40ce9a.qcow2
>> rbd:rbd/40165b83-896c-4693-abe7-9fd96b40ce9a:mon_host=10.100.250.14\\:6789:auth_supported=cephx:id=hyperx:key=AQAywfFf8jCiIxAAbnDBjX1QQAO9Sj22kUBh7g==:rbd_default_format=2:client_mount_timeout=30
>>> 2021-01-04 11:20:26,303 DEBUG [utils.script.Script]
>> (agentRequest-Handler-2:null) (logid:587f5b34) Executing while with timeout
>> : 10800000
>>> 2021-01-04 11:20:26,383 DEBUG [utils.script.Script]
>> (agentRequest-Handler-2:null) (logid:587f5b34) Exit value is 1
>>> 2021-01-04 11:20:26,383 DEBUG [utils.script.Script]
>> (agentRequest-Handler-2:null) (logid:587f5b34) qemu-img:
>> rbd:rbd/40165b83-896c-4693-abe7-9fd96b40ce9a:mon_host=10.100.250.14\\:6789:auth_supported=cephx:id=hyperx:key=AQAywfFf8jCiIxAAbnDBjX1QQAO9Sj22kUBh7g==:rbd_default_format=2:client_mount_timeout=30:
>> error while converting raw: invalid conf option 6789:auth_supported: No
>> such file or directory
>>
>> There seems to be a double-escape here. That might be the culprit.
>>
>> 'mon_host=10.100.250.14\:6789:auth_supported=cephx:id=hyperx'
>>
>> It might be that it needs to be that string.
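>>
>> For reference (I haven't verified this against the agent code), the
>> destination string passed to qemu-img would then be roughly:
>>
>>   rbd:rbd/40165b83-896c-4693-abe7-9fd96b40ce9a:mon_host=10.100.250.14\:6789:auth_supported=cephx:id=hyperx:key=AQAywfFf8jCiIxAAbnDBjX1QQAO9Sj22kUBh7g==:rbd_default_format=2:client_mount_timeout=30
>>
>> so that librbd reads 'mon_host=10.100.250.14:6789' as one option instead
>> of treating '6789:auth_supported' as an option name.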
>>
>> Wido
>>
>>> 2021-01-04 11:20:26,384 ERROR [kvm.storage.LibvirtStorageAdaptor]
>> (agentRequest-Handler-2:null) (logid:587f5b34) Failed to convert from
>> /mnt/466f03d4-9dfe-3af4-a042-33a00dae0e97/40165b83-896c-4693-abe7-9fd96b40ce9a.qcow2
>> to
>> rbd:rbd/40165b83-896c-4693-abe7-9fd96b40ce9a:mon_host=10.100.250.14\\:6789:auth_supported=cephx:id=hyperx:key=AQAywfFf8jCiIxAAbnDBjX1QQAO9Sj22kUBh7g==:rbd_default_format=2:client_mount_timeout=30
>> the error was: qemu-img:
>> rbd:rbd/40165b83-896c-4693-abe7-9fd96b40ce9a:mon_host=10.100.250.14\\:6789:auth_supported=cephx:id=hyperx:key=AQAywfFf8jCiIxAAbnDBjX1QQAO9Sj22kUBh7g==:rbd_default_format=2:client_mount_timeout=30:
>> error while converting raw: invalid conf option 6789:auth_supported: No
>> such file or directory
>>> 2021-01-04 11:20:26,384 INFO  [kvm.storage.LibvirtStorageAdaptor]
>> (agentRequest-Handler-2:null) (logid:587f5b34) Attempting to remove storage
>> pool 466f03d4-9dfe-3af4-a042-33a00dae0e97 from libvirt
>>> 2021-01-04 11:20:26,384 DEBUG [kvm.resource.LibvirtConnection]
>> (agentRequest-Handler-2:null) (logid:587f5b34) Looking for libvirtd
>> connection at: qemu:///system
>>>
>>>
>>> -----Original Message-----
>>> From: Rohit Yadav <rohit.ya...@shapeblue.com>
>>> Sent: Monday, January 4, 2021 19:09
>>> To: Wido den Hollander <w...@widodh.nl>; dev@cloudstack.apache.org;
>> us...@cloudstack.apache.org; Gabriel Beims Bräscher <gabr...@pcextreme.nl>;
>> Wei ZHOU <ustcweiz...@gmail.com>; Daan Hoogland <
>> daan.hoogl...@shapeblue.com>
>>> Subject: Re: [VOTE] Apache Cloudstack 4.15.0.0 and UI [RC3]
>>>
>>> Jerry, Wido, Daan - kindly review
>> https://github.com/apache/cloudstack-documentation/pull/175/files
>>>
>>>
>>> Regards.
>>>
>>> ________________________________
>>> From: Rohit Yadav <rohit.ya...@shapeblue.com>
>>> Sent: Monday, January 4, 2021 16:25
>>> To: Wido den Hollander <w...@widodh.nl>; dev@cloudstack.apache.org <
>> dev@cloudstack.apache.org>; us...@cloudstack.apache.org <
>> us...@cloudstack.apache.org>; Gabriel Beims Bräscher <gabr...@pcextreme.nl>;
>> Wei ZHOU <ustcweiz...@gmail.com>; Daan Hoogland <
>> daan.hoogl...@shapeblue.com>
>>> Subject: Re: [VOTE] Apache Cloudstack 4.15.0.0 and UI [RC3]
>>>
>>> Great, thanks for replying, Wido. @Daan Hoogland<mailto:
>> daan.hoogl...@shapeblue.com> I think we can continue with RC3 vote/tally,
>> I'll send a doc PR.
>>>
>>>
>>> Regards.
>>>
>>> ________________________________
>>> From: Wido den Hollander <w...@widodh.nl>
>>> Sent: Monday, January 4, 2021 14:35
>>> To: dev@cloudstack.apache.org <dev@cloudstack.apache.org>; Rohit Yadav <
>> rohit.ya...@shapeblue.com>; us...@cloudstack.apache.org <
>> us...@cloudstack.apache.org>; Gabriel Beims Bräscher <gabr...@pcextreme.nl>;
>> Wei ZHOU <ustcweiz...@gmail.com>; Daan Hoogland <
>> daan.hoogl...@shapeblue.com>
>>> Subject: Re: [VOTE] Apache Cloudstack 4.15.0.0 and UI [RC3]
>>>
>>>
>>>
>>> On 1/4/21 9:50 AM, Rohit Yadav wrote:
>>>> Thanks for replying Jerry - for now, the workaround you can use is to
>> specify the RADOS monitor port (such as 10.100.250.14:6789) in the UI
>> form when you add a Ceph RBD pool. For example, via the API the url
>> parameter would look like:
>> "rbd://cephtest:AQC3u_JfhipzGBAACiILEFKembN8gTJsIvu6nQ==@192.168.1.10:6789/cephtest"
>>>>
>>>> @Daan Hoogland<mailto:daan.hoogl...@shapeblue.com> @Gabriel Beims
>> Bräscher<mailto:gabr...@pcextreme.nl> @Wido Hollander<mailto:
>> w...@pcextreme.nl> @Wei ZHOU<mailto:ustcweiz...@gmail.com> - the issue
>> seems to be that an RBD pool fails to be added if a port is not specified -
>> what do you think, should we treat this as a blocker or document it (i.e.
>> ask admins to specify the RADOS monitor port)?
>>>
>>> I would not say this is a blocker for now. Ceph is moving away from port
>>> 6789 as the default and libvirt is already handling this.
>>>
>>> This needs to be fixed though and I see that a ticket is open for this.
>>> I'll look into this with Gabriel.
>>>
>>> Keep in mind that port number 6789 is not the default for Ceph! Messenger
>>> v2 uses port 3300 and therefore it's best not to specify any port and
>>> have the Ceph client sort this out.
>>>
>>> In addition, I would always suggest using a hostname with Ceph rather than
>>> a static IP of a monitor. Round-robin DNS pointing to the monitors is the
>>> most reliable solution.
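>>>
>>> For example, the primary storage URL could then look like this (the
>>> hostname here is just an example, the key is the one from the earlier
>>> mail):
>>>
>>>   rbd://hyperx:AQAywfFf8jCiIxAAbnDBjX1QQAO9Sj22kUBh7g==@ceph-mon.example.org/rbd
>>>
>>> and the Ceph client figures out the v1/v2 messenger port by itself.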
>>>
>>> Wido
>>>
>>>>
>>>>
>>>> Regards.
>>>>
>>>> ________________________________
>>>> From: li jerry <div...@hotmail.com>
>>>> Sent: Monday, January 4, 2021 13:10
>>>> To: dev@cloudstack.apache.org <dev@cloudstack.apache.org>;
>>>> us...@cloudstack.apache.org <us...@cloudstack.apache.org>
>>>> Subject: Re: [VOTE] Apache Cloudstack 4.15.0.0 and UI [RC3]
>>>>
>>>> Hi Rohit
>>>>
>>>> Yes, I didn't specify a port when I added primary storage;
>>>>
>>>> After it failed, I checked with virsh and found that the pool had been
>>>> created successfully, and the RBD capacity, allocation and available
>>>> values were shown.
>>>> So I'm sure it's not the wrong key.
>>>>
>>>>
>>>> Please note: in the pool dump output there is no port attribute under the
>>>> host element, but the code reads the port attribute and converts it to an
>>>> int:
>>>>
>>>> int port = Integer.parseInt(getAttrValue("host", "port", source));
>>>>
>>>>
>>>>
>>>> virsh pool-dumpxml d9b976cb-bcaf-320a-94e6-b337e65dd4f5
>>>> <pool type='rbd'>
>>>> <name>d9b976cb-bcaf-320a-94e6-b337e65dd4f5</name>
>>>> <uuid>d9b976cb-bcaf-320a-94e6-b337e65dd4f5</uuid>
>>>> <capacity unit='bytes'>12122373201920</capacity>
>>>> <allocation unit='bytes'>912457728</allocation>
>>>> <available unit='bytes'>11998204379136</available>
>>>> <source>
>>>> <host name='10.100.250.14'/>
>>>> <name>rbd</name>
>>>> <auth type='ceph' username='hyperx'>
>>>> <secret uuid='d9b976cb-bcaf-320a-94e6-b337e65dd4f5'/>
>>>> </auth>
>>>> </source>
>>>> </pool>
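>>>>
>>>> For comparison, when a port is passed in, libvirt records it as a port
>>>> attribute on the host element, along the lines of:
>>>>
>>>>   <host name='10.100.250.14' port='6789'/>
>>>>
>>>> which is the attribute the parser currently assumes is always present.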
>>>>
>>>> -Jerry
>>>>
>>>> From: Rohit Yadav<mailto:rohit.ya...@shapeblue.com>
>>>> Sent: Monday, January 4, 2021 15:32
>>>> To: us...@cloudstack.apache.org<mailto:us...@cloudstack.apache.org>;
>>>> dev@cloudstack.apache.org<mailto:dev@cloudstack.apache.org>
>>>> Subject: Re: [VOTE] Apache Cloudstack 4.15.0.0 and UI [RC3]
>>>>
>>>> Hi Jerry,
>>>>
>>>> Can you see my reply? I'm able to add an RBD primary storage if I
>>>> specify the port, so should we still consider it a blocker?
>>>>
>>>>
>>>> Regards.
>>>>
>>>> ________________________________
>>>> From: li jerry <div...@hotmail.com>
>>>> Sent: Monday, January 4, 2021 12:52
>>>> To: us...@cloudstack.apache.org <us...@cloudstack.apache.org>
>>>> Cc: dev <dev@cloudstack.apache.org>
>>>> Subject: Re: [VOTE] Apache Cloudstack 4.15.0.0 and UI [RC3]
>>>>
>>>> I'm creating a PR to fix this.
>>>>
>>>> I think we should treat it as a blocker, because it prevents RBD primary
>>>> storage from being added.
>>>>
>>>> -----Original Message-----
>>>> From: Daan Hoogland <daan.hoogl...@gmail.com>
>>>> Sent: Monday, January 4, 2021 14:57
>>>> To: users <us...@cloudstack.apache.org>
>>>> Cc: dev <dev@cloudstack.apache.org>
>>>> Subject: Re: [VOTE] Apache Cloudstack 4.15.0.0 and UI [RC3]
>>>>
>>>> looks good Jerry,
>>>> Are you making a PR? It seems to me that this would not be a blocker
>> and should go in future releases. Please argue against me if you disagree.
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>>
>>>> On Mon, Jan 4, 2021 at 6:48 AM li jerry <div...@hotmail.com> wrote:
>>>>
>>>>> - Is this a setup that does work with a prior version?
>>>>> - Did you fresh install or upgrade?
>>>>>
>>>>> No, this is a new deployment; there are no upgrades.
>>>>>
>>>>> I have changed two methods; the RBD primary storage is now working:
>>>>>
>>>>>
>>>>>
>> /cloud-plugin-hypervisor-kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtStoragePoolXMLParser.java
>>>>> // Old code (throws when the port attribute is missing):
>>>>> // int port = Integer.parseInt(getAttrValue("host", "port", source));
>>>>>
>>>>>                 int port = 0;
>>>>>                 String _xmlPort = getAttrValue("host", "port", source);
>>>>>                 if (!_xmlPort.isEmpty()) {
>>>>>                     port = Integer.parseInt(_xmlPort);
>>>>>                 }
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>> /cloud-plugin-hypervisor-kvm/src/main/java/com/cloud/hypervisor/kvm/resource/LibvirtDomainXMLParser.java
>>>>> // Old code:
>>>>> // int port = Integer.parseInt(getAttrValue("host", "port", disk));
>>>>>
>>>>>                     int port = 0;
>>>>>                     String _xmlPort = getAttrValue("host", "port", disk);
>>>>>                     if (!_xmlPort.isEmpty()) {
>>>>>                         port = Integer.parseInt(_xmlPort);
>>>>>                     }
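>>>>>
>>>>> Both call sites do the same thing, so they could also share a small
>>>>> helper along these lines (just a sketch, not a tested patch, and the
>>>>> helper name is made up):
>>>>>
>>>>>     private static int parsePortAttribute(String xmlPort, int defaultPort) {
>>>>>         // libvirt may omit the port attribute entirely, so fall back to a
>>>>>         // default instead of letting Integer.parseInt("") throw.
>>>>>         if (xmlPort == null || xmlPort.isEmpty()) {
>>>>>             return defaultPort;
>>>>>         }
>>>>>         return Integer.parseInt(xmlPort);
>>>>>     }
>>>>>
>>>>>     // usage:
>>>>>     int port = parsePortAttribute(getAttrValue("host", "port", source), 0);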
>>>>>
>>>>>
>>>>> -Jerry
>>>>>
>>>>> From: Daan Hoogland<mailto:daan.hoogl...@gmail.com>
>>>>> Sent: Monday, January 4, 2021 14:41
>>>>> To: users<mailto:us...@cloudstack.apache.org>
>>>>> Cc: dev<mailto:dev@cloudstack.apache.org>
>>>>> Subject: Re: [VOTE] Apache Cloudstack 4.15.0.0 and UI [RC3]
>>>>>
>>>>> Jerry,
>>>>> - Is this a setup that does work with a prior version?
>>>>> - Did you fresh install or upgrade?
>>>>> @list is there any RBD user that can advise on the missing
>>>>> configuration causing the NumberFormatException, please?
>>>>>
>>>>>
>>>>> On Sun, Jan 3, 2021 at 1:25 PM li jerry <div...@hotmail.com> wrote:
>>>>>
>>>>>> Happy New Year to all.
>>>>>>
>>>>>>
>>>>>> Sorry, I can't add RBD primary storage when I deploy with 4.15 RC3
>>>>>>
>>>>>> CloudStack: 4.15 RC3
>>>>>>
>>>>>> OS : Ubuntu 20.04.01
>>>>>>
>>>>>> DB: MYSQL 8.0.22
>>>>>>
>>>>>> CEPH: 15.2.8
>>>>>>
>>>>>> libvirt:6.0.0
>>>>>> hypervisor: QEMU 4.2.1
>>>>>>
>>>>>>
>>>>>> Adding the primary storage reports the following error:
>>>>>>
>>>>>> 2021-01-03 13:15:32,605 DEBUG [cloud.agent.Agent]
>>>>>> (agentRequest-Handler-2:null) (logid:0fd66f6e) Seq
>> 1-2968153629413867529:
>>>>>> { Ans: , MgmtId: 182719176, via: 1, Ver: v1, Flags: 10,
>>>>>>
>>>>> [{"com.cloud.agent.api.Answer":{"result":"true","details":"success","
>>>>> w
>>>>> ait":"0"}}]
>>>>>> }
>>>>>> 2021-01-03 13:15:32,631 DEBUG [cloud.agent.Agent]
>>>>>> (agentRequest-Handler-3:null) (logid:0fd66f6e) Request:Seq
>>>>>> 1-2968153629413867530:  { Cmd , MgmtId: 182719176, via: 1, Ver: v1,
>>>>> Flags:
>>>>>> 100011,
>>>>>>
>>>>> [{"com.cloud.agent.api.ModifyStoragePoolCommand":{"add":"true","pool":
>>>>> {"id":"3","uuid":"d9b976cb-bcaf-320a-94e6-b337e65dd4f5","host":"10.10
>>>>> 0
>>>>> .250.14","path":"rbd","userInfo":"hyperx:AQAywfFf8jCiIxAAbnDBjX1QQAO9
>>>>> S
>>>>> j22kUBh7g==","port":"0","type":"RBD"},"localPath":"/mnt//5472031c-358
>>>>> 8 -3e2c-b106-74c8d9f4ca83","wait":"0"}}]
>>>>>> }
>>>>>> 2021-01-03 13:15:32,631 DEBUG [cloud.agent.Agent]
>>>>>> (agentRequest-Handler-3:null) (logid:0fd66f6e) Processing command:
>>>>>> com.cloud.agent.api.ModifyStoragePoolCommand
>>>>>> 2021-01-03 13:15:32,632 INFO  [kvm.storage.LibvirtStorageAdaptor]
>>>>>> (agentRequest-Handler-3:null) (logid:0fd66f6e) Attempting to create
>>>>> storage
>>>>>> pool d9b976cb-bcaf-320a-94e6-b337e65dd4f5 (RBD) in libvirt
>>>>>> 2021-01-03 13:15:32,632 DEBUG [kvm.resource.LibvirtConnection]
>>>>>> (agentRequest-Handler-3:null) (logid:0fd66f6e) Looking for libvirtd
>>>>>> connection at: qemu:///system
>>>>>> 2021-01-03 13:15:32,654 WARN  [kvm.storage.LibvirtStorageAdaptor]
>>>>>> (agentRequest-Handler-3:null) (logid:0fd66f6e) Storage pool
>>>>>> d9b976cb-bcaf-320a-94e6-b337e65dd4f5 was not found running in libvirt.
>>>>> Need
>>>>>> to create it.
>>>>>> 2021-01-03 13:15:32,655 INFO  [kvm.storage.LibvirtStorageAdaptor]
>>>>>> (agentRequest-Handler-3:null) (logid:0fd66f6e) Didn't find an
>>>>>> existing storage pool d9b976cb-bcaf-320a-94e6-b337e65dd4f5 by UUID,
>>>>>> checking for pools with duplicate paths
>>>>>> 2021-01-03 13:15:32,657 DEBUG [kvm.storage.LibvirtStorageAdaptor]
>>>>>> (agentRequest-Handler-3:null) (logid:0fd66f6e) Checking path of
>>>>>> existing pool root against pool we want to create
>>>>>> 2021-01-03 13:15:32,667 DEBUG [kvm.storage.LibvirtStorageAdaptor]
>>>>>> (agentRequest-Handler-3:null) (logid:0fd66f6e) Checking path of
>>>>>> existing pool 1739fc06-2a31-4af1-b8cb-871a27989f37 against pool we
>>>>>> want to create
>>>>>> 2021-01-03 13:15:32,672 DEBUG [kvm.storage.LibvirtStorageAdaptor]
>>>>>> (agentRequest-Handler-3:null) (logid:0fd66f6e) Attempting to create
>>>>> storage
>>>>>> pool d9b976cb-bcaf-320a-94e6-b337e65dd4f5
>>>>>> 2021-01-03 13:15:32,686 DEBUG [kvm.storage.LibvirtStorageAdaptor]
>>>>>> (agentRequest-Handler-3:null) (logid:0fd66f6e) <secret ephemeral='no'
>>>>>> private='no'>
>>>>>> <uuid>d9b976cb-bcaf-320a-94e6-b337e65dd4f5</uuid>
>>>>>> <usage type='ceph'>
>>>>>> <name>hyperx@10.100.250.14:0/rbd</name>
>>>>>> </usage>
>>>>>> </secret>
>>>>>>
>>>>>> 2021-01-03 13:15:32,706 DEBUG [kvm.storage.LibvirtStorageAdaptor]
>>>>>> (agentRequest-Handler-3:null) (logid:0fd66f6e) <pool type='rbd'>
>>>>>> <name>d9b976cb-bcaf-320a-94e6-b337e65dd4f5</name>
>>>>>> <uuid>d9b976cb-bcaf-320a-94e6-b337e65dd4f5</uuid>
>>>>>> <source>
>>>>>> <host name='10.100.250.14'/>
>>>>>> <name>rbd</name>
>>>>>> <auth username='hyperx' type='ceph'> <secret
>>>>>> uuid='d9b976cb-bcaf-320a-94e6-b337e65dd4f5'/>
>>>>>> </auth>
>>>>>> </source>
>>>>>> </pool>
>>>>>>
>>>>>> 2021-01-03 13:15:32,759 INFO  [kvm.storage.LibvirtStorageAdaptor]
>>>>>> (agentRequest-Handler-3:null) (logid:0fd66f6e) Trying to fetch
>>>>>> storage
>>>>> pool
>>>>>> d9b976cb-bcaf-320a-94e6-b337e65dd4f5 from libvirt
>>>>>> 2021-01-03 13:15:32,760 DEBUG [kvm.resource.LibvirtConnection]
>>>>>> (agentRequest-Handler-3:null) (logid:0fd66f6e) Looking for libvirtd
>>>>>> connection at: qemu:///system
>>>>>> 2021-01-03 13:15:32,769 WARN  [cloud.agent.Agent]
>>>>>> (agentRequest-Handler-3:null) (logid:0fd66f6e) Caught:
>>>>>> java.lang.NumberFormatException: For input string: ""
>>>>>>         at
>>>>>>
>>>>> java.base/java.lang.NumberFormatException.forInputString(NumberFormat
>>>>> E
>>>>> xception.java:65)
>>>>>>         at java.base/java.lang.Integer.parseInt(Integer.java:662)
>>>>>>         at java.base/java.lang.Integer.parseInt(Integer.java:770)
>>>>>>         at
>>>>>>
>>>>> com.cloud.hypervisor.kvm.resource.LibvirtStoragePoolXMLParser.parseSt
>>>>> o
>>>>> ragePoolXML(LibvirtStoragePoolXMLParser.java:58)
>>>>>>         at
>>>>>>
>>>>> com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.getStoragePool
>>>>> D
>>>>> ef(LibvirtStorageAdaptor.java:413)
>>>>>>         at
>>>>>>
>>>>> com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.getStoragePool
>>>>> (
>>>>> LibvirtStorageAdaptor.java:439)
>>>>>>         at
>>>>>>
>>>>> com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.getStoragePool
>>>>> (
>>>>> LibvirtStorageAdaptor.java:424)
>>>>>>         at
>>>>>>
>>>>> com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStorageP
>>>>> o
>>>>> ol(LibvirtStorageAdaptor.java:654)
>>>>>>         at
>>>>>>
>>>>> com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStorageP
>>>>> o
>>>>> ol(KVMStoragePoolManager.java:329)
>>>>>>         at
>>>>>>
>>>>> com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStorageP
>>>>> o
>>>>> ol(KVMStoragePoolManager.java:323)
>>>>>>         at
>>>>>>
>>>>> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCom
>>>>> m
>>>>> andWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:42)
>>>>>>         at
>>>>>>
>>>>> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCom
>>>>> m
>>>>> andWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:35)
>>>>>>         at
>>>>>>
>>>>> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execu
>>>>> t
>>>>> e(LibvirtRequestWrapper.java:78)
>>>>>>         at
>>>>>>
>>>>> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeReq
>>>>> u
>>>>> est(LibvirtComputingResource.java:1643)
>>>>>>         at com.cloud.agent.Agent.processRequest(Agent.java:661)
>>>>>>         at
>>>>>> com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1079)
>>>>>>         at com.cloud.utils.nio.Task.call(Task.java:83)
>>>>>>         at com.cloud.utils.nio.Task.call(Task.java:29)
>>>>>>         at
>>>>>> java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>>>>>>         at
>>>>>>
>>>>> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoo
>>>>> l
>>>>> Executor.java:1128)
>>>>>>         at
>>>>>>
>>>>> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPo
>>>>> o
>>>>> lExecutor.java:628)
>>>>>>         at java.base/java.lang.Thread.run(Thread.java:834)
>>>>>> 2021-01-03 13:15:32,778 DEBUG [cloud.agent.Agent]
>>>>>> (agentRequest-Handler-3:null) (logid:0fd66f6e) Seq
>> 1-2968153629413867530:
>>>>>> { Ans: , MgmtId: 182719176, via: 1, Ver: v1, Flags: 10,
>>>>>>
>>>>>
>> [{"com.cloud.agent.api.Answer":{"result":"false","details":"java.lang.NumberFormatException:
>>>>>> For input string: ""
>>>>>>         at
>>>>>>
>>>>> java.base/java.lang.NumberFormatException.forInputString(NumberFormat
>>>>> E
>>>>> xception.java:65)
>>>>>>         at java.base/java.lang.Integer.parseInt(Integer.java:662)
>>>>>>         at java.base/java.lang.Integer.parseInt(Integer.java:770)
>>>>>>         at
>>>>>>
>>>>> com.cloud.hypervisor.kvm.resource.LibvirtStoragePoolXMLParser.parseSt
>>>>> o
>>>>> ragePoolXML(LibvirtStoragePoolXMLParser.java:58)
>>>>>>         at
>>>>>>
>>>>> com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.getStoragePool
>>>>> D
>>>>> ef(LibvirtStorageAdaptor.java:413)
>>>>>>         at
>>>>>>
>>>>> com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.getStoragePool
>>>>> (
>>>>> LibvirtStorageAdaptor.java:439)
>>>>>>         at
>>>>>>
>>>>> com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.getStoragePool
>>>>> (
>>>>> LibvirtStorageAdaptor.java:424)
>>>>>>         at
>>>>>>
>>>>> com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStorageP
>>>>> o
>>>>> ol(LibvirtStorageAdaptor.java:654)
>>>>>>         at
>>>>>>
>>>>> com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStorageP
>>>>> o
>>>>> ol(KVMStoragePoolManager.java:329)
>>>>>>         at
>>>>>>
>>>>> com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStorageP
>>>>> o
>>>>> ol(KVMStoragePoolManager.java:323)
>>>>>>         at
>>>>>>
>>>>> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCom
>>>>> m
>>>>> andWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:42)
>>>>>>         at
>>>>>>
>>>>> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCom
>>>>> m
>>>>> andWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:35)
>>>>>>         at
>>>>>>
>>>>> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execu
>>>>> t
>>>>> e(LibvirtRequestWrapper.java:78)
>>>>>>         at
>>>>>>
>>>>> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeReq
>>>>> u
>>>>> est(LibvirtComputingResource.java:1643)
>>>>>>         at com.cloud.agent.Agent.processRequest(Agent.java:661)
>>>>>>         at
>>>>>> com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1079)
>>>>>>         at com.cloud.utils.nio.Task.call(Task.java:83)
>>>>>>         at com.cloud.utils.nio.Task.call(Task.java:29)
>>>>>>         at
>>>>>> java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>>>>>>         at
>>>>>>
>>>>> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoo
>>>>> l
>>>>> Executor.java:1128)
>>>>>>         at
>>>>>>
>>>>> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPo
>>>>> o
>>>>> lExecutor.java:628)
>>>>>>         at java.base/java.lang.Thread.run(Thread.java:834)
>>>>>> ","wait":"0"}}] }
>>>>>> 2021-01-03 13:15:43,241 DEBUG
>>>>>> [kvm.resource.LibvirtComputingResource]
>>>>>> (UgentTask-2:null) (logid:) Executing:
>>>>>> /usr/share/cloudstack-common/scripts/vm/network/security_group.py
>>>>>> get_rule_logs_for_vms
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> After the failure, I checked the pool through virsh and found that
>>>>>> it had been added successfully.
>>>>>> Here is the virsh output information:
>>>>>>
>>>>>>
>>>>>> root@noded:/etc/cloudstack/agent# virsh pool-list
>>>>>>  Name                                   State    Autostart
>>>>>> ------------------------------------------------------------
>>>>>>  1739fc06-2a31-4af1-b8cb-871a27989f37   active   no
>>>>>>  d9b976cb-bcaf-320a-94e6-b337e65dd4f5   active   no
>>>>>>  root                                   active   yes
>>>>>>
>>>>>> root@noded:/etc/cloudstack/agent# virsh pool-dumpxml
>>>>>> d9b976cb-bcaf-320a-94e6-b337e65dd4f5
>>>>>> <pool type='rbd'>
>>>>>>   <name>d9b976cb-bcaf-320a-94e6-b337e65dd4f5</name>
>>>>>>   <uuid>d9b976cb-bcaf-320a-94e6-b337e65dd4f5</uuid>
>>>>>>   <capacity unit='bytes'>12122373201920</capacity>
>>>>>>   <allocation unit='bytes'>912457728</allocation>
>>>>>>   <available unit='bytes'>11998204379136</available>
>>>>>>   <source>
>>>>>>     <host name='10.100.250.14'/>
>>>>>>     <name>rbd</name>
>>>>>>     <auth type='ceph' username='hyperx'>
>>>>>>       <secret uuid='d9b976cb-bcaf-320a-94e6-b337e65dd4f5'/>
>>>>>>     </auth>
>>>>>>   </source>
>>>>>> </pool>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Daan Hoogland <daan.hoogl...@gmail.com>
>>>>>> Sent: Friday, January 1, 2021 16:55
>>>>>> To: users <us...@cloudstack.apache.org>
>>>>>> Cc: dev <dev@cloudstack.apache.org>
>>>>>> Subject: Re: [VOTE] Apache Cloudstack 4.15.0.0 and UI [RC3]
>>>>>>
>>>>>> Happy New Year to all, I think we have a release but I'll wait to
>>>>>> tally votes until Monday. Enjoy your weekend and the coming year.
>>>>>>
>>>>>> On Thu, 31 Dec 2020, 15:10 Boris Stoyanov,
>>>>>> <boris.stoya...@shapeblue.com
>>>>>>
>>>>>> wrote:
>>>>>>
>>>>>>> +1 (binding)
>>>>>>>
>>>>>>> In a shared effort with Vladimir Petrov, we've done upgrade testing
>>>>>>> from the latest of:
>>>>>>> 4.11
>>>>>>> 4.13
>>>>>>> 4.14
>>>>>>>
>>>>>>> We also did basic lifecycle operations on:
>>>>>>> VMs, Networks, Storage, Infra(pod, cluster, zone, hosts).
>>>>>>>
>>>>>>> And we couldn't find any blocking issues with this RC.
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Bobby.
>>>>>>>
>>>>>>> On 24.12.20, 5:14, "Rohit Yadav" <rohit.ya...@shapeblue.com> wrote:
>>>>>>>
>>>>>>>
>>>>>>>     All,
>>>>>>>
>>>>>>>     Here are the convenience packages built from 4.15.0.0-RC3 if
>>>>>>>     you don't want to build CloudStack from the source artifacts:
>>>>>>>
>>>>>>>     Packages: (Debian, CentOS7, and CentOS8)
>>>>>>>     http://download.cloudstack.org/testing/4.15.0.0-rc3/
>>>>>>>
>>>>>>>     4.15 systemvmtemplate:
>>>>>>>     http://download.cloudstack.org/systemvm/4.15/
>>>>>>>
>>>>>>>     Build from the master branch of
>>>>>>> https://github.com/apache/cloudstack-documentation (if/after voting
>>>>>>> passes, we'll update and publish the docs):
>>>>>>>     http://docs.cloudstack.apache.org/en/master/upgrading/
>>>>>>>
>>>>>>>     Additional notes:
>>>>>>>       *   The new UI is bundled within the cloudstack-management
>>>>> package
>>>>>>> and is shipped as the default UI served at <host:8080>/client, old
>>>>>>> UI will be served via <host:8080>/client/legacy. Most users don't
>>>>>>> need to do any separate installation or perform an installation step.
>>>>>>>       *   We've added support for CentOS8 with 4.15 but CentOS8 will
>>>>> EOL
>>>>>>> in Dec 2021 (https://wiki.centos.org/About/Product).
>>>>>>>
>>>>>>>
>>>>>>>     Regards.
>>>>>>>
>>>>>>>     ________________________________
>>>>>>>     From: Daan Hoogland <daan.hoogl...@gmail.com>
>>>>>>>     Sent: Wednesday, December 23, 2020 23:13
>>>>>>>     To: users <us...@cloudstack.apache.org>; dev <
>>>>>>> dev@cloudstack.apache.org>
>>>>>>>     Subject: [VOTE] Apache Cloudstack 4.15.0.0 and UI [RC3]
>>>>>>>
>>>>>>>     LS,
>>>>>>>     After fixing another few blockers, we have an RC3. The changes
>>>>>>>     (other than bundling) are mostly interesting for those working
>>>>>>>     with templates and on VMware.
>>>>>>>
>>>>>>>     We are voting for the new UI and the main code.
>>>>>>>
>>>>>>>     The candidate release branch is 4.15.0.0-RC20201223T1632. The
>>>>>>> UI is still
>>>>>>>     separate but as agreed upon before this will be merged in
>>>>>>> coming releases,
>>>>>>>     at least from a version management point of view.
>>>>>>>     I've created a 4.15.0.0 release candidate, with the following
>>>>>>>     artifacts up for a vote:
>>>>>>>
>>>>>>>     Git Branches:
>>>>>>>     main code:
>>>>>>>
>>>>>>>
>>>>>>
>>>>> https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.15.0.0-RC20201223T1632
>>>>>>>     <https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.15.0.0-RC20201214T1124>
>>>>>>>     ui code:
>>>>>>>     <https://git-wip-us.apache.org/repos/asf?p=cloudstack-primate.git;a=shortlog;h=refs/tags/1.0>
>>>>>>>
>>>>>>>
>>>>>>
>>>>> https://git-wip-us.apache.org/repos/asf?p=cloudstack-primate.git;a=tag;h=refs/tags/1.0
>>>>>>>     and Commit SH:
>>>>>>>     main code: 01b3e361c7bb81fd1ea822faddd6594e52bb00c1
>>>>>>>     ui code: 0593302dd53ac3203d3ab43b62d890605910f3e1
>>>>>>>
>>>>>>>     Source release (checksums and signatures are available at the
>> same
>>>>>>>     location):
>>>>>>>     https://dist.apache.org/repos/dist/dev/cloudstack/4.15.0.0/
>> (rev.
>>>>>>> 45059)
>>>>>>>     PGP release keys (signed using 7975062401944786):
>>>>>>>     https://dist.apache.org/repos/dist/release/cloudstack/KEYS
>>>>>>>
>>>>>>>     Vote will be open for (at least) 72 hours. For sanity in tallying
>>>>>>>     the vote, can PMC members please be sure to indicate "(binding)"
>>>>>>>     with their vote?
>>>>>>>     [ ] +1 approve
>>>>>>>     [ ] +0 no opinion
>>>>>>>     [ ] -1 disapprove (and reason why)
>>>>>>>
>>>>>>>     I will work with community members to provide convenience
>>>>>>> packaging over
>>>>>>>     the next few days.
>>>>>>>     The documentation repo will be updated as we move along.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>     --
>>>>>>>     Daan
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Daan
>>>>>
>>>>>
>>>>
>>>> --
>>>> Daan
>>>>
>>
> 
> 
