[ovirt-users] Re: hosted engine migration

2020-09-21 Thread Strahil Nikolov via Users
So, let's summarize:

- Cannot migrate the HE due to "CPU policy".
- HE's CPU is Westmere - just like the hosts
- You have enough resources on the second HE host (both CPU + MEMORY)

What is the Cluster's CPU type (you can check it in the UI)?

Maybe you should enable debugging in various locations to identify the issue.

Anything interesting in libvirt's log for the HostedEngine on the
destination host?
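For example (paths below are the usual defaults, adjust if your layout differs):

# on the destination host
grep -iE 'error|denied|cpu' /var/log/libvirt/qemu/HostedEngine.log
journalctl -u libvirtd --since today

# on the source host, vdsm usually logs the exact scheduling/CPU message for the failed migration
grep -i cpu /var/log/vdsm/vdsm.log | tail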


Best Regards,
Strahil Nikolov






On Tuesday, September 22, 2020, 05:37:18 GMT+3, ddqlo wrote:





Yes, I can. The host which does not host the HE could be reinstalled
successfully in the web UI. After this was done, nothing changed.






On 2020-09-22 03:08:18, "Strahil Nikolov" wrote:
>Can you put 1 host in maintenance and use the "Installation" -> "Reinstall" 
>and enable the HE deployment from one of the tabs ?
>
>Best Regards,
>Strahil Nikolov
>
>
>
>
>
>
>On Monday, September 21, 2020, 06:38:06 GMT+3, ddqlo wrote:
>
>
>
>
>
>So strange! After I set global maintenance, powered off and started the HE, the CPU 
>of the HE became 'Westmere' (I did not change anything). But the HE still could not be 
>migrated.
>
>HE xml:
>  
>    Westmere
>    
>    
>    
>    
>    
>    
>    
>      
>    
>  
>
>host capabilities: 
>Westmere
>
>cluster cpu type (UI): 
>
>
>host cpu type (UI):
>
>
>HE cpu type (UI):
>
>
>
>
>
>
>
>On 2020-09-19 13:27:35, "Strahil Nikolov" wrote:
>>Hm... interesting.
>>
>>The VM is using 'Haswell-noTSX'  while the host is 'Westmere'.
>>
>>In my case I got no difference:
>>
>>[root@ovirt1 ~]# virsh  dumpxml HostedEngine | grep Opteron
>>   Opteron_G5
>>[root@ovirt1 ~]# virsh capabilities | grep Opteron
>> Opteron_G5
>>
>>Did you update the cluster holding the Hosted Engine ?
>>
>>
>>I guess you can try to:
>>
>>- Set global maintenance
>>- Power off the HostedEngine VM
>>- virsh dumpxml HostedEngine > /root/HE.xml
>>- use virsh edit to make a non-permanent change to the HE's CPU
>>- try to power on the modified HE
>>
>>If it powers on , you can try to migrate it and if it succeeds - then you 
>>should make it permanent.
>>
>>
>>
>>
>>
>>Best Regards,
>>Strahil Nikolov
>>
>>On Friday, September 18, 2020, 04:40:39 GMT+3, ddqlo wrote:
>>
>>
>>
>>
>>
>>HE:
>>
>>
>>  HostedEngine
>>  b4e805ff-556d-42bd-a6df-02f5902fd01c
>>  http://ovirt.org/vm/tune/1.0"; 
>>xmlns:ovirt-vm="http://ovirt.org/vm/1.0";>
>>    
>>    http://ovirt.org/vm/1.0";>
>>    4.3
>>    False
>>    false
>>    1024
>>    >type="int">1024
>>    auto_resume
>>    1600307555.19
>>    
>>        external
>>        
>>            4
>>        
>>    
>>    
>>        ovirtmgmt
>>        
>>            4
>>        
>>    
>>    
>>        
>>c17c1934-332f-464c-8f89-ad72463c00b3
>>        /dev/vda2
>>        
>>8eca143a-4535-4421-bd35-9f5764d67d70
>>        
>>----
>>        exclusive
>>        
>>ae961104-c3b3-4a43-9f46-7fa6bdc2ac33
>>        
>>            1
>>        
>>        
>>            
>>                
>>c17c1934-332f-464c-8f89-ad72463c00b3
>>                
>>8eca143a-4535-4421-bd35-9f5764d67d70
>>                >type="int">108003328
>>                
>>/dev/c17c1934-332f-464c-8f89-ad72463c00b3/leases
>>                
>>/rhev/data-center/mnt/blockSD/c17c1934-332f-464c-8f89-ad72463c00b3/images/8eca143a-4535-4421-bd35-9f5764d67d70/ae961104-c3b3-4a43-9f46-7fa6bdc2ac33
>>                
>>ae961104-c3b3-4a43-9f46-7fa6bdc2ac33
>>            
>>        
>>    
>>    
>>
>>  
>>  67108864
>>  16777216
>>  16777216
>>  64
>>  1
>>  
>>    /machine
>>  
>>  
>>    
>>      oVirt
>>      oVirt Node
>>      7-5.1804.el7.centos
>>      ----0CC47A6B3160
>>      b4e805ff-556d-42bd-a6df-02f5902fd01c
>>    
>>  
>>  
>>    hvm
>>    
>>    
>>    
>>  
>>  
>>    
>>  
>>  
>>    Haswell-noTSX
>>    
>>    
>>    
>>    
>>    
>>    
>>    
>>    
>>    
>>      
>>    
>>  
>>  
>>    
>>    
>>    
>>  
>>  destroy
>>  destroy
>>  destroy
>>  
>>    
>>    
>>  
>>  
>>    /usr/libexec/qemu-kvm
>>    
>>      
>>      
>>      
>>      
>>      
>>      
>>    
>>    
>>      >io='native' iothread='1'/>
>>      >dev='/var/run/vdsm/storage/c17c1934-332f-464c-8f89-ad72463c00b3/8eca143a-4535-4421-bd35-9f5764d67d70/ae961104-c3b3-4a43-9f46-7fa6bdc2ac33'>
>>        
>>      
>>      
>>      
>>      8eca143a-4535-4421-bd35-9f5764d67d70
>>      
>>      >function='0x0'/>
>>    
>>    
>>      
>>      
>>      >function='0x0'/>
>>    
>>    
>>      
>>      >function='0x1'/>
>>    
>>    
>>      
>>      >function='0x0'/>
>>    
>>    
>>      
>>      >function='0x2'/>
>>    
>>    
>>      
>>    
>>    
>>      c17c1934-332f-464c-8f89-ad72463c00b3
>>      ae961104-c3b3-4a43-9f46-7fa6bdc2ac33
>>      >offset='108003328'/>
>>    
>>    
>>      
>>      
>>      
>>      
>>      
>>      
>>      
>>      
>>      
>>      >function='0x0'/>
>>    
>>    
>>      
>>      
>>      
>>      
>>      
>>      
>>      
>>      
>>      
>>      >function='0x0

[ovirt-users] Re: oVirt - Gluster Node Offline but Bricks Active

2020-09-21 Thread Strahil Nikolov via Users
Have you restarted glusterd.service on the affected node?
glusterd is just the management layer, so restarting it won't affect the brick processes.
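If not, something like this on the affected node usually brings the reported status back in sync:

systemctl restart glusterd
gluster volume status data    # check that the brick and self-heal PIDs show up again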

Best Regards,
Strahil Nikolov






On Tuesday, September 22, 2020, 01:43:36 GMT+3, Jeremey Wise wrote:






Start is not an option.

It notes two bricks, but the command line shows three bricks, all present.

[root@odin thorst.penguinpages.local:_vmstore]# gluster volume status data
Status of volume: data
Gluster process                             TCP Port  RDMA Port  Online  Pid
--
Brick thorst.penguinpages.local:/gluster_br
icks/data/data                              49152     0          Y       33123
Brick odinst.penguinpages.local:/gluster_br
icks/data/data                              49152     0          Y       2970
Brick medusast.penguinpages.local:/gluster_
bricks/data/data                            49152     0          Y       2646
Self-heal Daemon on localhost               N/A       N/A        Y       3004
Self-heal Daemon on thorst.penguinpages.loc
al                                          N/A       N/A        Y       33230
Self-heal Daemon on medusast.penguinpages.l
ocal                                        N/A       N/A        Y       2475

Task Status of Volume data
--
There are no active volume tasks

[root@odin thorst.penguinpages.local:_vmstore]# gluster peer status
Number of Peers: 2

Hostname: thorst.penguinpages.local
Uuid: 7726b514-e7c3-4705-bbc9-5a90c8a966c9
State: Peer in Cluster (Connected)

Hostname: medusast.penguinpages.local
Uuid: 977b2c1d-36a8-4852-b953-f75850ac5031
State: Peer in Cluster (Connected)
[root@odin thorst.penguinpages.local:_vmstore]#




On Mon, Sep 21, 2020 at 4:32 PM Strahil Nikolov  wrote:
> Just select the volume and press "start" . It will automatically mark "force 
> start" and will fix itself.
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> On Monday, September 21, 2020, 20:53:15 GMT+3, Jeremey Wise wrote:
> 
> 
> 
> 
> 
> 
> oVirt engine shows  one of the gluster servers having an issue.  I did a 
> graceful shutdown of all three nodes over weekend as I have to move around 
> some power connections in prep for UPS.
> 
> Came back up.. but
> 
> 
> 
> And this is reflected in 2 bricks online (should be three for each volume)
> 
> 
> Command line shows gluster should be happy.
> 
> [root@thor engine]# gluster peer status
> Number of Peers: 2
> 
> Hostname: odinst.penguinpages.local
> Uuid: 83c772aa-33cd-430f-9614-30a99534d10e
> State: Peer in Cluster (Connected)
> 
> Hostname: medusast.penguinpages.local
> Uuid: 977b2c1d-36a8-4852-b953-f75850ac5031
> State: Peer in Cluster (Connected)
> [root@thor engine]#
> 
> # All bricks showing online
> [root@thor engine]# gluster volume status
> Status of volume: data
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> --
> Brick thorst.penguinpages.local:/gluster_br
> icks/data/data                              49152     0          Y       11001
> Brick odinst.penguinpages.local:/gluster_br
> icks/data/data                              49152     0          Y       2970
> Brick medusast.penguinpages.local:/gluster_
> bricks/data/data                            49152     0          Y       2646
> Self-heal Daemon on localhost               N/A       N/A        Y       50560
> Self-heal Daemon on odinst.penguinpages.loc
> al                                          N/A       N/A        Y       3004
> Self-heal Daemon on medusast.penguinpages.l
> ocal                                        N/A       N/A        Y       2475
> 
> Task Status of Volume data
> --
> There are no active volume tasks
> 
> Status of volume: engine
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> --
> Brick thorst.penguinpages.local:/gluster_br
> icks/engine/engine                          49153     0          Y       11012
> Brick odinst.penguinpages.local:/gluster_br
> icks/engine/engine                          49153     0          Y       2982
> Brick medusast.penguinpages.local:/gluster_
> bricks/engine/engine                        49153     0          Y       2657
> Self-heal Daemon on localhost               N/A       N/A        Y       50560
> Self-heal Daemon on odinst.penguinpages.loc
> al                                          N/A       N/A        Y       3004
> Self-heal Daemon on medusast.penguinpages.l
> ocal                                        N/A       N/A        Y       2475
> 
> Task Status of Volume engine
> --
> There are no active volume

[ovirt-users] Re: How to discover why a VM is getting suspended without recovery possibility?

2020-09-21 Thread Strahil Nikolov via Users
Interestingly, I don't find anything recent, except this one:
https://devblogs.microsoft.com/oldnewthing/20120511-00/?p=7653

Can you check if anything in the OS was updated/changed recently?

Also check whether the VM has nested virtualization enabled.
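(On an Intel host, something like 'cat /sys/module/kvm_intel/parameters/nested' shows whether nested virtualization is enabled at the hypervisor level - 'Y' or '1' means it is on.)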

Best Regards,
Strahil Nikolov






On Monday, September 21, 2020, 23:56:26 GMT+3, Vinícius Ferrão wrote:





Strahil, thank you man. We finally got some output:

2020-09-15T12:34:49.362238Z qemu-kvm: warning: CPU(s) not present in any NUMA 
nodes: CPU 10 [socket-id: 10, core-id: 0, thread-id: 0], CPU 11 [socket-id: 11, 
core-id: 0, thread-id: 0], CPU 12 [socket-id: 12, core-id: 0, thread-id: 0], 
CPU 13 [socket-id: 13, core-id: 0, thread-id: 0], CPU 14 [socket-id: 14, 
core-id: 0, thread-id: 0], CPU 15 [socket-id: 15, core-id: 0, thread-id: 0]
2020-09-15T12:34:49.362265Z qemu-kvm: warning: All CPU(s) up to maxcpus should 
be described in NUMA config, ability to start up with partial NUMA mappings is 
obsoleted and will be removed in future
KVM: entry failed, hardware error 0x8021

If you're running a guest on an Intel machine without unrestricted mode
support, the failure can be most likely due to the guest entering an invalid
state for Intel VT. For example, the guest maybe running in big real mode
which is not supported on less recent Intel processors.

EAX= EBX=01746180 ECX=4be7c002 EDX=000400b6
ESI=8b3d6080 EDI=02d70400 EBP=a19bbdfe ESP=82883770
EIP=8000 EFL=0002 [---] CPL=0 II=0 A20=1 SMM=1 HLT=0
ES =   00809300
CS =8d00 7ff8d000  00809300
SS =   00809300
DS =   00809300
FS =   00809300
GS =   00809300
LDT=  000f 
TR =0040 04c59000 0067 8b00
GDT=    04c5afb0 0057
IDT=     
CR0=00050032 CR2=c1b7ec48 CR3=001ad002 CR4=
DR0= DR1= DR2= 
DR3= 
DR6=0ff0 DR7=0400
EFER=
Code=ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff  ff ff ff 
ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
2020-09-16T04:11:55.344128Z qemu-kvm: terminating on signal 15 from pid 1 
()
2020-09-16 04:12:02.212+: shutting down, reason=shutdown






That's the issue: I got this in the logs of both physical machines. It's not very 
likely that both machines are damaged, right? So even though the log says it's a 
hardware error, could it be software related? And again, this only happens with 
this VM.

> On 21 Sep 2020, at 17:36, Strahil Nikolov  wrote:
> 
> Usually libvirt's log might provide hints (though no definite clues) about any issues.
> 
> For example: 
> /var/log/libvirt/qemu/.log
> 
> Anything changed recently (maybe oVirt version was increased) ?
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> On Monday, September 21, 2020, 23:28:13 GMT+3, Vinícius Ferrão wrote:
> 
> 
> 
> 
> 
> Hi Strahil, 
> 
> 
> 
> Both disks are VirtIO-SCSI and are Preallocated:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Thanks,
> 
> 
> 
> 
> 
> 
> 
> 
>>  
>> On 21 Sep 2020, at 17:09, Strahil Nikolov  wrote:
>> 
>> 
>>  
>> What type of disks are you using? Any chance you use thin disks?
>> 
>> Best Regards,
>> Strahil Nikolov
>> 
>> 
>> 
>> 
>> 
>> 
>> On Monday, September 21, 2020, 07:20:23 GMT+3, Vinícius Ferrão via Users wrote:
>> 
>> 
>> 
>> 
>> 
>> Hi, sorry to bump the thread.
>> 
>> But I'm still stuck with this issue on the VM. The crashes are still happening, and 
>> I really don't know what to do. Since there's nothing in the logs, except for 
>> that message in `dmesg` of the host machine, I started changing settings to 
>> see if anything changes or at least a pattern emerges.
>> 
>> What I’ve tried:
>> 1. Disabled I/O Threading on VM.
>> 2. Increased I/O Threading to 2 from 1.
>> 3. Disabled Memory Ballooning.
>> 4. Reduced VM resources from 10 CPUs and 48GB of RAM to 6 CPUs and 24GB of 
>> RAM.
>> 5. Moved the VM to another host.
>> 6. Dedicated a host specific to this VM.
>> 7. Check on the storage system to see if there’s any resource starvation, 
>> but everything seems to be fine.
>> 8. Checked both iSCSI switches to see if there’s something wrong with the 
>> fabrics: 0 errors.
>> 
>> I’m really running out of ideas. The VM was working normally and suddenly 
>> this started.
>> 
>> Thanks,
>> 
>> PS: When I was typing this message it crashed again:
>> 
>> [427483.126725] *** Guest State ***
>> [427483.127661] CR0: actual=0x00050032, shadow=0x00050032, 
>> gh_mask=fff7
>> [427483.128505] CR4: actual=0x2050, shadow=0x, 
>> gh_mask=f871
>> [427483.129342] CR3 = 0x0001849ff002
>> [427483.130177] RSP = 0xb10186b0  RIP = 0x8000
>> [427483.131014] RFLAGS=0x0002        DR7 = 0x0400
>> [427483.131859] Sysent

[ovirt-users] Re: in ovirt-engine all host show the status at "non-responsive"

2020-09-21 Thread momokch--- via Users
When I activate the ovirt-engine cert, I can access the ovirt-engine webpage.


I have checked the log file. I have 4 hosts, and all of them show "ERROR 
[org.ovirt.vdsm.jsonrpc.client.reactors.Reactor] (SSL Stomp Reactor) [] Unable 
to process messages: Received fatal alert: certificate_expired"

Does this mean I must shut down all VMs running on my ovirt-engine?

If yes, I face the problem that I cannot check the status of all the VMs; some 
VMs are down and cannot be accessed anymore. Should I just turn off the hosts and 
'Reinstall' them, or use 'Enroll Certificates', right?
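(For reference, the certificate expiry dates can be confirmed with openssl - the paths below are the oVirt defaults:

# on the engine
openssl x509 -noout -enddate -in /etc/pki/ovirt-engine/ca.pem
# on each host
openssl x509 -noout -enddate -in /etc/pki/vdsm/certs/vdsmcert.pem )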
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GJDQHQEJNJ5H7KDO6GWOUEFSCFQ3LFAQ/


[ovirt-users] Re: hosted engine migration

2020-09-21 Thread ddqlo
Yes, I can. The host which does not host the HE could be reinstalled 
successfully in the web UI. After this was done, nothing changed.














On 2020-09-22 03:08:18, "Strahil Nikolov" wrote:
>Can you put 1 host in maintenance and use the "Installation" -> "Reinstall" 
>and enable the HE deployment from one of the tabs ?
>
>Best Regards,
>Strahil Nikolov
>
>
>
>
>
>
>On Monday, September 21, 2020, 06:38:06 GMT+3, ddqlo wrote:
>
>
>
>
>
>So strange! After I set global maintenance, powered off and started the HE, the CPU 
>of the HE became 'Westmere' (I did not change anything). But the HE still could not be 
>migrated.
>
>HE xml:
>  
>Westmere
>
>
>
>
>
>
>
>  
>
>  
>
>host capabilities: 
>Westmere
>
>cluster cpu type (UI): 
>
>
>host cpu type (UI):
>
>
>HE cpu type (UI):
>
>
>
>
>
>
>
>On 2020-09-19 13:27:35, "Strahil Nikolov" wrote:
>>Hm... interesting.
>>
>>The VM is using 'Haswell-noTSX'  while the host is 'Westmere'.
>>
>>In my case I got no difference:
>>
>>[root@ovirt1 ~]# virsh  dumpxml HostedEngine | grep Opteron
>>   Opteron_G5
>>[root@ovirt1 ~]# virsh capabilities | grep Opteron
>> Opteron_G5
>>
>>Did you update the cluster holding the Hosted Engine ?
>>
>>
>>I guess you can try to:
>>
>>- Set global maintenance
>>- Power off the HostedEngine VM
>>- virsh dumpxml HostedEngine > /root/HE.xml
>>- use virsh edit to make a non-permanent change to the HE's CPU
>>- try to power on the modified HE
>>
>>If it powers on , you can try to migrate it and if it succeeds - then you 
>>should make it permanent.
>>
>>
>>
>>
>>
>>Best Regards,
>>Strahil Nikolov
>>
>>On Friday, September 18, 2020, 04:40:39 GMT+3, ddqlo wrote:
>>
>>
>>
>>
>>
>>HE:
>>
>>
>>  HostedEngine
>>  b4e805ff-556d-42bd-a6df-02f5902fd01c
>>  http://ovirt.org/vm/tune/1.0"; 
>> xmlns:ovirt-vm="http://ovirt.org/vm/1.0";>
>>
>>http://ovirt.org/vm/1.0";>
>>4.3
>>False
>>false
>>1024
>>> type="int">1024
>>auto_resume
>>1600307555.19
>>
>>external
>>
>>4
>>
>>
>>
>>ovirtmgmt
>>
>>4
>>
>>
>>
>>
>> c17c1934-332f-464c-8f89-ad72463c00b3
>>/dev/vda2
>>
>> 8eca143a-4535-4421-bd35-9f5764d67d70
>>
>> ----
>>exclusive
>>
>> ae961104-c3b3-4a43-9f46-7fa6bdc2ac33
>>
>>1
>>
>>
>>
>>
>> c17c1934-332f-464c-8f89-ad72463c00b3
>>
>> 8eca143a-4535-4421-bd35-9f5764d67d70
>>> type="int">108003328
>>
>> /dev/c17c1934-332f-464c-8f89-ad72463c00b3/leases
>>
>> /rhev/data-center/mnt/blockSD/c17c1934-332f-464c-8f89-ad72463c00b3/images/8eca143a-4535-4421-bd35-9f5764d67d70/ae961104-c3b3-4a43-9f46-7fa6bdc2ac33
>>
>> ae961104-c3b3-4a43-9f46-7fa6bdc2ac33
>>
>>
>>
>>
>>
>>  
>>  67108864
>>  16777216
>>  16777216
>>  64
>>  1
>>  
>>/machine
>>  
>>  
>>
>>  oVirt
>>  oVirt Node
>>  7-5.1804.el7.centos
>>  ----0CC47A6B3160
>>  b4e805ff-556d-42bd-a6df-02f5902fd01c
>>
>>  
>>  
>>hvm
>>
>>
>>
>>  
>>  
>>
>>  
>>  
>>Haswell-noTSX
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>  
>>
>>  
>>  
>>
>>
>>
>>  
>>  destroy
>>  destroy
>>  destroy
>>  
>>
>>
>>  
>>  
>>/usr/libexec/qemu-kvm
>>
>>  
>>  
>>  
>>  
>>  
>>  
>>
>>
>>  > io='native' iothread='1'/>
>>  > dev='/var/run/vdsm/storage/c17c1934-332f-464c-8f89-ad72463c00b3/8eca143a-4535-4421-bd35-9f5764d67d70/ae961104-c3b3-4a43-9f46-7fa6bdc2ac33'>
>>
>>  
>>  
>>  
>>  8eca143a-4535-4421-bd35-9f5764d67d70
>>  
>>  > function='0x0'/>
>>
>>
>>  
>>  
>>  > function='0x0'/>
>>
>>
>>  
>>  > function='0x1'/>
>>
>>
>>  
>>  > function='0x0'/>
>>
>>
>>  
>>  > function='0x2'/>
>>
>>
>>  
>>
>>
>>  c17c1934-332f-464c-8f89-ad72463c00b3
>>  ae961104-c3b3-4a43-9f46-7fa6bdc2ac33
>>  > offset='108003328'/>
>>
>>
>>  
>>  
>>  
>>  
>>  
>>  
>>  
>>  
>>  
>>  > function='0x0'/>
>>
>>
>>  
>>  
>>  
>>  
>>  
>>  
>>  
>>  
>>  
>>  > function='0x0'/>
>>
>>
>>  > path='/var/run/ovirt-vmconsole-console/b4e805ff-556d-42bd-a6df-02f5902fd01c.sock'/>
>>  
>>
>>  
>>  
>>
>>
>>  > path='/var/run/ovirt-vmconsole-console/b4e805ff-556d-42bd-a6df-02f5902fd01c.sock'/>
>>  
>>  
>>
>>
>>  > path='/var/lib/libvirt/qemu/channels/b4e805ff-556d-42bd-a6df-02f5902fd01c.ovirt-guest-agent.0'/>
>>  
>>  
>>  
>>
>>
>>  > path='/var/lib/libvirt/qemu/channels/b4e805

[ovirt-users] Re: oVirt - KVM QCow2 Import

2020-09-21 Thread Jeremey Wise
Well... knowing how to do it with curl is helpful, but I think I did it.

[root@odin ~]#  curl -s -k --user admin@internal:blahblah
https://ovirte01.penguinpages.local/ovirt-engine/api/storagedomains/ | grep
'<name>'
<name>data</name>
<name>hosted_storage</name>
<name>ovirt-image-repository</name>

What I guess I did is translate that --sd-name my-storage-domain field
to the "volume" name... My question is: where do those fields come from?
And which one would you typically place all your VMs into?
[image: image.png]



I just took a guess..  and figured "data" sounded like a good place to
stick raw images to build into VM...

[root@medusa thorst.penguinpages.local:_vmstore]# python3
/usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py
--engine-url https://ovirte01.penguinpages.local/ --username admin@internal
--password-file
/rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/.ovirt.password
--cafile
/rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/.ovirte01_pki-resource.cer
--sd-name data --disk-sparse
/rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/ns02.qcow2
Checking image...
Image format: qcow2
Disk format: cow
Disk content type: data
Disk provisioned size: 21474836480
Disk initial size: 11574706176
Disk name: ns02.qcow2
Disk backup: False
Connecting...
Creating disk...
Disk ID: 9ccb26cf-dd4a-4c9a-830c-ee084074d7a1
Creating image transfer...
Transfer ID: 3a382f0b-1e7d-4397-ab16-4def0e9fe890
Transfer host name: medusa
Uploading image...
[ 100.00% ] 20.00 GiB, 249.86 seconds, 81.97 MiB/s
Finalizing image transfer...
Upload completed successfully
[root@medusa thorst.penguinpages.local:_vmstore]# python3
/usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py
--engine-url https://ovirte01.penguinpages.local/ --username admin@internal
--password-file
/rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/.ovirt.password
--cafile
/rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/.ovirte01_pki-resource.cer
--sd-name data --disk-sparse
/rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/ns02_v^C
[root@medusa thorst.penguinpages.local:_vmstore]# ls
example.log  f118dcae-6162-4e9a-89e4-f30ffcfb9ccf  ns02_20200910.tgz
 ns02.qcow2  ns02_var.qcow2
[root@medusa thorst.penguinpages.local:_vmstore]# python3
/usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py
--engine-url https://ovirte01.penguinpages.local/ --username admin@internal
--password-file
/rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/.ovirt.password
--cafile
/rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/.ovirte01_pki-resource.cer
--sd-name data --disk-sparse
/rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_vmstore/ns02_var.qcow2
Checking image...
Image format: qcow2
Disk format: cow
Disk content type: data
Disk provisioned size: 107374182400
Disk initial size: 107390828544
Disk name: ns02_var.qcow2
Disk backup: False
Connecting...
Creating disk...
Disk ID: 26def4e7-1153-417c-88c1-fd3dfe2b0fb9
Creating image transfer...
Transfer ID: 41518eac-8881-453e-acc0-45391fd23bc7
Transfer host name: medusa
Uploading image...
[  16.50% ] 16.50 GiB, 556.42 seconds, 30.37 MiB/s

Now, with those ID numbers, and since it kept its name (very helpful), I am
able to reconstitute the VM.
[image: image.png]

The VM boots fine. I'm fixing VLANs and manual MACs on the vNICs, but this process
worked fine.

Thanks for the input. It would be nice to have a GUI "upload" via HTTP into the
system :)







On Mon, Sep 21, 2020 at 2:19 PM Nir Soffer  wrote:

> On Mon, Sep 21, 2020 at 8:37 PM penguin pages 
> wrote:
> >
> >
> > I pasted old / file path not right example above.. But here is a cleaner
> version with error i am trying to root cause
> >
> > [root@odin vmstore]# python3
> /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py
> --engine-url https://ovirte01.penguinpages.local/ --username
> admin@internal --password-file
> /gluster_bricks/vmstore/vmstore/.ovirt.password --cafile
> /gluster_bricks/vmstore/vmstore/.ovirte01_pki-resource.cer --sd-name
> vmstore --disk-sparse /gluster_bricks/vmstore/vmstore/ns01.qcow2
> > Checking image...
> > Image format: qcow2
> > Disk format: cow
> > Disk content type: data
> > Disk provisioned size: 21474836480
> > Disk initial size: 431751168
> > Disk name: ns01.qcow2
> > Disk backup: False
> > Connecting...
> > Creating disk...
> > Traceback (most recent call last):
> >   File
> "/usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py", line
> 262, in 
> > name=args.sd_name
> >   File "/usr/lib64/python3.6/site-packages/ovirtsdk4/services.py", line
> 7697, in add
> > return self._internal_add(disk, headers, query, wait)
> >   File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line
> 232, in _internal_add
> > return future.wait() if wait else future
> >   File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line
> 55, in wait
> > return self._code(resp

[ovirt-users] Re: Upgrading self-Hosted engine from 4.3 to oVirt 4.4

2020-09-21 Thread Adam Xu


在 2020/9/21 14:09, Yedidyah Bar David 写道:

On Fri, Sep 18, 2020 at 3:50 AM Adam Xu  wrote:


在 2020/9/17 17:42, Yedidyah Bar David 写道:

On Thu, Sep 17, 2020 at 11:57 AM Adam Xu  wrote:

在 2020/9/17 16:38, Yedidyah Bar David 写道:

On Thu, Sep 17, 2020 at 11:29 AM Adam Xu  wrote:

在 2020/9/17 15:07, Yedidyah Bar David 写道:

On Thu, Sep 17, 2020 at 8:16 AM Adam Xu  wrote:

在 2020/9/16 15:53, Yedidyah Bar David 写道:

On Wed, Sep 16, 2020 at 10:46 AM Adam Xu  wrote:

在 2020/9/16 15:12, Yedidyah Bar David 写道:

On Wed, Sep 16, 2020 at 6:10 AM Adam Xu  wrote:

Hi ovirt

I just tried to upgrade a self-hosted engine from 4.3.10 to 4.4.1.4. I followed 
the steps in the document:

https://www.ovirt.org/documentation/upgrade_guide/#SHE_Upgrading_from_4-3

the old 4.3 env has a FC storage as engine storage domain and I have created a 
new FC storage vv for the new storage domain to be used in the next steps.

I backed up the old 4.3 env and prepared a totally new host to restore the env.

In chapter 4.4, step 8, it says:

"During the deployment you need to provide a new storage domain. The deployment 
script renames the 4.3 storage domain and retains its data."

It does rename the old storage domain, but it didn't let me choose a new 
storage domain during the deployment. So the new engine just deployed on the 
new host's local storage and cannot move to the FC storage domain.

Can anyone tell me what the problem is?

What do you mean by "deployed in the new host's local storage"?

Did deploy finish successfully?

I think it was not finished yet.

You did 'hosted-engine --deploy --restore-from-file=something', right?

Did this finish?

not finished yet.

What are the last few lines of the output?

[ INFO  ] You can now connect to
https://ovirt6.ntbaobei.com:6900/ovirt-engine/ and check the status of
this host and eventually remediate it, please continue only when the
host is listed as 'up'

[ INFO  ] TASK [ovirt.hosted_engine_setup : include_tasks]

[ INFO  ] ok: [localhost]
[ INFO  ] TASK [ovirt.hosted_engine_setup : Create temporary lock file]
[ INFO  ] changed: [localhost]

[ INFO  ] TASK [ovirt.hosted_engine_setup : Pause execution until
/tmp/ansible.g2opa_y6_he_setup_lock is removed, delete it once ready to
proceed]

Great. This means that you replied 'Yes' to 'Pause the execution
after adding this host to the engine?', and it's now waiting.


But the status of the new host which runs the self-hosted engine is
"NonOperational" and will never become "up"

You seem to imply that you expected it to become "up" by itself,
and you claim that this will never happen, in which you are
correct.

But that's not the intention. The message you got is:

You will be able to iteratively connect to the restored engine in
order to manually review and remediate its configuration before
proceeding with the deployment:
please ensure that all the datacenter hosts and storage domain are
listed as up or in maintenance mode before proceeding.
This is normally not required when restoring an up to date and
coherent backup.

This means that it's up to you to handle this nonoperational host,
and that you are requested to continue (by removing that file) only
then.

So now, let's try to understand why the host is nonoperational, and
try to fix that. Ok?

You should be able to find the current (private/local) IP address of
the engine vm by searching the hosted-engine setup logs for 'local_vm_ip'.
You can ssh (and scp etc.) there from the host, using user 'root' and
the password you supplied.
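(Something like 'grep -ri local_vm_ip /var/log/ovirt-hosted-engine-setup/' on the host should show it.)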

Please check/share all of /var/log/ovirt-engine on the engine vm.
In particular, please check host-deploy/* logs there. The last lines
show a summary, like:

HOSTNAME : ok=97   changed=34   unreachable=0failed=0
skipped=46   rescued=0ignored=1

my log here is:

2020-09-17 12:19:40 CST - TASK [Executing post tasks defined by user]

2020-09-17 12:19:40 CST - PLAY RECAP
*
ovirt2.ntbaobei.com: ok=99   changed=45   unreachable=0
failed=0skipped=45   rescued=0ignored=1

Good.


Is 'failed' higher than 0? If so, please find the failed task and
check/share the relevant error (or just the entire file).

Also, please check engine.log there for any ' ERROR '.

I collected some error log in engine.log

Only those below?


2020-09-17 12:14:35,084+08 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.UploadStreamVDSCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-83)
[4a6cf221] Command 'UploadStreamVDSCommand(HostName =
ovirt6.ntbaobei.com,
UploadStreamVDSCommandParameters:{hostId='784eada4-49e3-4d6c-95cd-f7c81337c2f7'})'
execution failed: java.net.SocketException: Connection reset

This, and similar ones, are expected - the engine is still on the
private network, so it can't access the other hosts.


...

2020-09-17 12:14:35,085+08 ERROR
[org.ovirt.engine.core.bll.storage.ovfstore.UploadStreamCommand]
(E

[ovirt-users] Re: oVirt - vdo: ERROR - Device /dev/sd excluded by a filter

2020-09-21 Thread Jeremey Wise
Agreed about an NVMe card being put under mpath control.

I have not even gotten to that volume / issue.   My guess is something
weird in CentOS / 4.18.0-193.19.1.el8_2.x86_64  kernel with NVMe block
devices.

I will post once I cross the bridge of getting the standard SSD volumes working.
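For reference, a minimal sketch of the local-disk blacklist Strahil describes below (the device pattern is just an example, adjust to your hardware), placed in /etc/multipath.conf:

# VDSM PRIVATE
blacklist {
    devnode "^nvme.*"
}

followed by 'systemctl reload multipathd' so the NVMe device is released from device-mapper.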

On Mon, Sep 21, 2020 at 4:12 PM Strahil Nikolov 
wrote:

> Why is your NVMe under multipath? That doesn't make sense at all.
> I have modified my multipath.conf to block all local disks. Also, don't
> forget the '# VDSM PRIVATE' line somewhere at the top of the file.
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
> On Monday, September 21, 2020, 09:04:28 GMT+3, Jeremey Wise <jeremey.w...@gmail.com> wrote:
>
>
>
>
>
>
>
>
>
>
> vdo: ERROR - Device /dev/sdc excluded by a filter
>
>
>
>
> Other server
> vdo: ERROR - Device
> /dev/mapper/nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001p1
> excluded by a filter.
>
>
> All systems when I go to create VDO volume on blank drives.. I get this
> filter error.  All disk outside of the HCI wizard setup are now blocked
> from creating new Gluster volume group.
>
> Here is what I see in /dev/lvm/lvm.conf |grep filter
> [root@odin ~]# cat /etc/lvm/lvm.conf |grep filter
> filter =
> ["a|^/dev/disk/by-id/lvm-pv-uuid-e1fvwo-kEfX-v3lT-SKBp-cgze-TwsO-PtyvmC$|",
> "a|^/dev/disk/by-id/lvm-pv-uuid-mr9awW-oQH5-F4IX-CbEO-RgJZ-x4jK-e4YZS1$|",
> "r|.*|"]
>
> [root@odin ~]# ls -al /dev/disk/by-id/
> total 0
> drwxr-xr-x. 2 root root 1220 Sep 18 14:32 .
> drwxr-xr-x. 6 root root  120 Sep 18 14:32 ..
> lrwxrwxrwx. 1 root root9 Sep 18 22:40
> ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN -> ../../sda
> lrwxrwxrwx. 1 root root   10 Sep 18 22:40
> ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part1 -> ../../sda1
> lrwxrwxrwx. 1 root root   10 Sep 18 22:40
> ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part2 -> ../../sda2
> lrwxrwxrwx. 1 root root9 Sep 18 14:32
> ata-Micron_1100_MTFDDAV512TBN_17401F699137 -> ../../sdb
> lrwxrwxrwx. 1 root root9 Sep 18 22:40
> ata-WDC_WDS100T2B0B-00YS70_183533804564 -> ../../sdc
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 dm-name-cl-home -> ../../dm-2
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 dm-name-cl-root -> ../../dm-0
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 dm-name-cl-swap -> ../../dm-1
> lrwxrwxrwx. 1 root root   11 Sep 18 16:40
> dm-name-gluster_vg_sdb-gluster_lv_data -> ../../dm-11
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40
> dm-name-gluster_vg_sdb-gluster_lv_engine -> ../../dm-6
> lrwxrwxrwx. 1 root root   11 Sep 18 16:40
> dm-name-gluster_vg_sdb-gluster_lv_vmstore -> ../../dm-12
> lrwxrwxrwx. 1 root root   10 Sep 18 23:35
> dm-name-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001
> -> ../../dm-3
> lrwxrwxrwx. 1 root root   10 Sep 18 23:49
> dm-name-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001p1
> -> ../../dm-4
> lrwxrwxrwx. 1 root root   10 Sep 18 14:32 dm-name-vdo_sdb -> ../../dm-5
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40
> dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADc49gc6PWLRBCoJ2B3JC9tDJejyx5eDPT
> -> ../../dm-1
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40
> dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADOMNJfgcat9ZLOpcNO7FyG8ixcl5s93TU
> -> ../../dm-2
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40
> dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADzqPGk0yTQ19FIqgoAfsCxWg7cDMtl71r
> -> ../../dm-0
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40
> dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOq6Om5comvRFWJDbtVZAKtE5YGl4jciP9
> -> ../../dm-6
> lrwxrwxrwx. 1 root root   11 Sep 18 16:40
> dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOqVheASEgerWSEIkjM1BR3us3D9ekHt0L
> -> ../../dm-11
> lrwxrwxrwx. 1 root root   11 Sep 18 16:40
> dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOQz6vXuivIfup6cquKAjPof8wIGOSe4Vz
> -> ../../dm-12
> lrwxrwxrwx. 1 root root   10 Sep 18 23:35
> dm-uuid-mpath-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001
> -> ../../dm-3
> lrwxrwxrwx. 1 root root   10 Sep 18 23:49
> dm-uuid-part1-mpath-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001
> -> ../../dm-4
> lrwxrwxrwx. 1 root root   10 Sep 18 14:32
> dm-uuid-VDO-472035cc-8d2b-40ac-afe9-fa60b62a887f -> ../../dm-5
> lrwxrwxrwx. 1 root root   10 Sep 18 14:32
> lvm-pv-uuid-e1fvwo-kEfX-v3lT-SKBp-cgze-TwsO-PtyvmC -> ../../dm-5
> lrwxrwxrwx. 1 root root   10 Sep 18 22:40
> lvm-pv-uuid-mr9awW-oQH5-F4IX-CbEO-RgJZ-x4jK-e4YZS1 -> ../../sda2
> lrwxrwxrwx. 1 root root   13 Sep 18 14:32
> nvme-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001
> -> ../../nvme0n1
> lrwxrwxrwx. 1 root root   15 Sep 18 14:32
> nvme-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001-part1
> -> ../../nvme0n1p1
> lrwxrwxrwx. 1 root root   13 Sep 18 14:32
> nvme-SPCC_M.2_PCIe_SSD_AA002458 -> ../../nvme0n1

[ovirt-users] Re: oVirt - Gluster Node Offline but Bricks Active

2020-09-21 Thread Jeremey Wise
Start is not an option.

It notes two bricks, but the command line shows three bricks, all present.

[root@odin thorst.penguinpages.local:_vmstore]# gluster volume status data
Status of volume: data
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick thorst.penguinpages.local:/gluster_br
icks/data/data  49152 0  Y
33123
Brick odinst.penguinpages.local:/gluster_br
icks/data/data  49152 0  Y
2970
Brick medusast.penguinpages.local:/gluster_
bricks/data/data49152 0  Y
2646
Self-heal Daemon on localhost   N/A   N/AY
3004
Self-heal Daemon on thorst.penguinpages.loc
al  N/A   N/AY
33230
Self-heal Daemon on medusast.penguinpages.l
ocalN/A   N/AY
2475

Task Status of Volume data
--
There are no active volume tasks

[root@odin thorst.penguinpages.local:_vmstore]# gluster peer status
Number of Peers: 2

Hostname: thorst.penguinpages.local
Uuid: 7726b514-e7c3-4705-bbc9-5a90c8a966c9
State: Peer in Cluster (Connected)

Hostname: medusast.penguinpages.local
Uuid: 977b2c1d-36a8-4852-b953-f75850ac5031
State: Peer in Cluster (Connected)
[root@odin thorst.penguinpages.local:_vmstore]#




On Mon, Sep 21, 2020 at 4:32 PM Strahil Nikolov 
wrote:

> Just select the volume and press "start" . It will automatically mark
> "force start" and will fix itself.
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
> On Monday, September 21, 2020, 20:53:15 GMT+3, Jeremey Wise <jeremey.w...@gmail.com> wrote:
>
>
>
>
>
>
> oVirt engine shows  one of the gluster servers having an issue.  I did a
> graceful shutdown of all three nodes over weekend as I have to move around
> some power connections in prep for UPS.
>
> Came back up.. but
>
>
>
> And this is reflected in 2 bricks online (should be three for each volume)
>
>
> Command line shows gluster should be happy.
>
> [root@thor engine]# gluster peer status
> Number of Peers: 2
>
> Hostname: odinst.penguinpages.local
> Uuid: 83c772aa-33cd-430f-9614-30a99534d10e
> State: Peer in Cluster (Connected)
>
> Hostname: medusast.penguinpages.local
> Uuid: 977b2c1d-36a8-4852-b953-f75850ac5031
> State: Peer in Cluster (Connected)
> [root@thor engine]#
>
> # All bricks showing online
> [root@thor engine]# gluster volume status
> Status of volume: data
> Gluster process TCP Port  RDMA Port  Online
>  Pid
>
> --
> Brick thorst.penguinpages.local:/gluster_br
> icks/data/data  49152 0  Y
> 11001
> Brick odinst.penguinpages.local:/gluster_br
> icks/data/data  49152 0  Y
> 2970
> Brick medusast.penguinpages.local:/gluster_
> bricks/data/data49152 0  Y
> 2646
> Self-heal Daemon on localhost   N/A   N/AY
> 50560
> Self-heal Daemon on odinst.penguinpages.loc
> al  N/A   N/AY
> 3004
> Self-heal Daemon on medusast.penguinpages.l
> ocalN/A   N/AY
> 2475
>
> Task Status of Volume data
>
> --
> There are no active volume tasks
>
> Status of volume: engine
> Gluster process TCP Port  RDMA Port  Online
>  Pid
>
> --
> Brick thorst.penguinpages.local:/gluster_br
> icks/engine/engine  49153 0  Y
> 11012
> Brick odinst.penguinpages.local:/gluster_br
> icks/engine/engine  49153 0  Y
> 2982
> Brick medusast.penguinpages.local:/gluster_
> bricks/engine/engine49153 0  Y
> 2657
> Self-heal Daemon on localhost   N/A   N/AY
> 50560
> Self-heal Daemon on odinst.penguinpages.loc
> al  N/A   N/AY
> 3004
> Self-heal Daemon on medusast.penguinpages.l
> ocalN/A   N/AY
> 2475
>
> Task Status of Volume engine
>
> --
> There are no active volume tasks
>
> Status of volume: iso
> Gluster process TCP Port  RDMA Port  Online
>  Pid
>
> --
> Brick thorst.penguinpages.local:/gluster_br
> icks/iso/iso49156 49157  Y
> 151426
> Brick o

[ovirt-users] Re: How to discover why a VM is getting suspended without recovery possibility?

2020-09-21 Thread Vinícius Ferrão via Users
Strahil, thank you man. We finally got some output:

2020-09-15T12:34:49.362238Z qemu-kvm: warning: CPU(s) not present in any NUMA 
nodes: CPU 10 [socket-id: 10, core-id: 0, thread-id: 0], CPU 11 [socket-id: 11, 
core-id: 0, thread-id: 0], CPU 12 [socket-id: 12, core-id: 0, thread-id: 0], 
CPU 13 [socket-id: 13, core-id: 0, thread-id: 0], CPU 14 [socket-id: 14, 
core-id: 0, thread-id: 0], CPU 15 [socket-id: 15, core-id: 0, thread-id: 0]
2020-09-15T12:34:49.362265Z qemu-kvm: warning: All CPU(s) up to maxcpus should 
be described in NUMA config, ability to start up with partial NUMA mappings is 
obsoleted and will be removed in future
KVM: entry failed, hardware error 0x8021

If you're running a guest on an Intel machine without unrestricted mode
support, the failure can be most likely due to the guest entering an invalid
state for Intel VT. For example, the guest maybe running in big real mode
which is not supported on less recent Intel processors.

EAX= EBX=01746180 ECX=4be7c002 EDX=000400b6
ESI=8b3d6080 EDI=02d70400 EBP=a19bbdfe ESP=82883770
EIP=8000 EFL=0002 [---] CPL=0 II=0 A20=1 SMM=1 HLT=0
ES =   00809300
CS =8d00 7ff8d000  00809300
SS =   00809300
DS =   00809300
FS =   00809300
GS =   00809300
LDT=  000f 
TR =0040 04c59000 0067 8b00
GDT= 04c5afb0 0057
IDT=  
CR0=00050032 CR2=c1b7ec48 CR3=001ad002 CR4=
DR0= DR1= DR2= 
DR3= 
DR6=0ff0 DR7=0400
EFER=
Code=ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff  ff ff ff 
ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
2020-09-16T04:11:55.344128Z qemu-kvm: terminating on signal 15 from pid 1 
()
2020-09-16 04:12:02.212+: shutting down, reason=shutdown






That's the issue: I got this in the logs of both physical machines. It's not very 
likely that both machines are damaged, right? So even though the log says it's a 
hardware error, could it be software related? And again, this only happens with 
this VM.

> On 21 Sep 2020, at 17:36, Strahil Nikolov  wrote:
> 
> Usually libvirt's log might provide hints (though no definite clues) about any issues.
> 
> For example: 
> /var/log/libvirt/qemu/.log
> 
> Anything changed recently (maybe oVirt version was increased) ?
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> On Monday, September 21, 2020, 23:28:13 GMT+3, Vinícius Ferrão wrote:
> 
> 
> 
> 
> 
> Hi Strahil, 
> 
> 
> 
> Both disks are VirtIO-SCSI and are Preallocated:
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Thanks,
> 
> 
> 
> 
> 
> 
> 
> 
>>   
>> On 21 Sep 2020, at 17:09, Strahil Nikolov  wrote:
>> 
>> 
>>   
>> What type of disks are you using? Any chance you use thin disks?
>> 
>> Best Regards,
>> Strahil Nikolov
>> 
>> 
>> 
>> 
>> 
>> 
>> On Monday, September 21, 2020, 07:20:23 GMT+3, Vinícius Ferrão via Users wrote:
>> 
>> 
>> 
>> 
>> 
>> Hi, sorry to bump the thread.
>> 
>> But I'm still stuck with this issue on the VM. The crashes are still happening, and 
>> I really don't know what to do. Since there's nothing in the logs, except for 
>> that message in `dmesg` of the host machine, I started changing settings to 
>> see if anything changes or at least a pattern emerges.
>> 
>> What I’ve tried:
>> 1. Disabled I/O Threading on VM.
>> 2. Increased I/O Threading to 2 from 1.
>> 3. Disabled Memory Ballooning.
>> 4. Reduced VM resources from 10 CPUs and 48GB of RAM to 6 CPUs and 24GB of 
>> RAM.
>> 5. Moved the VM to another host.
>> 6. Dedicated a host specific to this VM.
>> 7. Check on the storage system to see if there’s any resource starvation, 
>> but everything seems to be fine.
>> 8. Checked both iSCSI switches to see if there’s something wrong with the 
>> fabrics: 0 errors.
>> 
>> I’m really running out of ideas. The VM was working normally and suddenly 
>> this started.
>> 
>> Thanks,
>> 
>> PS: When I was typing this message it crashed again:
>> 
>> [427483.126725] *** Guest State ***
>> [427483.127661] CR0: actual=0x00050032, shadow=0x00050032, 
>> gh_mask=fff7
>> [427483.128505] CR4: actual=0x2050, shadow=0x, 
>> gh_mask=f871
>> [427483.129342] CR3 = 0x0001849ff002
>> [427483.130177] RSP = 0xb10186b0  RIP = 0x8000
>> [427483.131014] RFLAGS=0x0002DR7 = 0x0400
>> [427483.131859] Sysenter RSP= CS:RIP=:
>> [427483.132708] CS:  sel=0x9b00, attr=0x08093, limit=0x, 
>> base=0x7ff9b000
>> [427483.133559] DS:  sel=0x, attr=0x08093, limit=0x, 
>> base=0x
>> [427483.134413] SS:  sel=0x, attr=0x08093, limit=0x, 
>> base=0x
>> [427483.135237] ES:  sel=0x, at

[ovirt-users] Re: How to discover why a VM is getting suspended without recovery possibility?

2020-09-21 Thread Strahil Nikolov via Users
Usually libvirt's log might provide hints (though no definite clues) about any issues.

For example: 
/var/log/libvirt/qemu/.log

Anything changed recently (maybe oVirt version was increased) ?

Best Regards,
Strahil Nikolov






On Monday, September 21, 2020, 23:28:13 GMT+3, Vinícius Ferrão wrote:





Hi Strahil, 



Both disks are VirtIO-SCSI and are Preallocated:














Thanks,








>  
> On 21 Sep 2020, at 17:09, Strahil Nikolov  wrote:
> 
> 
>  
> What type of disks are you using? Any chance you use thin disks?
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> On Monday, September 21, 2020, 07:20:23 GMT+3, Vinícius Ferrão via Users wrote:
> 
> 
> 
> 
> 
> Hi, sorry to bump the thread.
> 
> But I'm still stuck with this issue on the VM. The crashes are still happening, and 
> I really don't know what to do. Since there's nothing in the logs, except for 
> that message in `dmesg` of the host machine, I started changing settings to see 
> if anything changes or at least a pattern emerges.
> 
> What I’ve tried:
> 1. Disabled I/O Threading on VM.
> 2. Increased I/O Threading to 2 from 1.
> 3. Disabled Memory Ballooning.
> 4. Reduced VM resources from 10 CPUs and 48GB of RAM to 6 CPUs and 24GB of 
> RAM.
> 5. Moved the VM to another host.
> 6. Dedicated a host specific to this VM.
> 7. Check on the storage system to see if there’s any resource starvation, but 
> everything seems to be fine.
> 8. Checked both iSCSI switches to see if there’s something wrong with the 
> fabrics: 0 errors.
> 
> I’m really running out of ideas. The VM was working normally and suddenly 
> this started.
> 
> Thanks,
> 
> PS: When I was typing this message it crashed again:
> 
> [427483.126725] *** Guest State ***
> [427483.127661] CR0: actual=0x00050032, shadow=0x00050032, 
> gh_mask=fff7
> [427483.128505] CR4: actual=0x2050, shadow=0x, 
> gh_mask=f871
> [427483.129342] CR3 = 0x0001849ff002
> [427483.130177] RSP = 0xb10186b0  RIP = 0x8000
> [427483.131014] RFLAGS=0x0002        DR7 = 0x0400
> [427483.131859] Sysenter RSP= CS:RIP=:
> [427483.132708] CS:  sel=0x9b00, attr=0x08093, limit=0x, 
> base=0x7ff9b000
> [427483.133559] DS:  sel=0x, attr=0x08093, limit=0x, 
> base=0x
> [427483.134413] SS:  sel=0x, attr=0x08093, limit=0x, 
> base=0x
> [427483.135237] ES:  sel=0x, attr=0x08093, limit=0x, 
> base=0x
> [427483.136040] FS:  sel=0x, attr=0x08093, limit=0x, 
> base=0x
> [427483.136842] GS:  sel=0x, attr=0x08093, limit=0x, 
> base=0x
> [427483.137629] GDTR:                          limit=0x0057, 
> base=0xb10186eb4fb0
> [427483.138409] LDTR: sel=0x, attr=0x1, limit=0x000f, 
> base=0x
> [427483.139202] IDTR:                          limit=0x, 
> base=0x
> [427483.139998] TR:  sel=0x0040, attr=0x0008b, limit=0x0067, 
> base=0xb10186eb3000
> [427483.140816] EFER =    0x  PAT = 0x0007010600070106
> [427483.141650] DebugCtl = 0x  DebugExceptions = 
> 0x
> [427483.142503] Interruptibility = 0009  ActivityState = 
> [427483.143353] *** Host State ***
> [427483.144194] RIP = 0xc0c65024  RSP = 0x9253c0b9bc90
> [427483.145043] CS=0010 SS=0018 DS= ES= FS= GS= TR=0040
> [427483.145903] FSBase=7fcc13816700 GSBase=925adf24 
> TRBase=925adf244000
> [427483.146766] GDTBase=925adf24c000 IDTBase=ff528000
> [427483.147630] CR0=80050033 CR3=0010597b6000 CR4=001627e0
> [427483.148498] Sysenter RSP= CS:RIP=0010:8f196cc0
> [427483.149365] EFER = 0x0d01  PAT = 0x0007050600070106
> [427483.150231] *** Control State ***
> [427483.151077] PinBased=003f CPUBased=b6a1edfa SecondaryExec=0ceb
> [427483.151942] EntryControls=d1ff ExitControls=002fefff
> [427483.152800] ExceptionBitmap=00060042 PFECmask= PFECmatch=
> [427483.153661] VMEntry: intr_info= errcode=0006 ilen=
> [427483.154521] VMExit: intr_info= errcode= ilen=0004
> [427483.155376]        reason=8021 qualification=
> [427483.156230] IDTVectoring: info= errcode=
> [427483.157068] TSC Offset = 0xfffccfc261506dd9
> [427483.157905] TPR Threshold = 0x0d
> [427483.158728] EPT pointer = 0x0009b437701e
> [427483.159550] PLE Gap=0080 Window=0008
> [427483.160370] Virtual processor ID = 0x0004
> 
> 
> 
>> On 16 Sep 2020, at 17:11, Vinícius Ferrão  wrote:
>> 
>> Hello,
>> 
>> I’m an Exchange Server VM that’s going down to suspend without possibility 
>> of recovery. I need to click on shutdown and them power on. I can’t find 
>> anything useful

[ovirt-users] Re: oVirt - Gluster Node Offline but Bricks Active

2020-09-21 Thread Strahil Nikolov via Users
Just select the volume and press "Start". It will automatically mark "force 
start" and will fix itself.

Best Regards,
Strahil Nikolov






On Monday, September 21, 2020, 20:53:15 GMT+3, Jeremey Wise wrote:






oVirt engine shows  one of the gluster servers having an issue.  I did a 
graceful shutdown of all three nodes over weekend as I have to move around some 
power connections in prep for UPS.

Came back up.. but



And this is reflected in 2 bricks online (should be three for each volume)


Command line shows gluster should be happy.

[root@thor engine]# gluster peer status
Number of Peers: 2

Hostname: odinst.penguinpages.local
Uuid: 83c772aa-33cd-430f-9614-30a99534d10e
State: Peer in Cluster (Connected)

Hostname: medusast.penguinpages.local
Uuid: 977b2c1d-36a8-4852-b953-f75850ac5031
State: Peer in Cluster (Connected)
[root@thor engine]#

# All bricks showing online
[root@thor engine]# gluster volume status
Status of volume: data
Gluster process                             TCP Port  RDMA Port  Online  Pid
--
Brick thorst.penguinpages.local:/gluster_br
icks/data/data                              49152     0          Y       11001
Brick odinst.penguinpages.local:/gluster_br
icks/data/data                              49152     0          Y       2970
Brick medusast.penguinpages.local:/gluster_
bricks/data/data                            49152     0          Y       2646
Self-heal Daemon on localhost               N/A       N/A        Y       50560
Self-heal Daemon on odinst.penguinpages.loc
al                                          N/A       N/A        Y       3004
Self-heal Daemon on medusast.penguinpages.l
ocal                                        N/A       N/A        Y       2475

Task Status of Volume data
--
There are no active volume tasks

Status of volume: engine
Gluster process                             TCP Port  RDMA Port  Online  Pid
--
Brick thorst.penguinpages.local:/gluster_br
icks/engine/engine                          49153     0          Y       11012
Brick odinst.penguinpages.local:/gluster_br
icks/engine/engine                          49153     0          Y       2982
Brick medusast.penguinpages.local:/gluster_
bricks/engine/engine                        49153     0          Y       2657
Self-heal Daemon on localhost               N/A       N/A        Y       50560
Self-heal Daemon on odinst.penguinpages.loc
al                                          N/A       N/A        Y       3004
Self-heal Daemon on medusast.penguinpages.l
ocal                                        N/A       N/A        Y       2475

Task Status of Volume engine
--
There are no active volume tasks

Status of volume: iso
Gluster process                             TCP Port  RDMA Port  Online  Pid
--
Brick thorst.penguinpages.local:/gluster_br
icks/iso/iso                                49156     49157      Y       151426
Brick odinst.penguinpages.local:/gluster_br
icks/iso/iso                                49156     49157      Y       69225
Brick medusast.penguinpages.local:/gluster_
bricks/iso/iso                              49156     49157      Y       45018
Self-heal Daemon on localhost               N/A       N/A        Y       50560
Self-heal Daemon on odinst.penguinpages.loc
al                                          N/A       N/A        Y       3004
Self-heal Daemon on medusast.penguinpages.l
ocal                                        N/A       N/A        Y       2475

Task Status of Volume iso
--
There are no active volume tasks

Status of volume: vmstore
Gluster process                             TCP Port  RDMA Port  Online  Pid
--
Brick thorst.penguinpages.local:/gluster_br
icks/vmstore/vmstore                        49154     0          Y       11023
Brick odinst.penguinpages.local:/gluster_br
icks/vmstore/vmstore                        49154     0          Y       2993
Brick medusast.penguinpages.local:/gluster_
bricks/vmstore/vmstore                      49154     0          Y       2668
Self-heal Daemon on localhost               N/A       N/A        Y       50560
Self-heal Daemon on medusast.penguinpages.l
ocal                                        N/A       N/A        Y       2475
Self-heal Daemon on odinst.penguinpages.loc
al                                          N/A       N/A        Y       3004

Task Status of Volume vmstore
--
There are no active volume tasks

[ovirt-users] Re: Gluster Domain Storage full

2020-09-21 Thread Strahil Nikolov via Users
Usually gluster has a 10% reserve defined in the 'cluster.min-free-disk' volume 
option.
You can power off the VM, then set cluster.min-free-disk
to 1% and immediately move any of the VM's disks to another storage domain.
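(Roughly:

gluster volume set <volname> cluster.min-free-disk 1%

and set it back to the 10% default once the disk has been moved off.)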

Keep in mind that filling your bricks is bad, and if you eat into that reserve, the 
only option would be to export the VM as an OVA, wipe it from the current 
storage, and import it into a bigger storage domain.

Of course it would be more sensible to just expand the gluster volume (either 
scale-up the bricks -> add more disks, or scale-out -> adding more servers with 
disks on them), but I guess that is not an option - right ?

Best Regards,
Strahil Nikolov








On Monday, September 21, 2020, 15:58:01 GMT+3, supo...@logicworks.pt wrote:





Hello,

I'm running oVirt Version 4.3.4.3-1.el7.
I have a small GlusterFS Domain storage brick on a dedicated filesystem serving 
only one VM.
The VM filled all the Domain storage.
The Linux filesystem has 4.1G available and 100% used, the mounted brick has 
0GB available and 100% used

I cannot do anything with this disk. For example, if I try to move it to 
another Gluster Domain Storage, I get the message:

Error while executing action: Cannot move Virtual Disk. Low disk space on 
Storage Domain

Any idea?

Thanks

-- 

Jose Ferradeira
http://www.logicworks.pt
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WFN2VOQZPPVCGXAIFEYVIDEVJEUCSWY7/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4XQYUD2DGE4CBMYXEWRKMOYBSEGW4Y2O/


[ovirt-users] Re: Question on "Memory" column/field in Virtual Machines list/table in ovirt GUI

2020-09-21 Thread Strahil Nikolov via Users
For some OS versions, oVirt's behavior is accurate, but for other 
versions it's not. I think it is more accurate to say that oVirt improperly calculates 
memory for SLES 15/openSUSE 15.

I would open a bug at bugzilla.redhat.com .


Best Regards,
Strahil Nikolov






On Monday, September 21, 2020, 15:15:42 GMT+3, KISHOR K wrote:





Hi,

I think I already checked that.
What I meant (from the beginning) was that, in our case, ovirt reports memory usage in 
the GUI the same way regardless of CentOS or SLES.
My main question is why ovirt reports the memory usage percentage based on 
"free" memory rather than on "available" memory, which is basically the 
sum of "free" and "buff/cache". 
Buffer/cache is temporary memory that gets released anyway for new 
processes and applications. 
That means ovirt should consider the actual available memory 
left and report usage accordingly in the GUI, but what we see now is different 
behavior.
I was very worried when I saw memory usage at 98%, highlighted in red, 
for many of the VMs in the GUI. But when I checked the memory actually used by each VM, 
it was always below 50%.
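(For reference, the difference is visible inside the guest with:

free -m                            # compare the 'free' and 'available' columns
grep MemAvailable /proc/meminfo    # the kernel's estimate of actually allocatable memory

where 'available' is roughly 'free' plus reclaimable buff/cache.)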

Could you clarify how this behavior from ovirt can be OS specific?

I hope I explained the issue clearly; let me know if it is still unclear.
Thanks in advance.


/Kishore
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HAJ6TT74U33FAFIJTXTYZHVHYKKSWMN7/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GZATZSXBJCPY56MCEAHIGZSK7L3LV2IS/


[ovirt-users] Re: Cannot import VM disks from previously detached storage domain

2020-09-21 Thread Strahil Nikolov via Users
Hey Eyal,

it's really irritating that only ISOs can be imported as disks.

I had to:
1. Delete snapshot (but I really wanted to keep it)
2. Detach all disks from existing VM
3. Delete the VM
4. Import the Vm from the data domain
5. Delete the snapshot , so disks from data domain are "in sync" with the 
non-data disks
6. Attach the non-data disks to the VM

If all the disks for a VM had been on the same storage domain, I wouldn't have had to 
wipe my snapshots.

Should I file a RFE in order to allow disk import for non-ISO disks ?
If I wanted to rebuild the engine and import the sotrage domains I would have 
to import the VM the first time , just to delete it and import it again - so I 
can get my VM disks from the storage...

Best Regards,
Strahil Nikolov





В понеделник, 21 септември 2020 г., 11:47:04 Гринуич+3, Eyal Shenitzky 
 написа: 





Hi Strahil, 

Maybe those VMs have more disks on different data storage domains?
If so, those VMs will remain in the environment with the disks that are not 
located on the detached storage domain.

You can try to import the VM as partial. Another option is to remove the VM that 
remained in the environment but keep the disks, so you will be able to import 
the VM again and attach the disks to it.
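(For reference, a minimal sketch of removing a VM while preserving its disks via 
the REST API - the engine URL and VM id are placeholders, and you should verify 
that your version supports the detach_only flag on VM removal before relying on 
it:)

curl -k -u admin@internal:password \
  -X DELETE \
  -H 'Content-Type: application/xml' \
  -d '<action><detach_only>true</detach_only></action>' \
  'https://engine.example.com/ovirt-engine/api/vms/<vm-id>'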

On Sat, 19 Sep 2020 at 15:49, Strahil Nikolov via Users  wrote:
> Hello All,
> 
> I would like to ask how to proceed further.
> 
> Here is what I have done so far on my ovirt 4.3.10:
> 1. Set in maintenance and detached my Gluster-based storage domain
> 2. Did some maintenance on the gluster
> 3. Reattached and activated my Gluster-based storage domain
> 4. I have imported my ISOs via the Disk Import tab in UI
> 
> Next I tried to import the VM Disks , but they are unavailable in the disk tab
> So I tried to import the VM:
> 1. First try - import with partial -> failed due to MAC conflict
> 2. Second try - import with partial , allow MAC reassignment -> failed as VM 
> id exists -> recommends to remove the original VM
> 3. I tried to detach the VMs disks , so I can delete it - but this is not 
> possible as the Vm already got a snapshot.
> 
> 
> What is the proper way to import my non-OS disks (data domain is slower but 
> has more space which is more suitable for "data") ?
> 
> 
> Best Regards,
> Strahil Nikolov
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/WTJXOIVDWU6DGVZQQ243VKGWJLPKHR4L/
> 


-- 
Regards,
Eyal Shenitzky
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4F64EFRW5AHOOIMSB2OOFF4FVWCZ4YV4/


[ovirt-users] Re: oVirt - KVM QCow2 Import

2020-09-21 Thread Strahil Nikolov via Users
Have you tried to upload your qcow2 disks via the UI?
Maybe you can create a blank VM (with disks of the same size) and then replace 
the disks with your qcow2 files from KVM (works only on file-based storage like 
Gluster/NFS).
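(A rough sketch of that replace approach, assuming file-based storage - every 
path and UUID below is a placeholder, and the format you convert to must match 
the format oVirt created for the blank disk:)

# with the blank VM shut down, locate its new volume under the storage domain mount
VOL=/rhev/data-center/mnt/glusterSD/<server>:_vmstore/<sd-uuid>/images/<disk-uuid>/<vol-uuid>
# overwrite it with the old KVM disk's contents (use -O raw if the blank disk is raw)
qemu-img convert -p -f qcow2 -O qcow2 /path/to/old-vm.qcow2 "$VOL"
chown vdsm:kvm "$VOL"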

Best Regards,
Strahil Nikolov






В понеделник, 21 септември 2020 г., 09:12:09 Гринуич+3, Jeremey Wise 
 написа: 






I rebuilt my lab environment, and there are four or five VMs that it would really 
help if I did not have to rebuild.

As I am now finding, when oVirt creates its infrastructure it lays things out 
such that I cannot just use the older approach of placing .qcow2 files in one 
folder and .xml files in another and having them show up when the services 
restart.

How do I import VMs from files?

I found these articles, but they imply the VM is running: 
https://www.ovirt.org/develop/release-management/features/virt/KvmToOvirt.html
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/administration_guide/sect-adding_external_providers#Adding_KVM_as_an_External_Provider

I need a way to import a file, even if it means temporarily hosting it on KVM on 
one of the hosts and then bringing it in once it is up.


Thanks
-- 

penguinpages 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IKBJ755HJ6DAFQKU5TBTARJSKTH4RW3A/


[ovirt-users] Re: oVirt - vdo: ERROR - Device /dev/sd excluded by a filter

2020-09-21 Thread Strahil Nikolov via Users
Why is your NVMe under multipath? That doesn't make sense at all.
I have modified my multipath.conf to blacklist all local disks. Also, don't forget 
the '# VDSM PRIVATE' line somewhere at the top of the file.
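(For example, a minimal blacklist sketch - take the real wwid from `multipath -ll` 
if you prefer to match a single device, and keep the marker line so VDSM does not 
regenerate the file:)

# /etc/multipath.conf  (or a drop-in under /etc/multipath/conf.d/)
# VDSM PRIVATE
blacklist {
    devnode "^nvme.*"
    # wwid "<wwid-of-the-local-disk>"
}

systemctl reload multipathd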

Best Regards,
Strahil Nikolov






В понеделник, 21 септември 2020 г., 09:04:28 Гринуич+3, Jeremey Wise 
 написа: 










vdo: ERROR - Device /dev/sdc excluded by a filter




Other server
vdo: ERROR - Device 
/dev/mapper/nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001p1
 excluded by a filter.


On all systems, when I go to create a VDO volume on blank drives, I get this 
filter error. All disks outside of the HCI wizard setup are now blocked from 
being used to create a new Gluster volume group.

Here is what I see in /etc/lvm/lvm.conf | grep filter:
[root@odin ~]# cat /etc/lvm/lvm.conf |grep filter
filter = 
["a|^/dev/disk/by-id/lvm-pv-uuid-e1fvwo-kEfX-v3lT-SKBp-cgze-TwsO-PtyvmC$|", 
"a|^/dev/disk/by-id/lvm-pv-uuid-mr9awW-oQH5-F4IX-CbEO-RgJZ-x4jK-e4YZS1$|", 
"r|.*|"]

[root@odin ~]# ls -al /dev/disk/by-id/
total 0
drwxr-xr-x. 2 root root 1220 Sep 18 14:32 .
drwxr-xr-x. 6 root root  120 Sep 18 14:32 ..
lrwxrwxrwx. 1 root root    9 Sep 18 22:40 
ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN -> ../../sda
lrwxrwxrwx. 1 root root   10 Sep 18 22:40 
ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part1 -> ../../sda1
lrwxrwxrwx. 1 root root   10 Sep 18 22:40 
ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part2 -> ../../sda2
lrwxrwxrwx. 1 root root    9 Sep 18 14:32 
ata-Micron_1100_MTFDDAV512TBN_17401F699137 -> ../../sdb
lrwxrwxrwx. 1 root root    9 Sep 18 22:40 
ata-WDC_WDS100T2B0B-00YS70_183533804564 -> ../../sdc
lrwxrwxrwx. 1 root root   10 Sep 18 16:40 dm-name-cl-home -> ../../dm-2
lrwxrwxrwx. 1 root root   10 Sep 18 16:40 dm-name-cl-root -> ../../dm-0
lrwxrwxrwx. 1 root root   10 Sep 18 16:40 dm-name-cl-swap -> ../../dm-1
lrwxrwxrwx. 1 root root   11 Sep 18 16:40 
dm-name-gluster_vg_sdb-gluster_lv_data -> ../../dm-11
lrwxrwxrwx. 1 root root   10 Sep 18 16:40 
dm-name-gluster_vg_sdb-gluster_lv_engine -> ../../dm-6
lrwxrwxrwx. 1 root root   11 Sep 18 16:40 
dm-name-gluster_vg_sdb-gluster_lv_vmstore -> ../../dm-12
lrwxrwxrwx. 1 root root   10 Sep 18 23:35 
dm-name-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001
 -> ../../dm-3
lrwxrwxrwx. 1 root root   10 Sep 18 23:49 
dm-name-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001p1
 -> ../../dm-4
lrwxrwxrwx. 1 root root   10 Sep 18 14:32 dm-name-vdo_sdb -> ../../dm-5
lrwxrwxrwx. 1 root root   10 Sep 18 16:40 
dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADc49gc6PWLRBCoJ2B3JC9tDJejyx5eDPT -> 
../../dm-1
lrwxrwxrwx. 1 root root   10 Sep 18 16:40 
dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADOMNJfgcat9ZLOpcNO7FyG8ixcl5s93TU -> 
../../dm-2
lrwxrwxrwx. 1 root root   10 Sep 18 16:40 
dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADzqPGk0yTQ19FIqgoAfsCxWg7cDMtl71r -> 
../../dm-0
lrwxrwxrwx. 1 root root   10 Sep 18 16:40 
dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOq6Om5comvRFWJDbtVZAKtE5YGl4jciP9 -> 
../../dm-6
lrwxrwxrwx. 1 root root   11 Sep 18 16:40 
dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOqVheASEgerWSEIkjM1BR3us3D9ekHt0L -> 
../../dm-11
lrwxrwxrwx. 1 root root   11 Sep 18 16:40 
dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOQz6vXuivIfup6cquKAjPof8wIGOSe4Vz -> 
../../dm-12
lrwxrwxrwx. 1 root root   10 Sep 18 23:35 
dm-uuid-mpath-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001
 -> ../../dm-3
lrwxrwxrwx. 1 root root   10 Sep 18 23:49 
dm-uuid-part1-mpath-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001
 -> ../../dm-4
lrwxrwxrwx. 1 root root   10 Sep 18 14:32 
dm-uuid-VDO-472035cc-8d2b-40ac-afe9-fa60b62a887f -> ../../dm-5
lrwxrwxrwx. 1 root root   10 Sep 18 14:32 
lvm-pv-uuid-e1fvwo-kEfX-v3lT-SKBp-cgze-TwsO-PtyvmC -> ../../dm-5
lrwxrwxrwx. 1 root root   10 Sep 18 22:40 
lvm-pv-uuid-mr9awW-oQH5-F4IX-CbEO-RgJZ-x4jK-e4YZS1 -> ../../sda2
lrwxrwxrwx. 1 root root   13 Sep 18 14:32 
nvme-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001
 -> ../../nvme0n1
lrwxrwxrwx. 1 root root   15 Sep 18 14:32 
nvme-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001-part1
 -> ../../nvme0n1p1
lrwxrwxrwx. 1 root root   13 Sep 18 14:32 
nvme-SPCC_M.2_PCIe_SSD_AA002458 -> ../../nvme0n1
lrwxrwxrwx. 1 root root   15 Sep 18 14:32 
nvme-SPCC_M.2_PCIe_SSD_AA002458-part1 -> ../../nvme0n1p1
lrwxrwxrwx. 1 root root    9 Sep 18 22:40 
scsi-0ATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN -> ../../sda
lrwxrwxrwx. 1 root root   10 Sep 18 22:40 
scsi-0ATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN-part1 -> ../../sda1
lrwxrwxrwx. 1 root root   10 Sep 18 22:40 
scsi-0ATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN-part2 -> ../../sda2
lrwxrwxrwx. 1 root root    9 Sep 18 14:32 
scsi-0ATA_Micron_1100_MTFD_17401F699137 -> ../

[ovirt-users] Re: How to discover why a VM is getting suspended without recovery possibility?

2020-09-21 Thread Strahil Nikolov via Users
What type of disks are you using? Any chance you use thin-provisioned disks?

Best Regards,
Strahil Nikolov






В понеделник, 21 септември 2020 г., 07:20:23 Гринуич+3, Vinícius Ferrão via 
Users  написа: 





Hi, sorry to bump the thread.

But I'm still having this issue with the VM. These crashes are still happening, 
and I really don't know what to do. Since there's nothing in the logs, except 
for that message in `dmesg` on the host machine, I started changing settings to 
see if anything changes or if I at least get a pattern.

What I've tried:
1. Disabled I/O threading on the VM.
2. Increased I/O threads from 1 to 2.
3. Disabled memory ballooning.
4. Reduced VM resources from 10 CPUs and 48GB of RAM to 6 CPUs and 24GB of 
RAM.
5. Moved the VM to another host.
6. Dedicated a host specifically to this VM.
7. Checked the storage system for any resource starvation, but everything seems 
to be fine.
8. Checked both iSCSI switches to see if there's something wrong with the 
fabrics: 0 errors.

I'm really running out of ideas. The VM was working normally and suddenly this 
started.
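(For reference, besides dmesg, the per-VM qemu log and the vdsm log on the host 
usually record why a VM was paused - default paths, and the grep patterns are 
just a starting point since messages vary by version:)

# on the host that was running the VM
tail -n 100 /var/log/libvirt/qemu/<vm-name>.log
grep -iE 'pause|abnormal vm stop' /var/log/vdsm/vdsm.log | tail -n 50
# on the engine
grep -i '<vm-name>' /var/log/ovirt-engine/engine.log | tail -n 50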

Thanks,

PS: When I was typing this message it crashed again:

[427483.126725] *** Guest State ***
[427483.127661] CR0: actual=0x00050032, shadow=0x00050032, 
gh_mask=fff7
[427483.128505] CR4: actual=0x2050, shadow=0x, 
gh_mask=f871
[427483.129342] CR3 = 0x0001849ff002
[427483.130177] RSP = 0xb10186b0  RIP = 0x8000
[427483.131014] RFLAGS=0x0002        DR7 = 0x0400
[427483.131859] Sysenter RSP= CS:RIP=:
[427483.132708] CS:  sel=0x9b00, attr=0x08093, limit=0x, 
base=0x7ff9b000
[427483.133559] DS:  sel=0x, attr=0x08093, limit=0x, 
base=0x
[427483.134413] SS:  sel=0x, attr=0x08093, limit=0x, 
base=0x
[427483.135237] ES:  sel=0x, attr=0x08093, limit=0x, 
base=0x
[427483.136040] FS:  sel=0x, attr=0x08093, limit=0x, 
base=0x
[427483.136842] GS:  sel=0x, attr=0x08093, limit=0x, 
base=0x
[427483.137629] GDTR:                          limit=0x0057, 
base=0xb10186eb4fb0
[427483.138409] LDTR: sel=0x, attr=0x1, limit=0x000f, 
base=0x
[427483.139202] IDTR:                          limit=0x, 
base=0x
[427483.139998] TR:  sel=0x0040, attr=0x0008b, limit=0x0067, 
base=0xb10186eb3000
[427483.140816] EFER =    0x  PAT = 0x0007010600070106
[427483.141650] DebugCtl = 0x  DebugExceptions = 
0x
[427483.142503] Interruptibility = 0009  ActivityState = 
[427483.143353] *** Host State ***
[427483.144194] RIP = 0xc0c65024  RSP = 0x9253c0b9bc90
[427483.145043] CS=0010 SS=0018 DS= ES= FS= GS= TR=0040
[427483.145903] FSBase=7fcc13816700 GSBase=925adf24 
TRBase=925adf244000
[427483.146766] GDTBase=925adf24c000 IDTBase=ff528000
[427483.147630] CR0=80050033 CR3=0010597b6000 CR4=001627e0
[427483.148498] Sysenter RSP= CS:RIP=0010:8f196cc0
[427483.149365] EFER = 0x0d01  PAT = 0x0007050600070106
[427483.150231] *** Control State ***
[427483.151077] PinBased=003f CPUBased=b6a1edfa SecondaryExec=0ceb
[427483.151942] EntryControls=d1ff ExitControls=002fefff
[427483.152800] ExceptionBitmap=00060042 PFECmask= PFECmatch=
[427483.153661] VMEntry: intr_info= errcode=0006 ilen=
[427483.154521] VMExit: intr_info= errcode= ilen=0004
[427483.155376]        reason=8021 qualification=
[427483.156230] IDTVectoring: info= errcode=
[427483.157068] TSC Offset = 0xfffccfc261506dd9
[427483.157905] TPR Threshold = 0x0d
[427483.158728] EPT pointer = 0x0009b437701e
[427483.159550] PLE Gap=0080 Window=0008
[427483.160370] Virtual processor ID = 0x0004


> On 16 Sep 2020, at 17:11, Vinícius Ferrão  wrote:
> 
> Hello,
> 
> I have an Exchange Server VM that keeps going into a suspended state without 
> any possibility of recovery. I need to click on shutdown and then power on. I 
> can't find anything useful in the logs, except in "dmesg" on the host:
> 
> [47807.747606] *** Guest State ***
> [47807.747633] CR0: actual=0x00050032, shadow=0x00050032, 
> gh_mask=fff7
> [47807.747671] CR4: actual=0x2050, shadow=0x, 
> gh_mask=f871
> [47807.747721] CR3 = 0x001ad002
> [47807.747739] RSP = 0xc20904fa3770  RIP = 0x8000
> [47807.747766] RFLAGS=0x0002        DR7 = 0x0400
> [47807.747792] Sysenter RSP= CS:RIP=:
> [47807.747821] CS:  sel=0x9100, attr=0x08093, limit=0x, 
> base=0x7ff91000
> [47807.747855] DS:  sel=0x, attr=0

[ovirt-users] Re: hosted engine migration

2020-09-21 Thread Strahil Nikolov via Users
Can you put 1 host in maintenance, use "Installation" -> "Reinstall", and enable 
the HE deployment from one of the tabs?

Best Regards,
Strahil Nikolov






В понеделник, 21 септември 2020 г., 06:38:06 Гринуич+3, ddqlo  
написа: 





So strange! After I set global maintenance, powered off and started the HE, the 
CPU of the HE became 'Westmere' (I did not change anything). But the HE still 
could not be migrated.

HE xml:
  
    Westmere
    
    
    
    
    
    
    
      
    
  

host capabilities: 
Westmere

cluster cpu type (UI): 


host cpu type (UI):


HE cpu type (UI):







在 2020-09-19 13:27:35,"Strahil Nikolov"  写道:
>Hm... interesting.
>
>The VM is using 'Haswell-noTSX'  while the host is 'Westmere'.
>
>In my case I got no difference:
>
>[root@ovirt1 ~]# virsh  dumpxml HostedEngine | grep Opteron
>   Opteron_G5
>[root@ovirt1 ~]# virsh capabilities | grep Opteron
> Opteron_G5
>
>Did you update the cluster holding the Hosted Engine ?
>
>
>I guess you can try to:
>
>- Set global maintenance
>- Power off the HostedEngine VM
>- virsh dumpxml HostedEngine > /root/HE.xml
>- use virsh edit to change the cpu of the HE (non-permanent) change
>- try to power on the modified HE
>
>If it powers on , you can try to migrate it and if it succeeds - then you 
>should make it permanent.
>
>
>
>
>
>Best Regards,
>Strahil Nikolov
>
>В петък, 18 септември 2020 г., 04:40:39 Гринуич+3, ddqlo  
>написа: 
>
>
>
>
>
>HE:
>
>
>  HostedEngine
>  b4e805ff-556d-42bd-a6df-02f5902fd01c
>  http://ovirt.org/vm/tune/1.0"; 
>xmlns:ovirt-vm="http://ovirt.org/vm/1.0";>
>    
>    http://ovirt.org/vm/1.0";>
>    4.3
>    False
>    false
>    1024
>    type="int">1024
>    auto_resume
>    1600307555.19
>    
>        external
>        
>            4
>        
>    
>    
>        ovirtmgmt
>        
>            4
>        
>    
>    
>        
>c17c1934-332f-464c-8f89-ad72463c00b3
>        /dev/vda2
>        
>8eca143a-4535-4421-bd35-9f5764d67d70
>        ----
>        exclusive
>        
>ae961104-c3b3-4a43-9f46-7fa6bdc2ac33
>        
>            1
>        
>        
>            
>                
>c17c1934-332f-464c-8f89-ad72463c00b3
>                
>8eca143a-4535-4421-bd35-9f5764d67d70
>                type="int">108003328
>                
>/dev/c17c1934-332f-464c-8f89-ad72463c00b3/leases
>                
>/rhev/data-center/mnt/blockSD/c17c1934-332f-464c-8f89-ad72463c00b3/images/8eca143a-4535-4421-bd35-9f5764d67d70/ae961104-c3b3-4a43-9f46-7fa6bdc2ac33
>                
>ae961104-c3b3-4a43-9f46-7fa6bdc2ac33
>            
>        
>    
>    
>
>  
>  67108864
>  16777216
>  16777216
>  64
>  1
>  
>    /machine
>  
>  
>    
>      oVirt
>      oVirt Node
>      7-5.1804.el7.centos
>      ----0CC47A6B3160
>      b4e805ff-556d-42bd-a6df-02f5902fd01c
>    
>  
>  
>    hvm
>    
>    
>    
>  
>  
>    
>  
>  
>    Haswell-noTSX
>    
>    
>    
>    
>    
>    
>    
>    
>    
>      
>    
>  
>  
>    
>    
>    
>  
>  destroy
>  destroy
>  destroy
>  
>    
>    
>  
>  
>    /usr/libexec/qemu-kvm
>    
>      
>      
>      
>      
>      
>      
>    
>    
>      io='native' iothread='1'/>
>      dev='/var/run/vdsm/storage/c17c1934-332f-464c-8f89-ad72463c00b3/8eca143a-4535-4421-bd35-9f5764d67d70/ae961104-c3b3-4a43-9f46-7fa6bdc2ac33'>
>        
>      
>      
>      
>      8eca143a-4535-4421-bd35-9f5764d67d70
>      
>      function='0x0'/>
>    
>    
>      
>      
>      function='0x0'/>
>    
>    
>      
>      function='0x1'/>
>    
>    
>      
>      function='0x0'/>
>    
>    
>      
>      function='0x2'/>
>    
>    
>      
>    
>    
>      c17c1934-332f-464c-8f89-ad72463c00b3
>      ae961104-c3b3-4a43-9f46-7fa6bdc2ac33
>      offset='108003328'/>
>    
>    
>      
>      
>      
>      
>      
>      
>      
>      
>      
>      function='0x0'/>
>    
>    
>      
>      
>      
>      
>      
>      
>      
>      
>      
>      function='0x0'/>
>    
>    
>      path='/var/run/ovirt-vmconsole-console/b4e805ff-556d-42bd-a6df-02f5902fd01c.sock'/>
>      
>        
>      
>      
>    
>    
>      path='/var/run/ovirt-vmconsole-console/b4e805ff-556d-42bd-a6df-02f5902fd01c.sock'/>
>      
>      
>    
>    
>      path='/var/lib/libvirt/qemu/channels/b4e805ff-556d-42bd-a6df-02f5902fd01c.ovirt-guest-agent.0'/>
>      
>      
>      
>    
>    
>      path='/var/lib/libvirt/qemu/channels/b4e805ff-556d-42bd-a6df-02f5902fd01c.org.qemu.guest_agent.0'/>
>      
>      
>      
>    
>    
>      
>      
>      
>    
>    
>      path='/var/lib/libvirt/qemu/channels/b4e805ff-556d-42bd-a6df-02f5902fd01c.org.ovirt.hosted-engine-setup.0'/>
>      state='disconnected'/>
>      
>      
>    
>    
>      
>      
>    
>    
>      
>    
>    
>      
>    
>    keymap='en-us' passwdValidTo='1970-01-01T00:00:01'>
>      
>    
>    listen='192.168.1.22' passwdValidTo='1970-01-01T00:00:01'>
>      
>      
>      
>      

[ovirt-users] Re: hosted engine migration

2020-09-21 Thread Strahil Nikolov via Users
That's quite strange.
Any errors/clues in the Engine's logs?
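For example, filtering the engine log around a migration attempt usually shows 
why the destination host was rejected (default log path; the pattern is just a 
starting point):

grep -iE 'HostedEngine|migrat|filtered out' /var/log/ovirt-engine/engine.log | tail -n 100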

Best Regards,
Strahil Nikolov






В понеделник, 21 септември 2020 г., 05:58:35 Гринуич+3, ddqlo  
написа: 





So strange! After I set global maintenance, powered off and started the HE, the 
CPU of the HE became 'Westmere' (I did not change anything). But the HE still 
could not be migrated.

HE xml:
  
    Westmere
    
    
    
    
    
    
    
      
    
  

host capabilities: 
Westmere

cluster cpu type (UI): 


host cpu type (UI):


HE cpu type (UI):







在 2020-09-19 13:27:35,"Strahil Nikolov"  写道:
>Hm... interesting.
>
>The VM is using 'Haswell-noTSX'  while the host is 'Westmere'.
>
>In my case I got no difference:
>
>[root@ovirt1 ~]# virsh  dumpxml HostedEngine | grep Opteron
>   Opteron_G5
>[root@ovirt1 ~]# virsh capabilities | grep Opteron
> Opteron_G5
>
>Did you update the cluster holding the Hosted Engine ?
>
>
>I guess you can try to:
>
>- Set global maintenance
>- Power off the HostedEngine VM
>- virsh dumpxml HostedEngine > /root/HE.xml
>- use virsh edit to change the cpu of the HE (non-permanent) change
>- try to power on the modified HE
>
>If it powers on , you can try to migrate it and if it succeeds - then you 
>should make it permanent.
>
>
>
>
>
>Best Regards,
>Strahil Nikolov
>
>В петък, 18 септември 2020 г., 04:40:39 Гринуич+3, ddqlo  
>написа: 
>
>
>
>
>
>HE:
>
>
>  HostedEngine
>  b4e805ff-556d-42bd-a6df-02f5902fd01c
>  http://ovirt.org/vm/tune/1.0"; 
>xmlns:ovirt-vm="http://ovirt.org/vm/1.0";>
>    
>    http://ovirt.org/vm/1.0";>
>    4.3
>    False
>    false
>    1024
>    type="int">1024
>    auto_resume
>    1600307555.19
>    
>        external
>        
>            4
>        
>    
>    
>        ovirtmgmt
>        
>            4
>        
>    
>    
>        
>c17c1934-332f-464c-8f89-ad72463c00b3
>        /dev/vda2
>        
>8eca143a-4535-4421-bd35-9f5764d67d70
>        ----
>        exclusive
>        
>ae961104-c3b3-4a43-9f46-7fa6bdc2ac33
>        
>            1
>        
>        
>            
>                
>c17c1934-332f-464c-8f89-ad72463c00b3
>                
>8eca143a-4535-4421-bd35-9f5764d67d70
>                type="int">108003328
>                
>/dev/c17c1934-332f-464c-8f89-ad72463c00b3/leases
>                
>/rhev/data-center/mnt/blockSD/c17c1934-332f-464c-8f89-ad72463c00b3/images/8eca143a-4535-4421-bd35-9f5764d67d70/ae961104-c3b3-4a43-9f46-7fa6bdc2ac33
>                
>ae961104-c3b3-4a43-9f46-7fa6bdc2ac33
>            
>        
>    
>    
>
>  
>  67108864
>  16777216
>  16777216
>  64
>  1
>  
>    /machine
>  
>  
>    
>      oVirt
>      oVirt Node
>      7-5.1804.el7.centos
>      ----0CC47A6B3160
>      b4e805ff-556d-42bd-a6df-02f5902fd01c
>    
>  
>  
>    hvm
>    
>    
>    
>  
>  
>    
>  
>  
>    Haswell-noTSX
>    
>    
>    
>    
>    
>    
>    
>    
>    
>      
>    
>  
>  
>    
>    
>    
>  
>  destroy
>  destroy
>  destroy
>  
>    
>    
>  
>  
>    /usr/libexec/qemu-kvm
>    
>      
>      
>      
>      
>      
>      
>    
>    
>      io='native' iothread='1'/>
>      dev='/var/run/vdsm/storage/c17c1934-332f-464c-8f89-ad72463c00b3/8eca143a-4535-4421-bd35-9f5764d67d70/ae961104-c3b3-4a43-9f46-7fa6bdc2ac33'>
>        
>      
>      
>      
>      8eca143a-4535-4421-bd35-9f5764d67d70
>      
>      function='0x0'/>
>    
>    
>      
>      
>      function='0x0'/>
>    
>    
>      
>      function='0x1'/>
>    
>    
>      
>      function='0x0'/>
>    
>    
>      
>      function='0x2'/>
>    
>    
>      
>    
>    
>      c17c1934-332f-464c-8f89-ad72463c00b3
>      ae961104-c3b3-4a43-9f46-7fa6bdc2ac33
>      offset='108003328'/>
>    
>    
>      
>      
>      
>      
>      
>      
>      
>      
>      
>      function='0x0'/>
>    
>    
>      
>      
>      
>      
>      
>      
>      
>      
>      
>      function='0x0'/>
>    
>    
>      path='/var/run/ovirt-vmconsole-console/b4e805ff-556d-42bd-a6df-02f5902fd01c.sock'/>
>      
>        
>      
>      
>    
>    
>      path='/var/run/ovirt-vmconsole-console/b4e805ff-556d-42bd-a6df-02f5902fd01c.sock'/>
>      
>      
>    
>    
>      path='/var/lib/libvirt/qemu/channels/b4e805ff-556d-42bd-a6df-02f5902fd01c.ovirt-guest-agent.0'/>
>      
>      
>      
>    
>    
>      path='/var/lib/libvirt/qemu/channels/b4e805ff-556d-42bd-a6df-02f5902fd01c.org.qemu.guest_agent.0'/>
>      
>      
>      
>    
>    
>      
>      
>      
>    
>    
>      path='/var/lib/libvirt/qemu/channels/b4e805ff-556d-42bd-a6df-02f5902fd01c.org.ovirt.hosted-engine-setup.0'/>
>      state='disconnected'/>
>      
>      
>    
>    
>      
>      
>    
>    
>      
>    
>    
>      
>    
>    keymap='en-us' passwdValidTo='1970-01-01T00:00:01'>
>      
>    
>    listen='192.168.1.22' passwdValidTo='1970-01-01T00:00:01'>
>      
>      
>      
>      
>      
>      
>      
>      
>      
>    
>    
>      
>      

[ovirt-users] Re: AAA Extension mapping could not be found

2020-09-21 Thread Dominique D
I was using the old config file.

Here's the new one explained here:
https://www.ovirt.org/documentation/administration_guide/

Example 21. Example authentication mapping configuration file
ovirt.engine.extension.name = example-http-mapping
ovirt.engine.extension.bindings.method = jbossmodule
ovirt.engine.extension.binding.jbossmodule.module = 
org.ovirt.engine.extension.aaa.misc
ovirt.engine.extension.binding.jbossmodule.class = 
org.ovirt.engine.extension.aaa.misc.mapping.MappingExtension
ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Mapping
config.mapAuthRecord.type = regex
config.mapAuthRecord.regex.mustMatch = true
config.mapAuthRecord.regex.pattern = 
^(?.*?)(((?@)(?.*?)@.*)|(?@.*))$
config.mapAuthRecord.regex.replacement = ${user}${at}${suffix}
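(For reference, the extension wiring can be sanity-checked from the command line 
with the extensions tool - the profile and user names below are placeholders:)

# run on the engine host; loads all extensions from /etc/ovirt-engine/extensions.d
ovirt-engine-extensions-tool info list-extensions
ovirt-engine-extensions-tool aaa login-user --profile=example-profile --user-name=someuser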
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GRABR44DIQZUGHDKACCREH6GXR4TRXGE/


[ovirt-users] AAA Extension mapping could not be found

2020-09-21 Thread dominique . deschenes
I tried to use AAA mapping, but I get this message:

Sep 21, 2020 2:12:05 PM org.ovirt.engine.exttool.aaa.AAAServiceImpl run
INFO: Iteration: 0
Sep 21, 2020 2:12:05 PM org.ovirt.engine.exttool.core.ExtensionsToolExecutor 
main
SEVERE: Extension mapping could not be found
Sep 21, 2020 2:12:05 PM org.ovirt.engine.exttool.core.ExtensionsToolExecutor 
main
FINE: Exception:
org.ovirt.engine.core.extensions.mgr.ConfigurationException: Extension mapping 
could not be found
at 
org.ovirt.engine.core.extensions-manager//org.ovirt.engine.core.extensions.mgr.ExtensionsManager.getExtensionByName(ExtensionsManager.java:286)
at 
org.ovirt.engine.core.extensions-tool//org.ovirt.engine.exttool.aaa.AAAServiceImpl$AAAProfile.(AAAServiceImpl.java:846)
at 
org.ovirt.engine.core.extensions-tool//org.ovirt.engine.exttool.aaa.AAAServiceImpl$Action.lambda$static$3(AAAServiceImpl.java:154)
at 
org.ovirt.engine.core.extensions-tool//org.ovirt.engine.exttool.aaa.AAAServiceImpl$Action.execute(AAAServiceImpl.java:417)
at 
org.ovirt.engine.core.extensions-tool//org.ovirt.engine.exttool.aaa.AAAServiceImpl.run(AAAServiceImpl.java:686)
at 
org.ovirt.engine.core.extensions-tool//org.ovirt.engine.exttool.core.ExtensionsToolExecutor.main(ExtensionsToolExecutor.java:121)
at org.jboss.modules.Module.run(Module.java:352)
at org.jboss.modules.Module.run(Module.java:320)
at org.jboss.modules.Main.main(Main.java:593)


It's like it can't find my mapping.properties file.

I followed the Howto on this page.
https://www.ovirt.org/develop/release-management/features/infra/aaa_faq.html

Here my config : 

tail /etc/ovirt-engine/extensions.d/local.lan-authn.properties

ovirt.engine.aaa.authn.authz.plugin ovirt.engine.extension.name = 
local.lan-authn
ovirt.engine.extension.bindings.method = jbossmodule
ovirt.engine.extension.binding.jbossmodule.module = 
org.ovirt.engine.extension.aaa.ldap
ovirt.engine.extension.binding.jbossmodule.class = 
org.ovirt.engine.extension.aaa.ldap.AuthnExtension
ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Authn
ovirt.engine.aaa.authn.profile.name = local.lan
ovirt.engine.aaa.authn.authz.plugin = local.lan
ovirt.engine.aaa.authn.mapping.plugin = mapping
config.profile.file.1 = ../aaa/local.lan.properties

tail /etc/ovirt-engine/extensions.d/mapping.properties
ovirt.engine.extension.name = mapping
ovirt.engine.extension.bindings.method = jbossmodule
ovirt.engine.extension.binding.jbossmodule.module = 
org.ovirt.engine-extensions.aaa.misc
ovirt.engine.extension.binding.jbossmodule.class = 
org.ovirt.engineextensions.aaa.misc.mapping.MappingExtension
ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Mapping
config.mapUser.type = regex
config.mapUser.regex.pattern = ^(?<user>[^@]*)$
config.mapUser.regex.replacement = ${user}@domain.com
config.mapUser.regex.mustMatch = false
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5E36YKSOMGYDQFFIU6I2BEYTCPQK7S4Q/


[ovirt-users] Re: oVirt - KVM QCow2 Import

2020-09-21 Thread Nir Soffer
On Mon, Sep 21, 2020 at 8:37 PM penguin pages  wrote:
>
>
> I pasted old / file path not right example above.. But here is a cleaner 
> version with error i am trying to root cause
>
> [root@odin vmstore]# python3 
> /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py --engine-url 
> https://ovirte01.penguinpages.local/ --username admin@internal 
> --password-file /gluster_bricks/vmstore/vmstore/.ovirt.password --cafile 
> /gluster_bricks/vmstore/vmstore/.ovirte01_pki-resource.cer --sd-name vmstore 
> --disk-sparse /gluster_bricks/vmstore/vmstore/ns01.qcow2
> Checking image...
> Image format: qcow2
> Disk format: cow
> Disk content type: data
> Disk provisioned size: 21474836480
> Disk initial size: 431751168
> Disk name: ns01.qcow2
> Disk backup: False
> Connecting...
> Creating disk...
> Traceback (most recent call last):
>   File "/usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py", 
> line 262, in 
> name=args.sd_name
>   File "/usr/lib64/python3.6/site-packages/ovirtsdk4/services.py", line 7697, 
> in add
> return self._internal_add(disk, headers, query, wait)
>   File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 232, 
> in _internal_add
> return future.wait() if wait else future
>   File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 55, in 
> wait
> return self._code(response)
>   File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 229, 
> in callback
> self._check_fault(response)
>   File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 132, 
> in _check_fault
> self._raise_error(response, body)
>   File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 118, 
> in _raise_error
> raise error
> ovirtsdk4.NotFoundError: Fault reason is "Operation Failed". Fault detail is 
> "Entity not found: vmstore". HTTP response code is 404.

You used:

--sd-name vmstore

But there is no such storage domain in this setup.

Check the storage domains on this setup. One (ugly) way is:

$ curl -s -k --user admin@internal:password \
    https://ovirte01.penguinpages.local/ovirt-engine/api/storagedomains/ \
    | grep '<name>'
    <name>export1</name>
    <name>iscsi1</name>
    <name>iscsi2</name>
    <name>nfs1</name>
    <name>nfs2</name>
    <name>ovirt-image-repository</name>

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7K2LBSN5POKIKYQ3CXJZEJQCGNG26VFV/


[ovirt-users] Re: oVirt - Gluster Node Offline but Bricks Active

2020-09-21 Thread Jayme
You could try setting host to maintenance and check stop gluster option,
then re-activate host or try restarting glusterd service on the host
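For example, on the affected node (a minimal sketch):

systemctl restart glusterd
gluster peer status
gluster volume status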

On Mon, Sep 21, 2020 at 2:52 PM Jeremey Wise  wrote:

>
> oVirt engine shows  one of the gluster servers having an issue.  I did a
> graceful shutdown of all three nodes over weekend as I have to move around
> some power connections in prep for UPS.
>
> Came back up.. but
>
> [image: image.png]
>
> And this is reflected in 2 bricks online (should be three for each volume)
> [image: image.png]
>
> Command line shows gluster should be happy.
>
> [root@thor engine]# gluster peer status
> Number of Peers: 2
>
> Hostname: odinst.penguinpages.local
> Uuid: 83c772aa-33cd-430f-9614-30a99534d10e
> State: Peer in Cluster (Connected)
>
> Hostname: medusast.penguinpages.local
> Uuid: 977b2c1d-36a8-4852-b953-f75850ac5031
> State: Peer in Cluster (Connected)
> [root@thor engine]#
>
> # All bricks showing online
> [root@thor engine]# gluster volume status
> Status of volume: data
> Gluster process TCP Port  RDMA Port  Online
>  Pid
>
> --
> Brick thorst.penguinpages.local:/gluster_br
> icks/data/data  49152 0  Y
> 11001
> Brick odinst.penguinpages.local:/gluster_br
> icks/data/data  49152 0  Y
> 2970
> Brick medusast.penguinpages.local:/gluster_
> bricks/data/data49152 0  Y
> 2646
> Self-heal Daemon on localhost   N/A   N/AY
> 50560
> Self-heal Daemon on odinst.penguinpages.loc
> al  N/A   N/AY
> 3004
> Self-heal Daemon on medusast.penguinpages.l
> ocalN/A   N/AY
> 2475
>
> Task Status of Volume data
>
> --
> There are no active volume tasks
>
> Status of volume: engine
> Gluster process TCP Port  RDMA Port  Online
>  Pid
>
> --
> Brick thorst.penguinpages.local:/gluster_br
> icks/engine/engine  49153 0  Y
> 11012
> Brick odinst.penguinpages.local:/gluster_br
> icks/engine/engine  49153 0  Y
> 2982
> Brick medusast.penguinpages.local:/gluster_
> bricks/engine/engine49153 0  Y
> 2657
> Self-heal Daemon on localhost   N/A   N/AY
> 50560
> Self-heal Daemon on odinst.penguinpages.loc
> al  N/A   N/AY
> 3004
> Self-heal Daemon on medusast.penguinpages.l
> ocalN/A   N/AY
> 2475
>
> Task Status of Volume engine
>
> --
> There are no active volume tasks
>
> Status of volume: iso
> Gluster process TCP Port  RDMA Port  Online
>  Pid
>
> --
> Brick thorst.penguinpages.local:/gluster_br
> icks/iso/iso49156 49157  Y
> 151426
> Brick odinst.penguinpages.local:/gluster_br
> icks/iso/iso49156 49157  Y
> 69225
> Brick medusast.penguinpages.local:/gluster_
> bricks/iso/iso  49156 49157  Y
> 45018
> Self-heal Daemon on localhost   N/A   N/AY
> 50560
> Self-heal Daemon on odinst.penguinpages.loc
> al  N/A   N/AY
> 3004
> Self-heal Daemon on medusast.penguinpages.l
> ocalN/A   N/AY
> 2475
>
> Task Status of Volume iso
>
> --
> There are no active volume tasks
>
> Status of volume: vmstore
> Gluster process TCP Port  RDMA Port  Online
>  Pid
>
> --
> Brick thorst.penguinpages.local:/gluster_br
> icks/vmstore/vmstore49154 0  Y
> 11023
> Brick odinst.penguinpages.local:/gluster_br
> icks/vmstore/vmstore49154 0  Y
> 2993
> Brick medusast.penguinpages.local:/gluster_
> bricks/vmstore/vmstore  49154 0  Y
> 2668
> Self-heal Daemon on localhost   N/A   N/AY
> 50560
> Self-heal Daemon on medusast.penguinpages.l
> ocalN/A   N/AY
> 2475
> Self-heal Daemon on odinst.penguinpages.loc
> al  N/A   N/AY
> 3004
>
> Task Status of Volume vmst

[ovirt-users] oVirt - Gluster Node Offline but Bricks Active

2020-09-21 Thread Jeremey Wise
oVirt engine shows one of the gluster servers having an issue. I did a graceful
shutdown of all three nodes over the weekend, as I have to move around some power
connections in prep for a UPS.

They came back up... but

[image: image.png]

And this is reflected in 2 bricks online (should be three for each volume)
[image: image.png]

Command line shows gluster should be happy.

[root@thor engine]# gluster peer status
Number of Peers: 2

Hostname: odinst.penguinpages.local
Uuid: 83c772aa-33cd-430f-9614-30a99534d10e
State: Peer in Cluster (Connected)

Hostname: medusast.penguinpages.local
Uuid: 977b2c1d-36a8-4852-b953-f75850ac5031
State: Peer in Cluster (Connected)
[root@thor engine]#

# All bricks showing online
[root@thor engine]# gluster volume status
Status of volume: data
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick thorst.penguinpages.local:/gluster_br
icks/data/data  49152 0  Y
11001
Brick odinst.penguinpages.local:/gluster_br
icks/data/data  49152 0  Y
2970
Brick medusast.penguinpages.local:/gluster_
bricks/data/data49152 0  Y
2646
Self-heal Daemon on localhost   N/A   N/AY
50560
Self-heal Daemon on odinst.penguinpages.loc
al  N/A   N/AY
3004
Self-heal Daemon on medusast.penguinpages.l
ocalN/A   N/AY
2475

Task Status of Volume data
--
There are no active volume tasks

Status of volume: engine
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick thorst.penguinpages.local:/gluster_br
icks/engine/engine  49153 0  Y
11012
Brick odinst.penguinpages.local:/gluster_br
icks/engine/engine  49153 0  Y
2982
Brick medusast.penguinpages.local:/gluster_
bricks/engine/engine49153 0  Y
2657
Self-heal Daemon on localhost   N/A   N/AY
50560
Self-heal Daemon on odinst.penguinpages.loc
al  N/A   N/AY
3004
Self-heal Daemon on medusast.penguinpages.l
ocalN/A   N/AY
2475

Task Status of Volume engine
--
There are no active volume tasks

Status of volume: iso
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick thorst.penguinpages.local:/gluster_br
icks/iso/iso49156 49157  Y
151426
Brick odinst.penguinpages.local:/gluster_br
icks/iso/iso49156 49157  Y
69225
Brick medusast.penguinpages.local:/gluster_
bricks/iso/iso  49156 49157  Y
45018
Self-heal Daemon on localhost   N/A   N/AY
50560
Self-heal Daemon on odinst.penguinpages.loc
al  N/A   N/AY
3004
Self-heal Daemon on medusast.penguinpages.l
ocalN/A   N/AY
2475

Task Status of Volume iso
--
There are no active volume tasks

Status of volume: vmstore
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick thorst.penguinpages.local:/gluster_br
icks/vmstore/vmstore49154 0  Y
11023
Brick odinst.penguinpages.local:/gluster_br
icks/vmstore/vmstore49154 0  Y
2993
Brick medusast.penguinpages.local:/gluster_
bricks/vmstore/vmstore  49154 0  Y
2668
Self-heal Daemon on localhost   N/A   N/AY
50560
Self-heal Daemon on medusast.penguinpages.l
ocalN/A   N/AY
2475
Self-heal Daemon on odinst.penguinpages.loc
al  N/A   N/AY
3004

Task Status of Volume vmstore
--
There are no active volume tasks

[root@thor engine]# gluster volume heal
data engine   iso  vmstore
[root@thor engine]# gluster volume heal data info
Brick thorst.penguinpages.local:/gluster_bricks/data/data
Status: Connected
Number of entries: 0

Brick odinst.penguinpages.local:/gluster_bricks/data/data
Status: Connected
Number of entries: 0

Brick medusast.p

[ovirt-users] Re: oVirt - KVM QCow2 Import

2020-09-21 Thread penguin pages

I pasted an old / incorrect file path in the example above. But here is a cleaner 
version with the error I am trying to root cause:

[root@odin vmstore]# python3 
/usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py --engine-url 
https://ovirte01.penguinpages.local/ --username admin@internal --password-file 
/gluster_bricks/vmstore/vmstore/.ovirt.password --cafile 
/gluster_bricks/vmstore/vmstore/.ovirte01_pki-resource.cer --sd-name vmstore 
--disk-sparse /gluster_bricks/vmstore/vmstore/ns01.qcow2
Checking image...
Image format: qcow2
Disk format: cow
Disk content type: data
Disk provisioned size: 21474836480
Disk initial size: 431751168
Disk name: ns01.qcow2
Disk backup: False
Connecting...
Creating disk...
Traceback (most recent call last):
  File "/usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py", line 
262, in 
name=args.sd_name
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/services.py", line 7697, 
in add
return self._internal_add(disk, headers, query, wait)
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 232, in 
_internal_add
return future.wait() if wait else future
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 55, in 
wait
return self._code(response)
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 229, in 
callback
self._check_fault(response)
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 132, in 
_check_fault
self._raise_error(response, body)
  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 118, in 
_raise_error
raise error
ovirtsdk4.NotFoundError: Fault reason is "Operation Failed". Fault detail is 
"Entity not found: vmstore". HTTP response code is 404.
[root@odin vmstore]#
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EMLWBA4FSQUMPH4CTAXSADIKD46PDQQZ/


[ovirt-users] Re: oVirt - KVM QCow2 Import

2020-09-21 Thread penguin pages
Thanks for the reply. I read this late at night and assumed the "engine url" 
meant the old KVM system, but it refers to the oVirt engine. I then translated 
your helpful notes... but I am likely still missing some parameter.

#
# Install import client
dnf install ovirt-imageio-client python3-ovirt-engine-sdk4

# save the oVirt engine CA cert on the gluster share (used the GUI for now, as I
# could not figure out how to fetch it with wget)
https://ovirte01.penguinpages.local/ovirt-engine/
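(For reference, the CA certificate can also be fetched non-interactively from the
engine's PKI resource service, e.g.:)

curl -k -o /gluster_bricks/engine/engine/ovirte01_pki-resource.cer \
  'https://ovirte01.penguinpages.local/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'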



mv /gluster_bricks/engine/engine/ovirte01_pki-resource.cer 
/gluster_bricks/engine/engine/.ovirte01_pki-resource.cer
chmod 440 /gluster_bricks/engine/engine/.ovirte01_pki-resource.cer
chown root:kvm /gluster_bricks/engine/engine/.ovirte01_pki-resource.cer
# Put oVirt Password in a file for use
echo "blahblahblah" > /gluster_bricks/engine/engine/.ovirt.password
chmod 440 /gluster_bricks/engine/engine/.ovirt.password
chown root:kvm /gluster_bricks/engine/engine/.ovirt.password
# upload the qcow2 images to oVirt
[root@odin vmstore]# pwd
/gluster_bricks/vmstore/vmstore
[root@odin vmstore]# ls -alh
total 385M
drwxr-xr-x.   7 vdsm kvm  8.0K Sep 21 13:20 .
drwxr-xr-x.   3 root root   21 Sep 16 23:42 ..
-rw-r--r--.   1 root root0 Sep 21 13:20 example.log
drwxr-xr-x.   6 vdsm kvm64 Sep 17 21:28 f118dcae-6162-4e9a-89e4-f30ffcfb9ccf
drw---. 262 root root 8.0K Sep 17 01:29 .glusterfs
drwxr-xr-x.   2 root root   45 Sep 17 08:15 isos
-rwxr-xr-x.   2 root root  64M Sep 17 00:08 ns01_20200910.tgz
-rw-rw.   2 qemu qemu  64M Sep 17 11:20 ns01.qcow2
-rw-rw.   2 qemu qemu  64M Sep 17 13:34 ns01_var.qcow2
-rwxr-xr-x.   2 root root  64M Sep 17 00:09 ns02_20200910.tgz
-rw-rw.   2 qemu qemu  64M Sep 17 11:21 ns02.qcow2
-rw-rw.   2 qemu qemu  64M Sep 17 13:34 ns02_var.qcow2
drwxr-xr-x.   2 root root   38 Sep 17 10:19 qemu
drwxr-xr-x.   3 root root 280K Sep 21 08:21 .shard
[root@odin vmstore]# python3 
/usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py \
> --engine-url https://ovirte01.penguinpages.local/ \
> --username admin@internal \
> --password-file /gluster_bricks/engine/engine/.ovirt.password \
> --cafile /gluster_bricks/engine/engine/.ovirte01_pki-resource.cer \
> --sd-name vmstore \
> --disk-sparse \
> /gluster_bricks/vmstore/vmstore.qcow2
Checking image...
qemu-img: Could not open '/gluster_bricks/vmstore/vmstore.qcow2': Could not 
open '/gluster_bricks/vmstore/vmstore.qcow2': No such file or directory
Traceback (most recent call last):
  File "/usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py", line 
210, in 
image_info = get_image_info(args.filename)
  File "/usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py", line 
133, in get_image_info
["qemu-img", "info", "--output", "json", filename])
  File "/usr/lib64/python3.6/subprocess.py", line 356, in check_output
**kwargs).stdout
  File "/usr/lib64/python3.6/subprocess.py", line 438, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['qemu-img', 'info', '--output', 
'json', '/gluster_bricks/vmstore/vmstore.qcow2']' returned non-zero exit status 
1.
[root@odin vmstore]#
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LA4CAJLVHZKU4ZNXGMWUBVRP37PS5OBL/


[ovirt-users] Fail install SHE ovirt-engine from backupfile (4.3 -> 4.4)

2020-09-21 Thread francesco--- via Users
Hi Everyone,

In a test environment I'm trying to deploy a single-node self-hosted engine 4.4 
on CentOS 8 from a 4.3 backup. The actual setup is:
- node1 with CentOS 7, oVirt 4.3 with a working SH engine. The data domain is a 
local NFS;
- node2 with CentOS 8, where we are trying to deploy the engine starting from 
the node1 engine backup;
- host1, with CentOS 7.8, running a couple of VMs (4.3).

I'm following the guide: 
https://www.ovirt.org/documentation/upgrade_guide/#Upgrading_the_Manager_to_4-4_4-3_SHE
Everything seems to be working fine: the engine on node1 is in global maintenance 
mode and the ovirt-engine service is stopped. The deploy on node2 gets stuck on 
the following error:

TASK [ovirt.hosted_engine_setup : Wait for OVF_STORE disk content]

[ ERROR ] {'msg': 'non-zero return code', 'cmd': "vdsm-client Image prepare 
storagepoolID=06c58622-f99b-11ea-9122-00163e1bbc93 
storagedomainID=2a4a3cce-f2f6-4ddd-b337-df5ef562f520 
imageID=e48a66dd-74c9-43eb-890e-778e9c4ee8db 
volumeID=06bb5f34-112d-4214-91d2-53d0bdb84321 | grep path | awk '{ print $2 }' 
| xargs -I{} sudo -u vdsm dd if={} | tar -tvf - 
6023764f-5547-4b23-92ca-422eafdf3f87.ovf", 'stdout': '', 'stderr': 
"vdsm-client: Command Image.prepare with args {'storagepoolID': 
'06c58622-f99b-11ea-9122-00163e1bbc93', 'storagedomainID': 
'2a4a3cce-f2f6-4ddd-b337-df5ef562f520', 'imageID': 
'e48a66dd-74c9-43eb-890e-778e9c4ee8db',
'volumeID': '06bb5f34-112d-4214-91d2-53d0bdb84321'} failed:\n(code=309, 
message=Unknown pool id, pool not connected: 
('06c58622-f99b-11ea-9122-00163e1bbc93',))\ntar: This does not look like a tar 
archive\ntar: 6023764f-5547-4b23-92ca-422eafdf3f87.ovf: Not found in 
archive\ntar: Exiting with failure status due to previous errors", 'rc': 2, 
'start': '2020-09-21 17:14:17.293090', 'end': '2020-09-21 17:14:17.644253', 
'delta': '0:00:00.351163', 'changed': True, 'failed': True, 'invocation': 
{'module_args': {'warn': False, '_raw_params': "vdsm-client Image prepare 
storagepoolID=06c58622-f99b-11ea-9122-00163e1bbc93 
storagedomainID=2a4a3cce-f2f6-4ddd-b337-df5ef562f520 
imageID=e48a66dd-74c9-43eb-890e-778e9c4ee8db 
volumeID=06bb5f34-112d-4214-91d2-53d0bdb84321 | grep path | awk '{ print $2 }' 
| xargs -I{} sudo -u vdsm dd if={} | tar -tvf - 
6023764f-5547-4b23-92ca-422eafdf3f87.ovf", '_uses_shell': True, 
'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': 
None, 'executable
 ': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': 
[], 'stderr_lines': ["vdsm-client: Command Image.prepare with args 
{'storagepoolID': '06c58622-f99b-11ea-9122-00163e1bbc93', 'storagedomainID': 
'2a4a3cce-f2f6-4ddd-b337-df5ef562f520', 'imageID': 
'e48a66dd-74c9-43eb-890e-778e9c4ee8db', 'volumeID': 
'06bb5f34-112d-4214-91d2-53d0bdb84321'} failed:", "(code=309, message=Unknown 
pool id, pool not connected: ('06c58622-f99b-11ea-9122-00163e1bbc93',))", 'tar: 
This does not look like a tar archive', 'tar: 
6023764f-5547-4b23-92ca-422eafdf3f87.ovf: Not found in archive', 'tar: Exiting 
with failure status due to previous errors'], '_ansible_no_log': False, 
'attempts':
12, 'item': {'name': 'OVF_STORE', 'image_id': 
'06bb5f34-112d-4214-91d2-53d0bdb84321', 'id': 
'e48a66dd-74c9-43eb-890e-778e9c4ee8db'}, 'ansible_loop_var': 'item', 
'_ansible_item_label': {'name': 'OVF_STORE', 'image_id': 
'06bb5f34-112d-4214-91d2-53d0bdb84321', 'id': 
'e48a66dd-74c9-43eb-890e-778e9c4ee8db'}}
[ ERROR ] {'msg': 'non-zero return code', 'cmd': "vdsm-client Image prepare 
storagepoolID=06c58622-f99b-11ea-9122-00163e1bbc93 
storagedomainID=2a4a3cce-f2f6-4ddd-b337-df5ef562f520 
imageID=750428bd-1273-467f-9b27-7f6fe58a446c 
volumeID=1c89c678-f883-4e61-945c-5f7321add343 | grep path | awk '{ print $2 }' 
| xargs -I{} sudo -u vdsm dd if={} | tar -tvf - 
6023764f-5547-4b23-92ca-422eafdf3f87.ovf", 'stdout': '', 'stderr': 
"vdsm-client: Command Image.prepare with args {'storagepoolID': 
'06c58622-f99b-11ea-9122-00163e1bbc93', 'storagedomainID': 
'2a4a3cce-f2f6-4ddd-b337-df5ef562f520', 'imageID': 
'750428bd-1273-467f-9b27-7f6fe58a446c',
'volumeID': '1c89c678-f883-4e61-945c-5f7321add343'} failed:\n(code=309, 
message=Unknown pool id, pool not connected: 
('06c58622-f99b-11ea-9122-00163e1bbc93',))\ntar: This does not look like a tar 
archive\ntar: 6023764f-5547-4b23-92ca-422eafdf3f87.ovf: Not found in 
archive\ntar: Exiting with failure status due to previous errors", 'rc': 2, 
'start': '2020-09-21 17:16:26.030343', 'end': '2020-09-21 17:16:26.381862', 
'delta': '0:00:00.351519', 'changed': True, 'failed': True, 'invocation': 
{'module_args': {'warn': False, '_raw_params': "vdsm-client Image prepare 
storagepoolID=06c58622-f99b-11ea-9122-00163e1bbc93 
storagedomainID=2a4a3cce-f2f6-4ddd-b337-df5ef562f520 
imageID=750428bd-1273-467f-9b27-7f6fe58a446c 
volumeID=1c89c678-f883-4e61-945c-5f7321add343 | grep path | awk '{ print $2 }' 
| xargs -I{} sudo -u vdsm dd if={} | tar -tvf - 
6023764f-5547-4b23-92ca-422eafdf3f87.ovf", '_uses_shell': True, 
'stdin_ad

[ovirt-users] Re: oVirt 4.4.2 is now generally available

2020-09-21 Thread Gianluca Cecchi
On Mon, Sep 21, 2020 at 5:26 PM Sandro Bonazzola 
wrote:

>
>
> Il giorno lun 21 set 2020 alle ore 17:13 Gianluca Cecchi <
> gianluca.cec...@gmail.com> ha scritto:
>
>>
>> On Thu, Sep 17, 2020 at 4:06 PM Lev Veyde  wrote:
>>
>>> The oVirt project is excited to announce the general availability of
>>> oVirt 4.4.2 , as of September 17th, 2020.
>>>
>>>
>>>
>> [snip]
>>
>>> oVirt Node 4.4 based on CentOS Linux 8.2 (available for x86_64 only)
>>> will be released separately due to a blocker issue (Bug 1837864
>>> ).
>>>
>>>
>>> [snip]
>>
>> hi,
>> will you post an update to the list when the iso is available?
>>
>
>
> Sure, we'll issue a release announcement for the oVirt Node update as soon
> as we get the bug fixed.
>
>
> OK, thanks.

But indeed there is something strange.
Suppose I have a 4.4.0 node (installed when CentOS 8.2 was not yet released and
only the pre-release was available): can I update it to 4.4.2?

If I execute "yum update" on this system I get:

 [root@ovirt01 ~]# yum update
Extra Packages for Enterprise Linux 8 - x86_64   49
kB/s |  32 kB 00:00
CentOS-8 - Gluster 7 14
kB/s | 3.0 kB 00:00
virtio-win builds roughly matching what will be shipped in upcoming 5.6
kB/s | 3.0 kB 00:00
Copr repo for EL8_collection owned by sbonazzo  8.2
kB/s | 3.6 kB 00:00
Copr repo for gluster-ansible owned by sac  9.4
kB/s | 3.3 kB 00:00
Copr repo for ovsdbapp owned by mdbarroso   9.6
kB/s | 3.3 kB 00:00
Copr repo for nmstate-stable owned by nmstate   8.5
kB/s | 3.3 kB 00:00
Copr repo for NetworkManager-1.22 owned by networkmanager   8.3
kB/s | 3.3 kB 00:00
Advanced Virtualization packages for x86_64  36
kB/s | 3.0 kB 00:00
CentOS-8 - oVirt 4.4 19
kB/s | 3.0 kB 00:00
CentOS-8 - OpsTools - collectd   23
kB/s | 3.0 kB 00:00
Latest oVirt 4.4 Release3.9
kB/s | 3.0 kB 00:00
Dependencies resolved.

 PackageArchitecture   Version
 Repository Size

Installing:
 ovirt-node-ng-image-update noarch 4.4.1.5-1.el8
 ovirt-4.4 781 M
 replacing  ovirt-node-ng-image-update-placeholder.noarch 4.4.0-2.el8

Transaction Summary

Install  1 Package

Total download size: 781 M
Is this ok [y/N]:

And another guy I'm in contact with, who already has 4.4.1 installed, gets no
updates available.
Does this mean that even though 4.4.2 has been released (at which level, at this
point?), an oVirt Node cannot be upgraded?
And could a plain CentOS host be updated instead?

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YQGS3KSEQCF6CIQUEDF2UQ5SEHMDQ63J/


[ovirt-users] Re: oVirt 4.4.2 is now generally available

2020-09-21 Thread Sandro Bonazzola
Il giorno lun 21 set 2020 alle ore 17:13 Gianluca Cecchi <
gianluca.cec...@gmail.com> ha scritto:

>
> On Thu, Sep 17, 2020 at 4:06 PM Lev Veyde  wrote:
>
>> The oVirt project is excited to announce the general availability of
>> oVirt 4.4.2 , as of September 17th, 2020.
>>
>>
>>
> [snip]
>
>> oVirt Node 4.4 based on CentOS Linux 8.2 (available for x86_64 only) will
>> be released separately due to a blocker issue (Bug 1837864
>> ).
>>
>>
>> [snip]
>
> hi,
> will you post an update to the list when the iso is available?
>


Sure, we'll issue a release announcement for the oVirt Node update as soon
as we get the bug fixed.


> Thanks,
> Gianluca
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TGYBEG4KGUJRZJZCDNSHPE75DPUVDRKK/
>


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DA533A4ZM2KQUPMV5UMPQFSX5A7PZUBL/


[ovirt-users] Re: oVirt 4.4.2 is now generally available

2020-09-21 Thread Gianluca Cecchi
On Mon, Sep 21, 2020 at 5:12 PM Gianluca Cecchi 
wrote:

>
> On Thu, Sep 17, 2020 at 4:06 PM Lev Veyde  wrote:
>
>> The oVirt project is excited to announce the general availability of
>> oVirt 4.4.2 , as of September 17th, 2020.
>>
>>
>>
> [snip]
>
>> oVirt Node 4.4 based on CentOS Linux 8.2 (available for x86_64 only) will
>> be released separately due to a blocker issue (Bug 1837864
>> ).
>>
>>
>> [snip]
>
> hi,
> will you post an update to the list when the iso is available?
> Thanks,
> Gianluca
>
>
BTW: the iso blocker bug is indeed related to an upgrade failure with the host
entering emergency mode...
It is not clear to me whether it is OK and safe to update from 4.4.0 (deployed as
a node-ng system) to 4.4.2 (involving the install of the new .img file layer) if
the bug is not completely solved.

Can anyone shed some light, please?

Thanks,
Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2SWESDW3GVXGXAWDUT673JJU3EK7PLHD/


[ovirt-users] Re: oVirt 4.4.2 is now generally available

2020-09-21 Thread Gianluca Cecchi
On Thu, Sep 17, 2020 at 4:06 PM Lev Veyde  wrote:

> The oVirt project is excited to announce the general availability of oVirt
> 4.4.2 , as of September 17th, 2020.
>
>
>
[snip]

> oVirt Node 4.4 based on CentOS Linux 8.2 (available for x86_64 only) will
> be released separately due to a blocker issue (Bug 1837864
> ).
>
>
> [snip]

hi,
will you post an update to the list when the iso is available?
Thanks,
Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TGYBEG4KGUJRZJZCDNSHPE75DPUVDRKK/


[ovirt-users] Re: VM stuck in "reboot in progress" ("virtual machine XXX should be running in a host but it isn't.").

2020-09-21 Thread Arik Hadas
On Sun, Sep 20, 2020 at 11:21 AM Gilboa Davara  wrote:

> On Sat, Sep 19, 2020 at 7:44 PM Arik Hadas  wrote:
> >
> >
> >
> > On Fri, Sep 18, 2020 at 8:27 AM Gilboa Davara  wrote:
> >>
> >> Hello all (and happy new year),
> >>
> >> (Note: Also reported as
> https://bugzilla.redhat.com/show_bug.cgi?id=1880251)
> >>
> >> Self hosted engine, single node, NFS.
> >> Attempted to install CentOS over an existing Fedora VM with one host
> >> device (USB printer).
> >> Reboot failed, trying to boot from a non-existent CDROM.
> >> Tried shutting the VM down, failed.
> >> Tried powering off the VM, failed.
> >> Dropped cluster to global maintenance, reboot host + engine (was
> >> planning to upgrade it anyhow...), VM still stuck.
> >>
> >> When trying to power off the VM, the following message can be found
> >> the in engine.log:
> >> 2020-09-18 07:58:51,439+03 INFO
> >> [org.ovirt.engine.core.bll.StopVmCommand]
> >> (EE-ManagedThreadFactory-engine-Thread-42)
> >> [7bc4ac71-f0b2-4af7-b081-100dc99b6123] Running command: StopVmCommand
> >> internal: false. Entities affected :  ID:
> >> b411e573-bcda-4689-b61f-1811c6f03ad5 Type: VMAction group STOP_VM with
> >> role type USER
> >> 2020-09-18 07:58:51,441+03 WARN
> >> [org.ovirt.engine.core.bll.StopVmCommand]
> >> (EE-ManagedThreadFactory-engine-Thread-42)
> >> [7bc4ac71-f0b2-4af7-b081-100dc99b6123] Strange, according to the
> >> status 'RebootInProgress' virtual machine
> >> 'b411e573-bcda-4689-b61f-1811c6f03ad5' should be running in a host but
> >> it isn't.
> >> 2020-09-18 07:58:51,594+03 ERROR
> >> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> >> (EE-ManagedThreadFactory-engine-Thread-42)
> >> [7bc4ac71-f0b2-4af7-b081-100dc99b6123] EVENT_ID:
> >> USER_FAILED_STOP_VM(56), Failed to power off VM kids-home-srv (Host:
> >> , User: gilboa@internal-authz).
> >>
> >> My question is simple: Pending a solution to the bug, can I somehow
> >> drop the state of the VM? It's currently holding a sizable disk image
> >> and a USB device I need (printer).
> >
> >
> > It would be best to modify the VM as if it should still be running on
> the host and let the system discover that it's not running there and update
> the VM accordingly.
> >
> > You can do it by changing the database with:
> > update vm_dynamic set run_on_vds='82f92946-9130-4dbd-8663-1ac0b50668a1'
> where vm_guid='b411e573-bcda-4689-b61f-1811c6f03ad5';
> >
> >
> >>
> >>
> >> As it's my private VM cluster, I have no problem dropping the site
> >> completely for maintenance.
> >>
> >> Thanks,
> >>
> >> Gilboa
>
>
> Hello,
>
> Thanks for the prompt answer.
>
> Edward,
>
> Full reboot of both engine and host didn't help.
> Most likely there's a consistency problem in the oVirt DB.
>
> Arik,
>
> To which DB I should connect and as which user?
> E.g. psql -U user db_name
>

To the 'engine' database.
I usually connect to it by switching to the 'postgres' user as Strahil
described.
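
For reference, a minimal sketch of that session on the engine machine, using the
values from the suggestion above (this assumes psql is on the postgres user's
PATH; on some setups PostgreSQL is shipped as a software collection and needs
'scl enable' first):

# run on the engine machine; UUIDs below are the ones quoted in this thread
su - postgres
psql engine
engine=# update vm_dynamic set run_on_vds='82f92946-9130-4dbd-8663-1ac0b50668a1'
engine-#   where vm_guid='b411e573-bcda-4689-b61f-1811c6f03ad5';
engine=# \q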


>
> Thanks again,
> - Gilboa
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z3NF7BKSZFBPZ6ZIN6PICTXJVO3A4Q3A/


[ovirt-users] Re: oVirt - KVM QCow2 Import

2020-09-21 Thread Jeremey Wise
Ugh.. this is bad.

On the hypervisor where the files are located ...

My customers send me tar files with VMs all the time.   And I send them.
This will make things much more difficult if I can't import the xml / qcow2 files.



This cluster is my home cluster: three servers, which were previously CentOS 7 +
VDO + Gluster.  I used to link the qemu directory from all three onto Gluster, so
if one server died, or I messed it up and it needed repair, I could still start
up and run the VMs.

Old cluster notes:

 Optional: Redirect Default KVM VM Storage location.  Ex:  /data/gv0/vms
on thor

# < Broken with HCI.. not sure of the process here yet. Hold off until the oVirt
HCI engine issues are worked out on how it shares new VM definitions when one or
more nodes goes down  2020-09-17

#  Pool default XML configuration edited.

virsh pool-edit default



<pool type='dir'>
  <name>default</name>
  <uuid>d3ae9e9a-8bc8-4a17-8476-3fe3334204f3</uuid>
  <capacity unit='bytes'>37734498304</capacity>
  <allocation unit='bytes'>27749486592</allocation>
  <available unit='bytes'>9985011712</available>
  <source>
  </source>
  <target>
    <!-- <path>/var/lib/libvirt/images</path> -->
    <path>/data/gv0/vms</path>
    <permissions>
      <mode>0711</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>





#  For now each KVM host has the shared folder linked.  Not sure how to get peers
to see the configuration files without restarting libvirtd.  Can run an import
command, but that still needs testing.

# To enable multiple KVM nodes in a shared environment to take over the roles of
peers in the event of one failing, the XML files stored in /etc/libvirt/qemu/
need to be on a shared device.

# Ex:  Move medusa's /etc/libvirt/qemu/ onto the shared Gluster volume space
/data/gv0/vms/medusa

systemctl stop libvirtd

mkdir -p /media/vmstore/qemu

mv -f /etc/libvirt/qemu/* /media/vmstore/qemu

ln -s /media/vmstore/qemu /etc/libvirt/qemu



systemctl daemon-reload

systemctl start libvirt-guests.service

systemctl enable libvirt-guests.service

systemctl status libvirt-guests.service



As I tried to use the engine setup it became apparent that my manual libvirtd
setup was NOT going to be in any way helpful with the way oVirt uses it...
OK... I can learn new things.


I had to back up and remove all data (see my other post about the HCI wizard
failing if it detected an existing VDO volume)...  So I moved my four or so
important VMs off to an external mount.

I now need a way to bring them back.  I really can't spend weeks rebuilding those
infrastructure VMs.  And I don't have a fourth server to rebuild as a plain KVM
host just to import them there and then slurp the VMs out over the
oVirt-to-libvirt connection.
Plus.. that means any time someone sends me a tar of qcow2 and xml files, I have
to re-host it somewhere just to export it..  :P



On Mon, Sep 21, 2020 at 8:18 AM Nir Soffer  wrote:

> On Mon, Sep 21, 2020 at 9:11 AM Jeremey Wise 
> wrote:
> >
> >
> > I rebuilt my lab environment.   And their are four or five VMs that
> really would help if I did not have to rebuild.
> >
> > oVirt as I am now finding when it creates infrastructure, sets it out
> such that I cannot just use older  means of placing .qcow2 files in folders
> and .xml files in other folders and they show up on services restarting.
> >
> > How do I import VMs from files?
>
> You did not share the oVirt version, so I'm assuming 4.4.
>
> The simplest way is to upload the qcow2 images to oVirt, and create a new
> VM with the new disk.
>
> On the hypervisor where the files are located, install the required
> packages:
>
> dnf install ovirt-imageio-client python3-ovirt-engine-sdk4
>
> And upload the image:
>
> python3
> /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py \
> --engine-url https://my.engine/ \
> --username admin@internal \
> --password-file /path/to/password/file \
> --cafile /path/to/cafile \
> --sd-name my-storage-domain \
> --disk-sparse \
> /path/to/image.qcow2
>
> This will upload the file in qcow2 format to whatever type of storage you
> have. You can change the format if you like using --disk-format. See --help
> for all the options.
>
> We also support importing from libvirt, but for this you need to have the
> vm
> defined in libvirt. If you don't have this, It will probably be easier to
> upload
> the images and create a new vm in oVirt.
>
> Nir
>
> > I found this article but implies VM is running:
> https://www.ovirt.org/develop/release-management/features/virt/KvmToOvirt.html
> >
> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/administration_guide/sect-adding_external_providers#Adding_KVM_as_an_External_Provider
> >
> > I need a way to import a file.  Even if it means temporarily hosting on
> "KVM on one of the hosts to then bring in once it is up.
> >
> >
> > Thanks
> > --
> >
> > penguinpages
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/privacy-policy.html
> > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:

[ovirt-users] Gluster Domain Storage full

2020-09-21 Thread suporte
Hello, 

I'm running oVirt Version 4.3.4.3-1.el7. 
I have a small GlusterFS Domain storage brick on a dedicated filesystem serving 
only one VM. 
The VM filled all the Domain storage. 
The Linux filesystem has 4.1G available and is 100% used; the mounted brick has
0GB available and is 100% used.

I cannot do anything with this disk. For example, if I try to move it to
another Gluster Domain Storage I get the message:

Error while executing action: Cannot move Virtual Disk. Low disk space on 
Storage Domain 

Any idea? 

Thanks 

-- 

Jose Ferradeira 
http://www.logicworks.pt 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WFN2VOQZPPVCGXAIFEYVIDEVJEUCSWY7/


[ovirt-users] Re: oVirt - vdo: ERROR - Device /dev/sd excluded by a filter

2020-09-21 Thread Nir Soffer
On Mon, Sep 21, 2020 at 3:30 PM Jeremey Wise  wrote:
>
>
> Old System
> Three servers..  Centos 7 -> Lay down VDO (dedup / compression) add those VDO 
> volumes as bricks to gluster.
>
> New cluster (remove boot drives and run wipe of all data drives)
>
> Goal: use first 512GB Drives to ignite the cluster and get things on feet and 
> stage infrastructure things.  Then use one of the 1TB drives in each server 
> for my "production" volume.  And second 1TB drive in each server as staging.  
> I want to be able to "learn" and not loose days / weeks of data... so disk 
> level rather give up capacity for sake of "oh.. well .. that messed up.. 
> rebuild.
>
> After minimal install.  Setup of network..  run HCI wizard.
>
> It failed various times along build... lack SELInux permissive, .. did not 
> wipe 1TB drives with hope of importing old Gluster file system / VDO voluemes 
> to import my five or six custom and important VMs. (OCP cluster bootstrap 
> environment, Plex servers, DNS / DHCP / Proxy HA cluster nodes et)
>
> Gave up on too many HCI failures about disk.. so wiped drives (will use 
> external NAS to repopulate important VMs back (or so is plan... see other 
> posting on no import of qcow2 images / xml :P )
>
> Ran into next batch of issues about use of true device ID  ... as name too 
> long... but /dev/sd?  makes me nervious as I have seen many systems with 
> issues when they use this old and should be depricated means to address disk 
> ID:  use UUID  or raw ID... 
> "/dev/disk/by-id/ata-Samsung_SSD_850_PRO_512GB_S250NXAGA15787L
>
> Started getting errors about HCI failing with "excluded by filter" errors.

I'm not sure I follow your long story, but this error is caused by a too-strict
LVM filter in /etc/lvm/lvm.conf.

Edit this file and remove the line that looks like this:

filter = 
["a|^/dev/disk/by-id/lvm-pv-uuid-80ovnb-mZIO-J65Y-rl9n-YAY7-h0Q9-Aezk8D$|",
"r|.*|"]

Then install gluster, it will stop complaining about the filter.

At the end of the installation, you are going to add the hosts to
engine. At this point
a new lvm filter will be created, considering all the mounted logical volumes.

Maybe gluster setup should warn about lvm filter or remove it before
the installation.
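
A rough sketch of that sequence on a host (the backup path and editor here are
only an illustration; vdsm-tool ships a helper, config-lvm-filter, that can
rebuild the filter once the host is managed by engine):

cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.bak   # keep a backup first (illustrative path)
vi /etc/lvm/lvm.conf                         # delete the "filter = [...]" line shown above
# ... run the gluster / HCI deployment and add the host to engine ...
vdsm-tool config-lvm-filter -y               # recreate a filter covering the mounted logical volumes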

> wiped drives ( gdisk /dev/sd?  => x => z => y => y)
>
> filters errors I could not fiture out what they were.. .. error of "filter 
> exists"  to me meant ..  you have one.. remove it so I can remove drive.
>
> Did full dd if=/dev/zero of=dev/sd? ..  still same issue
> filtered in multipath just for grins still same issue.
>
> Posted to forums.. nobody had ideas 
> https://forums.centos.org/viewtopic.php?f=54&t=75687   Posted to slack 
> gluster channel.. they looked at it and could not figure out...
>
> Wiped systems.. started over.   This time the HCI wizard deployed.
>
> My guess... is once I polished setup to make sure wizard did not attempt 
> before SELinux set to permissive (vs disable)  drives all wiped (even though 
> they SHOULD just be ignored..  I I think VDO scanned and saw VDO definition 
> on drive so freeked some ansible wizard script out).
>
> Now cluster is up..  but then went to add "production"  gluster +VDO and 
> "staging"  gluster + vdo volumes... and having issues.
>
> Sorry for long back story but I think this will add color to issues.
>
> My Thoughts as to root issues
> 1) HCI wizard has issues just using drives told, and ignoring other data 
> drives in system ... VDO as example I saw notes about failed attempt ... but 
> it should not have touched that volume... just used one it needed and igored 
> rest.
> 2) HCI wizard bug of ignoring user set /dev/sd?  for each server again, was 
> another failure attempt where clean up may not have run. (noted this in 
> posting about manual edit .. and apply button :P to ingest)
> 3) HCI wizard bug of name I was using of device ID vs /sd?  which is IMAO ... 
> bad form.. but name too long.. again. another cleanup where things may not 
> have fully cleaned.. or I forgot to click clean ...  where system was left in 
> non-pristine state
> 2) HCI wizard does NOT clean itself up properly if it fails ... or when I ran 
> clean up, maybe it did not complete and I closed wizard which then created 
> this orphaned state.
> 3) HCI Setup and post setup needs to add filtering
>
>
>   With a perfect and pristine process  .. it ran.   But only when all other 
> learning and requirements to get it just right were setup first.  oVirt HCI 
> is S very close to being a great platform , well thought out and 
> production class.  Just needs some more nerds beating on it to find these 
> cracks, and get the GUI and setup polished.
>
> My $0.002
>
>
> On Mon, Sep 21, 2020 at 8:06 AM Nir Soffer  wrote:
>>
>> On Mon, Sep 21, 2020 at 9:02 AM Jeremey Wise  wrote:
>> >
>> >
>> >
>> >
>> >
>> >
>> > vdo: ERROR - Device /dev/sdc excluded by a filter
>> >
>> >
>> >
>> >
>> >
>> > Other server
>> >
>> > vdo: ERROR - Device 
>> > /dev/mapper/nvme.126f-4141303030

[ovirt-users] Re: oVirt - vdo: ERROR - Device /dev/sd excluded by a filter

2020-09-21 Thread Jeremey Wise
Old System
Three servers..  Centos 7 -> Lay down VDO (dedup / compression) add those
VDO volumes as bricks to gluster.

New cluster (remove boot drives and run wipe of all data drives)

Goal: use the first 512GB drives to ignite the cluster, get things on their feet
and stage infrastructure things.  Then use one of the 1TB drives in each server
for my "production" volume, and the second 1TB drive in each server as staging.
I want to be able to "learn" and not lose days / weeks of data... so at the disk
level I would rather give up capacity than end up with "oh.. well .. that messed
up.. rebuild".

After minimal install.  Setup of network..  run HCI wizard.

It failed various times along the build... SELinux not set to permissive, .. I
did not wipe the 1TB drives, hoping to import the old Gluster file system / VDO
volumes and recover my five or six custom and important VMs (OCP cluster
bootstrap environment, Plex servers, DNS / DHCP / Proxy HA cluster nodes,
etc.).

Gave up after too many HCI failures about disks.. so I wiped the drives (the plan
is to use an external NAS to repopulate the important VMs... see my other posting
about not being able to import qcow2 images / xml :P )

Ran into the next batch of issues about the use of the true device ID ... as the
name is too long... but /dev/sd?  makes me nervous, as I have seen many systems
with issues when they use this old and, in my view, deprecated way to address
disks.  Better to use the UUID or raw ID, e.g.
"/dev/disk/by-id/ata-Samsung_SSD_850_PRO_512GB_S250NXAGA15787L

Started getting errors about HCI failing with "excluded by filter" errors.

Wiped drives ( gdisk /dev/sd?  => x => z => y => y)

The filter errors I could not figure out.. the "filter exists" error to me
meant: you have one.. remove it so I can use the drive.

Did a full dd if=/dev/zero of=/dev/sd? ..  still the same issue.
Filtered it in multipath just for grins.. still the same issue.
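
For what it's worth, a lighter-weight way than a full dd to clear old on-disk
signatures (including VDO and LVM ones) is wipefs, assuming the drive really is
free to be reused:

wipefs -a /dev/sdX    # /dev/sdX is a placeholder for the drive being recycled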

Posted to the CentOS forums.. nobody had ideas:
https://forums.centos.org/viewtopic.php?f=54&t=75687   Posted to the Slack
gluster channel.. they looked at it and could not figure it out either...

Wiped the systems.. started over.   This time the HCI wizard deployed.

My guess... is that it worked once I polished the setup so the wizard did not run
before SELinux was set to permissive (vs disabled) and all drives were wiped
(even though they SHOULD just have been ignored..  I think VDO scanned the
drives, saw a VDO definition, and that freaked some ansible wizard script out).

Now the cluster is up..  but then I went to add the "production" gluster + VDO
and "staging" gluster + VDO volumes... and am having issues.

Sorry for long back story but I think this will add color to issues.

My thoughts as to the root issues:
1) The HCI wizard has issues using only the drives it is told to and ignoring the
other data drives in the system... VDO for example: I saw notes about a failed
attempt... but it should not have touched that volume... it should have just used
the one it needed and ignored the rest.
2) HCI wizard bug of ignoring the user-set /dev/sd?  for each server; again,
another failed attempt where clean-up may not have run (noted this in my posting
about the manual edit .. and apply button :P to ingest).
3) HCI wizard bug with the name I was using (device ID vs /dev/sd?, which is, in
my opinion ... bad form.. but the name is too long); again, another clean-up where
things may not have been fully cleaned.. or I forgot to click clean...  and the
system was left in a non-pristine state.
4) The HCI wizard does NOT clean itself up properly if it fails... or, when I ran
clean-up, maybe it did not complete and I closed the wizard, which then created
this orphaned state.
5) HCI setup and post-setup need to add the LVM filtering themselves.


  With a perfect and pristine process .. it ran.   But only when all the other
learning and requirements to get it just right were set up first.  oVirt HCI is
so very close to being a great platform, well thought out and production class.
It just needs some more nerds beating on it to find these cracks, and to get the
GUI and setup polished.

My $0.002


On Mon, Sep 21, 2020 at 8:06 AM Nir Soffer  wrote:

> On Mon, Sep 21, 2020 at 9:02 AM Jeremey Wise 
> wrote:
> >
> >
> >
> >
> >
> >
> > vdo: ERROR - Device /dev/sdc excluded by a filter
> >
> >
> >
> >
> >
> > Other server
> >
> > vdo: ERROR - Device
> /dev/mapper/nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001p1
> excluded by a filter.
> >
> >
> >
> > All systems when I go to create VDO volume on blank drives.. I get this
> filter error.  All disk outside of the HCI wizard setup are now blocked
> from creating new Gluster volume group.
> >
> > Here is what I see in /dev/lvm/lvm.conf |grep filter
> > [root@odin ~]# cat /etc/lvm/lvm.conf |grep filter
> > filter =
> ["a|^/dev/disk/by-id/lvm-pv-uuid-e1fvwo-kEfX-v3lT-SKBp-cgze-TwsO-PtyvmC$|",
> "a|^/dev/disk/by-id/lvm-pv-uuid-mr9awW-oQH5-F4IX-CbEO-RgJZ-x4jK-e4YZS1$|",
> "r|.*|"]
>
> This filter is correct for a normal oVirt host. But gluster wants to
> use more local disks,
> so you should:
>
> 1. remove the lvm filter
> 2. configure gluster
> 3. create the lvm filter
>
This will create a filter including all the mounted logical volumes created by gluster.

[ovirt-users] Re: oVirt - KVM QCow2 Import

2020-09-21 Thread Nir Soffer
On Mon, Sep 21, 2020 at 9:11 AM Jeremey Wise  wrote:
>
>
> I rebuilt my lab environment.   And their are four or five VMs that really 
> would help if I did not have to rebuild.
>
> oVirt as I am now finding when it creates infrastructure, sets it out such 
> that I cannot just use older  means of placing .qcow2 files in folders and 
> .xml files in other folders and they show up on services restarting.
>
> How do I import VMs from files?

You did not share the oVirt version, so I'm assuming 4.4.

The simplest way is to upload the qcow2 images to oVirt, and create a new
VM with the new disk.

On the hypervisor where the files are located, install the required packages:

dnf install ovirt-imageio-client python3-ovirt-engine-sdk4

And upload the image:

python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py \
--engine-url https://my.engine/ \
--username admin@internal \
--password-file /path/to/password/file \
--cafile /path/to/cafile \
--sd-name my-storage-domain \
--disk-sparse \
/path/to/image.qcow2

This will upload the file in qcow2 format to whatever type of storage you
have. You can change the format if you like using --disk-format. See --help
for all the options.
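
If you don't already have the two auxiliary files the example expects, something
like this can prepare them (the file paths are only an illustration; the
pki-resource URL is the engine's CA download endpoint):

echo -n 'mypassword' > /root/engine-password   # used via --password-file (illustrative path)
chmod 600 /root/engine-password
curl -k -o /root/ca.pem \
  'https://my.engine/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'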

We also support importing from libvirt, but for this you need to have the VM
defined in libvirt. If you don't have this, it will probably be easier to upload
the images and create a new VM in oVirt.

Nir

> I found this article but implies VM is running: 
> https://www.ovirt.org/develop/release-management/features/virt/KvmToOvirt.html
> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/administration_guide/sect-adding_external_providers#Adding_KVM_as_an_External_Provider
>
> I need a way to import a file.  Even if it means temporarily hosting on "KVM 
> on one of the hosts to then bring in once it is up.
>
>
> Thanks
> --
>
> penguinpages
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/6LSE4MNEBGODIRPVAQCUNBO2KGCCQTM5/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/H25R25XLNHEP2EQQ4X62PESPRXUTGX4Y/


[ovirt-users] Re: Question on "Memory" column/field in Virtual Machines list/table in ovirt GUI

2020-09-21 Thread KISHOR K
Hi,

I think I already checked that.
What I meant (since the beginning) was that oVirt reports memory usage in the GUI
the same way regardless of whether the guest is CentOS or SLES, in our case.
My main question is why oVirt reports the memory usage percentage based on "free"
memory and not on "available" memory, which is basically the sum of "free" and
"buff/cache".
Buffer/cache is temporary memory that gets released anyway for new processes and
applications.
That means oVirt should consider the actual available memory left and report
usage accordingly in the GUI, but what we see now is different behavior.
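
To illustrate the difference inside a guest (plain shell, nothing oVirt-specific;
it assumes a kernel new enough to expose MemAvailable in /proc/meminfo):

# compare a "free"-based usage figure with an "available"-based one
awk '/MemTotal|MemFree|MemAvailable/ {m[$1]=$2}
     END {
       printf "used%% counting only MemFree:  %.0f%%\n", 100*(m["MemTotal:"]-m["MemFree:"])/m["MemTotal:"];
       printf "used%% counting MemAvailable: %.0f%%\n", 100*(m["MemTotal:"]-m["MemAvailable:"])/m["MemTotal:"]
     }' /proc/meminfo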
I was very worried when I saw the memory usage at 98%, highlighted in red, for
many of the VMs in the GUI. But when I checked the actual memory used by a VM, it
was always below 50%.

Could you clarify how this behavior from oVirt can be OS-specific?

I hope I explained the issue clearly; let me know if it is still unclear.
Thanks in advance.

/Kishore
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HAJ6TT74U33FAFIJTXTYZHVHYKKSWMN7/


[ovirt-users] Re: oVirt - vdo: ERROR - Device /dev/sd excluded by a filter

2020-09-21 Thread Nir Soffer
On Mon, Sep 21, 2020 at 9:02 AM Jeremey Wise  wrote:
>
>
>
>
>
>
> vdo: ERROR - Device /dev/sdc excluded by a filter
>
>
>
>
>
> Other server
>
> vdo: ERROR - Device 
> /dev/mapper/nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001p1
>  excluded by a filter.
>
>
>
> All systems when I go to create VDO volume on blank drives.. I get this 
> filter error.  All disk outside of the HCI wizard setup are now blocked from 
> creating new Gluster volume group.
>
> Here is what I see in /dev/lvm/lvm.conf |grep filter
> [root@odin ~]# cat /etc/lvm/lvm.conf |grep filter
> filter = 
> ["a|^/dev/disk/by-id/lvm-pv-uuid-e1fvwo-kEfX-v3lT-SKBp-cgze-TwsO-PtyvmC$|", 
> "a|^/dev/disk/by-id/lvm-pv-uuid-mr9awW-oQH5-F4IX-CbEO-RgJZ-x4jK-e4YZS1$|", 
> "r|.*|"]

This filter is correct for a normal oVirt host. But gluster wants to
use more local disks,
so you should:

1. remove the lvm filter
2. configure gluster
3. create the lvm filter

This will create a filter including all the mounted logical volumes
created by gluster.

Can you explain how you reproduce this?

The lvm filter is created when you add a host to engine. Did you add the host
to engine before configuring gluster? Or maybe you are trying to add a host that
was used previously by oVirt?

In the latter case, removing the filter before installing gluster will
fix the issue.

Nir

> [root@odin ~]# ls -al /dev/disk/by-id/
> total 0
> drwxr-xr-x. 2 root root 1220 Sep 18 14:32 .
> drwxr-xr-x. 6 root root  120 Sep 18 14:32 ..
> lrwxrwxrwx. 1 root root9 Sep 18 22:40 
> ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN -> ../../sda
> lrwxrwxrwx. 1 root root   10 Sep 18 22:40 
> ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part1 -> ../../sda1
> lrwxrwxrwx. 1 root root   10 Sep 18 22:40 
> ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part2 -> ../../sda2
> lrwxrwxrwx. 1 root root9 Sep 18 14:32 
> ata-Micron_1100_MTFDDAV512TBN_17401F699137 -> ../../sdb
> lrwxrwxrwx. 1 root root9 Sep 18 22:40 
> ata-WDC_WDS100T2B0B-00YS70_183533804564 -> ../../sdc
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 dm-name-cl-home -> ../../dm-2
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 dm-name-cl-root -> ../../dm-0
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 dm-name-cl-swap -> ../../dm-1
> lrwxrwxrwx. 1 root root   11 Sep 18 16:40 
> dm-name-gluster_vg_sdb-gluster_lv_data -> ../../dm-11
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 
> dm-name-gluster_vg_sdb-gluster_lv_engine -> ../../dm-6
> lrwxrwxrwx. 1 root root   11 Sep 18 16:40 
> dm-name-gluster_vg_sdb-gluster_lv_vmstore -> ../../dm-12
> lrwxrwxrwx. 1 root root   10 Sep 18 23:35 
> dm-name-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001
>  -> ../../dm-3
> lrwxrwxrwx. 1 root root   10 Sep 18 23:49 
> dm-name-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001p1
>  -> ../../dm-4
> lrwxrwxrwx. 1 root root   10 Sep 18 14:32 dm-name-vdo_sdb -> ../../dm-5
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 
> dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADc49gc6PWLRBCoJ2B3JC9tDJejyx5eDPT 
> -> ../../dm-1
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 
> dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADOMNJfgcat9ZLOpcNO7FyG8ixcl5s93TU 
> -> ../../dm-2
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 
> dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADzqPGk0yTQ19FIqgoAfsCxWg7cDMtl71r 
> -> ../../dm-0
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 
> dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOq6Om5comvRFWJDbtVZAKtE5YGl4jciP9 
> -> ../../dm-6
> lrwxrwxrwx. 1 root root   11 Sep 18 16:40 
> dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOqVheASEgerWSEIkjM1BR3us3D9ekHt0L 
> -> ../../dm-11
> lrwxrwxrwx. 1 root root   11 Sep 18 16:40 
> dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOQz6vXuivIfup6cquKAjPof8wIGOSe4Vz 
> -> ../../dm-12
> lrwxrwxrwx. 1 root root   10 Sep 18 23:35 
> dm-uuid-mpath-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001
>  -> ../../dm-3
> lrwxrwxrwx. 1 root root   10 Sep 18 23:49 
> dm-uuid-part1-mpath-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001
>  -> ../../dm-4
> lrwxrwxrwx. 1 root root   10 Sep 18 14:32 
> dm-uuid-VDO-472035cc-8d2b-40ac-afe9-fa60b62a887f -> ../../dm-5
> lrwxrwxrwx. 1 root root   10 Sep 18 14:32 
> lvm-pv-uuid-e1fvwo-kEfX-v3lT-SKBp-cgze-TwsO-PtyvmC -> ../../dm-5
> lrwxrwxrwx. 1 root root   10 Sep 18 22:40 
> lvm-pv-uuid-mr9awW-oQH5-F4IX-CbEO-RgJZ-x4jK-e4YZS1 -> ../../sda2
> lrwxrwxrwx. 1 root root   13 Sep 18 14:32 
> nvme-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001
>  -> ../../nvme0n1
> lrwxrwxrwx. 1 root root   15 Sep 18 14:32 
> nvme-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001-part1
>  -> ../../nvme0n1p1
> lrwxrwxrwx. 1 root root   13 Sep 18 14:32 
> nvme-SPCC_M.2_PCIe_SSD_AA002458 -> ../../nvme0n1
> lrwxrwxrwx. 1 root ro

[ovirt-users] Re: oVirt - vdo: ERROR - Device /dev/sd excluded by a filter

2020-09-21 Thread Vojtech Juranek
On sobota 19. září 2020 5:58:43 CEST Jeremey Wise wrote:
> [image: image.png]
> 
> vdo: ERROR - Device /dev/sdc excluded by a filter
> 
> [image: image.png]

When does this error happen? When you install oVirt HCI?


> Where is getting this filter.
> I have done gdisk /dev/sdc ( new 1TB Drive) and shows no partition.  I even
> did a full dd if=/dev/zero   and no change.

It's installed by vdsm to exclude oVirt-managed devices from common use.



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OK6TE456D37ZHGVT7XBDFLKLT6EH4HAM/


[ovirt-users] Re: Cannot import VM disks from previously detached storage domain

2020-09-21 Thread Eyal Shenitzky
Hi Strahil,

Maybe those VMs have more disks on other data storage domains?
If so, those VMs will remain in the environment with the disks that are not
based on the detached storage domain.

You can try to import the VM as partial; another option is to remove the VM
that remained in the environment but keep its disks, so you will be able to
import the VM again and attach the disks to it.

On Sat, 19 Sep 2020 at 15:49, Strahil Nikolov via Users 
wrote:

> Hello All,
>
> I would like to ask how to proceed further.
>
> Here is what I have done so far on my ovirt 4.3.10:
> 1. Set in maintenance and detached my Gluster-based storage domain
> 2. Did some maintenance on the gluster
> 3. Reattached and activated my Gluster-based storage domain
> 4. I have imported my ISOs via the Disk Import tab in UI
>
> Next I tried to import the VM Disks , but they are unavailable in the disk
> tab
> So I tried to import the VM:
> 1. First try - import with partial -> failed due to MAC conflict
> 2. Second try - import with partial , allow MAC reassignment -> failed as
> VM id exists -> recommends to remove the original VM
> 3. I tried to detach the VMs disks , so I can delete it - but this is not
> possible as the Vm already got a snapshot.
>
>
> What is the proper way to import my non-OS disks (data domain is slower
> but has more space which is more suitable for "data") ?
>
>
> Best Regards,
> Strahil Nikolov
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/WTJXOIVDWU6DGVZQQ243VKGWJLPKHR4L/
>


-- 
Regards,
Eyal Shenitzky
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NGMSYDEYCGYW43OHMBRI46WWVREOUQYE/


[ovirt-users] Re: in ovirt-engine all host show the status at "non-responsive"

2020-09-21 Thread Yedidyah Bar David
On Mon, Sep 21, 2020 at 11:18 AM momokch--- via Users  wrote:
>
> What you have done???
> i just regenerate the ovirt-engine cert according to the link below
> https://lists.ovirt.org/pipermail/users/2014-April/023402.html
>
>
> # cp -a /etc/pki/ovirt-engine "/etc/pki/ovirt-engine.$(date
> "+%Y%m%d")"
> # SUBJECT="$(openssl x509 -subject -noout -in 
> /etc/pki/ovirt-engine/certs/apache.cer
> | sed 's/subject= //')"
> # /usr/share/ovirt-engine/bin/pki-enroll-pkcs12.sh --name=apache
> --password="@PASSWORD@" --subject="${SUBJECT}"
> # openssl pkcs12 -passin "pass:@PASSWORD@" -nokeys -in
> /etc/pki/ovirt-engine/keys/apache.p12 > /etc/pki/ovirt-engine/certs/apache.cer
> # openssl pkcs12 -passin "pass:@PASSWORD@" -nocerts -nodes -in
> /etc/pki/ovirt-engine/keys/apache.p12 > 
> /etc/pki/ovirt-engine/keys/apache.key.nopass
> # chmod 0600 /etc/pki/ovirt-engine/keys/apache.key.nopass
>
>
>
> What version is this? 4.0.6.3-1el7.centos
>
>  When was it set up? 15,NOV, 2016

What happens if you try to Activate them?

Please check/share /var/log/ovirt-engine/engine.log. Search there for
"nonresponsive" (case insensitive) and ' ERROR '.
-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZX7MCFBBO5X2CKJWWV3QW4AVYKILGMM4/


[ovirt-users] Re: in ovirt-engine all host show the status at "non-responsive"

2020-09-21 Thread momokch--- via Users
What you have done???
i just regenerate the ovirt-engine cert according to the link below
https://lists.ovirt.org/pipermail/users/2014-April/023402.html


# cp -a /etc/pki/ovirt-engine "/etc/pki/ovirt-engine.$(date
"+%Y%m%d")"
# SUBJECT="$(openssl x509 -subject -noout -in 
/etc/pki/ovirt-engine/certs/apache.cer
| sed 's/subject= //')"
# /usr/share/ovirt-engine/bin/pki-enroll-pkcs12.sh --name=apache
--password="@PASSWORD@" --subject="${SUBJECT}"
# openssl pkcs12 -passin "pass:@PASSWORD@" -nokeys -in
/etc/pki/ovirt-engine/keys/apache.p12 > /etc/pki/ovirt-engine/certs/apache.cer
# openssl pkcs12 -passin "pass:@PASSWORD@" -nocerts -nodes -in
/etc/pki/ovirt-engine/keys/apache.p12 > 
/etc/pki/ovirt-engine/keys/apache.key.nopass
# chmod 0600 /etc/pki/ovirt-engine/keys/apache.key.nopass
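
A quick sanity check on the regenerated certificate (plain openssl, nothing
oVirt-specific):

openssl x509 -noout -subject -enddate -in /etc/pki/ovirt-engine/certs/apache.cer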



What version is this? 4.0.6.3-1el7.centos

 When was it set up? 15,NOV, 2016

Is there any method that does not require stopping all the VMs? They are running
as a service.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DNCNCXUV256PKJ2A3SD2X22QC2UHMRRP/


[ovirt-users] Re: oVirt - vdo: ERROR - Device /dev/sd excluded by a filter

2020-09-21 Thread Parth Dhanjal
Hey!

Can you try editing the LVM filter and including the sdc device in it?
I see that it is missing, hence the error that sdc is excluded by a filter.
Add "a|^/dev/sdc$|" to the lvm filter and try again.

Thanks

On Mon, Sep 21, 2020 at 11:34 AM Jeremey Wise 
wrote:

>
>
>
> [image: image.png]
>
> vdo: ERROR - Device /dev/sdc excluded by a filter
>
> [image: image.png]
>
>
> Other server
> vdo: ERROR - Device
> /dev/mapper/nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001p1
> excluded by a filter.
>
>
> All systems when I go to create VDO volume on blank drives.. I get this
> filter error.  All disk outside of the HCI wizard setup are now blocked
> from creating new Gluster volume group.
>
> Here is what I see in /dev/lvm/lvm.conf |grep filter
> [root@odin ~]# cat /etc/lvm/lvm.conf |grep filter
> filter =
> ["a|^/dev/disk/by-id/lvm-pv-uuid-e1fvwo-kEfX-v3lT-SKBp-cgze-TwsO-PtyvmC$|",
> "a|^/dev/disk/by-id/lvm-pv-uuid-mr9awW-oQH5-F4IX-CbEO-RgJZ-x4jK-e4YZS1$|",
> "r|.*|"]
>
> [root@odin ~]# ls -al /dev/disk/by-id/
> total 0
> drwxr-xr-x. 2 root root 1220 Sep 18 14:32 .
> drwxr-xr-x. 6 root root  120 Sep 18 14:32 ..
> lrwxrwxrwx. 1 root root9 Sep 18 22:40
> ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN -> ../../sda
> lrwxrwxrwx. 1 root root   10 Sep 18 22:40
> ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part1 -> ../../sda1
> lrwxrwxrwx. 1 root root   10 Sep 18 22:40
> ata-INTEL_SSDSC2BB080G4_BTWL40350DXP080KGN-part2 -> ../../sda2
> lrwxrwxrwx. 1 root root9 Sep 18 14:32
> ata-Micron_1100_MTFDDAV512TBN_17401F699137 -> ../../sdb
> lrwxrwxrwx. 1 root root9 Sep 18 22:40
> ata-WDC_WDS100T2B0B-00YS70_183533804564 -> ../../sdc
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 dm-name-cl-home -> ../../dm-2
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 dm-name-cl-root -> ../../dm-0
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40 dm-name-cl-swap -> ../../dm-1
> lrwxrwxrwx. 1 root root   11 Sep 18 16:40
> dm-name-gluster_vg_sdb-gluster_lv_data -> ../../dm-11
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40
> dm-name-gluster_vg_sdb-gluster_lv_engine -> ../../dm-6
> lrwxrwxrwx. 1 root root   11 Sep 18 16:40
> dm-name-gluster_vg_sdb-gluster_lv_vmstore -> ../../dm-12
> lrwxrwxrwx. 1 root root   10 Sep 18 23:35
> dm-name-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001
> -> ../../dm-3
> lrwxrwxrwx. 1 root root   10 Sep 18 23:49
> dm-name-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001p1
> -> ../../dm-4
> lrwxrwxrwx. 1 root root   10 Sep 18 14:32 dm-name-vdo_sdb -> ../../dm-5
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40
> dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADc49gc6PWLRBCoJ2B3JC9tDJejyx5eDPT
> -> ../../dm-1
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40
> dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADOMNJfgcat9ZLOpcNO7FyG8ixcl5s93TU
> -> ../../dm-2
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40
> dm-uuid-LVM-GpvYIuypEfrR7nEDn5uHPenKwjrsn4ADzqPGk0yTQ19FIqgoAfsCxWg7cDMtl71r
> -> ../../dm-0
> lrwxrwxrwx. 1 root root   10 Sep 18 16:40
> dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOq6Om5comvRFWJDbtVZAKtE5YGl4jciP9
> -> ../../dm-6
> lrwxrwxrwx. 1 root root   11 Sep 18 16:40
> dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOqVheASEgerWSEIkjM1BR3us3D9ekHt0L
> -> ../../dm-11
> lrwxrwxrwx. 1 root root   11 Sep 18 16:40
> dm-uuid-LVM-ikNfztYY7KGT1SI2WYXPz4DhM2cyTelOQz6vXuivIfup6cquKAjPof8wIGOSe4Vz
> -> ../../dm-12
> lrwxrwxrwx. 1 root root   10 Sep 18 23:35
> dm-uuid-mpath-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001
> -> ../../dm-3
> lrwxrwxrwx. 1 root root   10 Sep 18 23:49
> dm-uuid-part1-mpath-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001
> -> ../../dm-4
> lrwxrwxrwx. 1 root root   10 Sep 18 14:32
> dm-uuid-VDO-472035cc-8d2b-40ac-afe9-fa60b62a887f -> ../../dm-5
> lrwxrwxrwx. 1 root root   10 Sep 18 14:32
> lvm-pv-uuid-e1fvwo-kEfX-v3lT-SKBp-cgze-TwsO-PtyvmC -> ../../dm-5
> lrwxrwxrwx. 1 root root   10 Sep 18 22:40
> lvm-pv-uuid-mr9awW-oQH5-F4IX-CbEO-RgJZ-x4jK-e4YZS1 -> ../../sda2
> lrwxrwxrwx. 1 root root   13 Sep 18 14:32
> nvme-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001
> -> ../../nvme0n1
> lrwxrwxrwx. 1 root root   15 Sep 18 14:32
> nvme-nvme.126f-4141303030303030303030303030303032343538-53504343204d2e32205043496520535344-0001-part1
> -> ../../nvme0n1p1
> lrwxrwxrwx. 1 root root   13 Sep 18 14:32
> nvme-SPCC_M.2_PCIe_SSD_AA002458 -> ../../nvme0n1
> lrwxrwxrwx. 1 root root   15 Sep 18 14:32
> nvme-SPCC_M.2_PCIe_SSD_AA002458-part1 -> ../../nvme0n1p1
> lrwxrwxrwx. 1 root root9 Sep 18 22:40
> scsi-0ATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN -> ../../sda
> lrwxrwxrwx. 1 root root   10 Sep 18 22:40
> scsi-0ATA_INTEL_SSDSC2BB08_BTWL40350DXP080KGN-part1 -> ../../sda1
> lrwxrwxrwx. 1 root root   10 Sep 18 22:40
> scsi-0ATA_INTEL_SSDSC2BB08