[ovirt-users] VDSM ovirt-node-2 command Get Host Capabilities failed: Internal JSON-RPC error: {'reason': "invalid argument: KVM is not supported by '/usr/libexec/qemu-kvm' on this host"}

2019-08-08 Thread wangyu13476969128
The version of ovirt-engine is 4.3.5.5-1.el7.

The version of ovirt-node-2 is 4.3.5.2-1.el7.

When I add ovirt-node-2 to ovirt-engine, it reports:
VDSM ovirt-node-2 command Get Host Capabilities failed: Internal JSON-RPC 
error: {'reason': "invalid argument: KVM is not supported by 
'/usr/libexec/qemu-kvm' on this host"}

What is the root cause of this problem? How can I solve it?
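A few generic checks can narrow this down (a hedged sketch; virt-host-validate
comes from the libvirt-client package):

# is hardware virtualization (Intel VT-x / AMD-V) visible to the host?
grep -cE 'vmx|svm' /proc/cpuinfo

# is the KVM device present and are the modules loaded?
ls -l /dev/kvm
lsmod | grep kvm

# libvirt's own host validation
virt-host-validate

If the grep count is 0, virtualization is disabled in the BIOS/UEFI, or the
host is itself a VM without nested virtualization enabled.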
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/O4FTJ7F5JVMMJB622WZWE5FAU2UYBJG4/


[ovirt-users] Re: Ovirt 4.3.5.4-1.el7 noVNC keeps disconnecting with 1006

2019-08-08 Thread Strahil
Hi Ryan,

I'm using noVNC and not regular VNC.

Last login: Thu Aug  8 16:40:50 2019 from ovirt1.localdomain
[root@engine ~]# firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0
  sources:
  services: ssh dhcpv6-client ovirt-postgres ovirt-https 
ovn-central-firewall-service ovirt-fence-kdump-listener ovirt-imageio-proxy 
ovirt-websocket-proxy ovirt-http ovirt-vmconsole-proxy ovirt-provider-ovn smtp
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
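
For what it's worth, the ports behind those service names can be checked
directly (a hedged example; these service definitions ship with engine setup,
but names may vary per installation):

firewall-cmd --info-service=ovirt-websocket-proxy
firewall-cmd --info-service=ovirt-https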

Best Regards,
Strahil Nikolov
On Aug 8, 2019 14:13, Ryan Barry wrote:
> On Thu, Aug 8, 2019 at 4:26 AM Sandro Bonazzola wrote:
> >
> > Il giorno dom 4 ago 2019 alle ore 16:11 Strahil Nikolov ha scritto:
> >>
> >> Hello Community,
> >>
> >> did anyone experience disconnects after a minute or 2 (seems random, but I
> >> will check it out) with error code 1006?
> >> Can someone with noVNC reproduce that behaviour?
> >>
> >> As I manage to connect, it seems strange to me to lose connection like
> >> that. The VM was not migrated - so it should be something else.
>
> Can you please post firewall details and console.vv file? It should
> not be possible to connect with the firewall in the way, but I would
> wonder if something is changing it from behind
>
> > @Ryan Barry , @Michal Skrivanek any clue?
> >
> >> Best Regards,
> >> Strahil Nikolov
> >> ___
> >> Users mailing list -- users@ovirt.org
> >> To unsubscribe send an email to users-le...@ovirt.org
> >> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> >> oVirt Code of Conduct:
> >> https://www.ovirt.org/community/about/community-guidelines/
> >> List Archives:
> >> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UVBK5NWKAHXH2KREVRSVES3U75ZDQ34L/
> >
> > --
> > Sandro Bonazzola
> > MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> > Red Hat EMEA
> > sbona...@redhat.com
> > Red Hat respects your work life balance. Therefore there is no need to
> > answer this email out of your office hours.
>
> --
> Ryan Barry
> Associate Manager - RHV Virt/SLA
> rba...@redhat.com    M: +16518159306 IM: rbarry
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/YONQ3MYAIGPKFJJE3BNXABI3FDX6VWSL/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6WCPPH4PSST2YWUMCX4SXCLJDWCTT6NR/


[ovirt-users] Re: RFE: Add the ability to the engine to serve as a fencing proxy

2019-08-08 Thread Strahil
I think poison pill-based fencing would be easier to implement, but it requires
either network-based (iSCSI or NFS) or FC-based shared storage.

It is already used in corosync/pacemaker clusters.
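
For reference, a minimal poison-pill (SBD) setup on a corosync/pacemaker
cluster looks roughly like this (a hedged sketch assuming a shared block
device at the placeholder path /dev/disk/by-id/SHARED; this is not an oVirt
feature today):

# initialize the poison-pill device (destroys data on it!)
sbd -d /dev/disk/by-id/SHARED create

# verify the message slots
sbd -d /dev/disk/by-id/SHARED list

# point the sbd daemon at the device
echo 'SBD_DEVICE="/dev/disk/by-id/SHARED"' >> /etc/sysconfig/sbd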

Best Regards,
Strahil Nikolov


On Aug 8, 2019 11:29, Sandro Bonazzola  wrote:
>
>
>
> Il giorno ven 2 ago 2019 alle ore 10:50 Sandro E  ha 
> scritto:
>>
>> Hi,
>>
>> I hope this reaches the right people. I found an RFE (Bug 1373957) which 
>> would be a really nice feature for my company, as we have to request firewall 
>> rules for every new host and this ends up in a lot of mess and work. Is 
>> there any chance that this RFE gets implemented? 
>>
>> Thanks for any help or tips 
>
>
> This RFE was filed in 2016 and didn't get much interest so far. Can you 
> elaborate a bit on the user story for this?
>
>
>  
>>
>>
>> BR,
>> Sandro
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UP7NZWXZBNHM7B7MNY5NMCAUK6UBPXXD/
>
>
>
> -- 
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA
>
> sbona...@redhat.com   
>
> Red Hat respects your work life balance. Therefore there is no need to answer 
> this email out of your office hours.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/G3IV6FFQHV2MJUXUPET5BXVBXX2J4P7Q/


[ovirt-users] Re: Problem with cloud-init (metrics install)

2019-08-08 Thread Chris Adams
How do you keep it from reverting back to DHCP on the next reboot?
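
One common approach, for reference, is to let cloud-init apply the settings
once and then tell it to stop managing the network (a hedged sketch, run
inside the guest; the drop-in file name is arbitrary):

cat > /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg <<'EOF'
network: {config: disabled}
EOF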

Once upon a time, Jayme  said:
> I found this a bit confusing myself.  I ended up having to do it manually
> by logging in to the VM and changing the IP afterward.
> 
> On Thu, Aug 8, 2019 at 11:21 AM Chris Adams  wrote:
> 
> > I'm following this guide:
> >
> >
> > https://ovirt.org/documentation/metrics-install-guide/Installing_Metrics_Store.html
> >
> > Specifically, the step "Setup virtual machine static IP and Mac
> > address".  The deploy does a bunch of stuff automatically, so the first
> > opportunity I have to do anything is after the VM is already booted.
> >
> > It seems that having a DHCP server, with matching reverse/forward DNS
> > entries for each IP, is a requirement, and that there's not a way to set
> > the metrics store VM to a static IP (despite having to have a DNS entry
> > pointing to an IP).
> >
> > Once upon a time, Liran Rotenberg  said:
> > > Hi Chris,
> > > Run Once option is different from normal run.
> > > For cloud-init you will need the following prerequisite:
> > > A sealed VM, for example if you wish to create a template:
> > >
> > https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html/virtual_machine_management_guide/chap-templates#Sealing_Virtual_Machines_in_Preparation_for_Deployment_as_Templates
> > > Cloud-init service should be installed and enabled on the VM (make
> > > sure before sealing the VM).
> > > Run will consume the cloud-init configuration only if it is the VM's
> > first run.
> > >
> > > Regards,
> > > Liran.
> > >
> > > On Thu, Aug 8, 2019 at 4:07 PM Chris Adams  wrote:
> > > >
> > > > I am trying to set up the oVirt Metrics Store, which uses cloud-init for
> > > > network settings, so I set the info under the "Initial Run" tab.
> > > > However, it doesn't seem to actually apply the network settings unless I
> > > > "run once" and enable cloud-init there.
> > > >
> > > > I haven't used cloud-init before (been on my to-do list to check out) -
> > > > am I missing something?
> > > >
> > > > --
> > > > Chris Adams 
> > > > ___
> > > > Users mailing list -- users@ovirt.org
> > > > To unsubscribe send an email to users-le...@ovirt.org
> > > > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > > > oVirt Code of Conduct:
> > https://www.ovirt.org/community/about/community-guidelines/
> > > > List Archives:
> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/UKBKKLQQBFDNSVEIKETOD5GQPVVX2LBT/
> > > ___
> > > Users mailing list -- users@ovirt.org
> > > To unsubscribe send an email to users-le...@ovirt.org
> > > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > > oVirt Code of Conduct:
> > https://www.ovirt.org/community/about/community-guidelines/
> > > List Archives:
> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/TAATB64XMPFMBV3TGO6BZOQ3RNGX7Q6A/
> >
> > --
> > Chris Adams 
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> > https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/PKZJG2AXVMWDXIT4R65DQ2BJI3OZF3OQ/
> >

> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UDRDSQQ4NJRN36WTKEKAJLQLOMK6B5FG/


-- 
Chris Adams 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HDNTABMK42QW27BBX7FNEISEPF7YEWCH/


[ovirt-users] Re: Error creating local storage domain: Internal Engine Error.

2019-08-08 Thread Gobinda Das
I think Storage format V5 is supported from ovirt-4.3.3.2  and vdsm-4.30.10
onwards.
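
For reference, the installed versions can be confirmed quickly (a hedged
sketch):

rpm -q vdsm            # on the host
rpm -q ovirt-engine    # on the engine machine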

On Thu, Aug 8, 2019 at 8:49 PM  wrote:

> Hi.
>
> I have opened a bug as suggested.
>
> Bug 1739134
>
> Thanks for your support.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/W7BZZY5UV72Y43UE7FTNEBTJYYFFKVVC/
>


-- 


Thanks,
Gobinda
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RNUUY77UF7PAOG64DDGHZD2PJXTXSMGN/


[ovirt-users] Re: Problem with cloud-init (metrics install)

2019-08-08 Thread Jayme
I found this a bit confusing myself.  I ended up having to do it manually
by logging in to the VM and changing the IP afterward.

On Thu, Aug 8, 2019 at 11:21 AM Chris Adams  wrote:

> I'm following this guide:
>
>
> https://ovirt.org/documentation/metrics-install-guide/Installing_Metrics_Store.html
>
> Specifically, the step "Setup virtual machine static IP and Mac
> address".  The deploy does a bunch of stuff automatically, so the first
> opportunity I have to do anything is after the VM is already booted.
>
> It seems that having a DHCP server, with matching reverse/forward DNS
> entries for each IP, is a requirement, and that there's not a way to set
> the metrics store VM to a static IP (despite having to have a DNS entry
> pointing to an IP).
>
> Once upon a time, Liran Rotenberg  said:
> > Hi Chris,
> > Run Once option is different from normal run.
> > For cloud-init you will need the following prerequisite:
> > A sealed VM, for example if you wish to create a template:
> >
> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html/virtual_machine_management_guide/chap-templates#Sealing_Virtual_Machines_in_Preparation_for_Deployment_as_Templates
> > Cloud-init service should be installed and enabled on the VM (make
> > sure before sealing the VM).
> > Run will consume the cloud-init configuration only if it is the VM's
> first run.
> >
> > Regards,
> > Liran.
> >
> > On Thu, Aug 8, 2019 at 4:07 PM Chris Adams  wrote:
> > >
> > > I am trying to set up the oVirt Metrics Store, which uses cloud-init for
> > > network settings, so I set the info under the "Initial Run" tab.
> > > However, it doesn't seem to actually apply the network settings unless I
> > > "run once" and enable cloud-init there.
> > >
> > > I haven't used cloud-init before (been on my to-do list to check out) -
> > > am I missing something?
> > >
> > > --
> > > Chris Adams 
> > > ___
> > > Users mailing list -- users@ovirt.org
> > > To unsubscribe send an email to users-le...@ovirt.org
> > > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > > List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UKBKKLQQBFDNSVEIKETOD5GQPVVX2LBT/
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TAATB64XMPFMBV3TGO6BZOQ3RNGX7Q6A/
>
> --
> Chris Adams 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/PKZJG2AXVMWDXIT4R65DQ2BJI3OZF3OQ/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UDRDSQQ4NJRN36WTKEKAJLQLOMK6B5FG/


[ovirt-users] Re: Error creating local storage domain: Internal Engine Error.

2019-08-08 Thread christian_barr
Hi.

I have opened a bug as suggested.

Bug 1739134 

Thanks for your support.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/W7BZZY5UV72Y43UE7FTNEBTJYYFFKVVC/


[ovirt-users] Re: ovirt-web-ui-1.5.3: immediate logout in VM portal

2019-08-08 Thread Matthias Leopold

Thank you very much!

I indeed set this to "-1" in the past and forgot about it. Now 
everything works as expected.


Matthias

On 08.08.19 at 16:13, Scott Dickerson wrote:

From the browser console log:
"09:54:32.004  debug  http GET[7] -> url: 
"/ovirt-engine/api/options/UserSessionTimeOutInterval", headers: 
{"Accept":"application/json","Authorization":"*","Accept-Language":"en_US","Filter":true} 
transport.js:74:9"
"09:54:32.141  debug  Reducing action: 
{"type":"SET_USER_SESSION_TIMEOUT_INTERVAL","payload":{"userSessionTimeoutInterval":-1}} 
utils.js:48:13"


Your engine "UserSessionTimeOutInterval" is set to -1.  VM Portal is 
interpreting this as "auto-logout a second ago" instead of "do not 
auto-logout".


The simple fix is to set that value to something >0 in your engine configs.

I filed https://github.com/oVirt/ovirt-web-ui/issues/1085 to account for 
a -1 value properly in VM Portal.



On Thu, Aug 8, 2019 at 4:07 AM Matthias Leopold wrote:




On 08.08.19 at 07:49, Scott Dickerson wrote:
 >
 >
 >  > On Wed, Aug 7, 2019 at 11:06 AM Sharon Gratch wrote:
 >
 >     Hi,
 >     @Scott Dickerson, the session logout
 >     issue for VM portal 1.5.3 was handled in the following PRs:
 > https://github.com/oVirt/ovirt-web-ui/pull/1014
 > https://github.com/oVirt/ovirt-web-ui/pull/1025
 >
 >     Any idea on what can be the problem?
 >
 >
 > That is very strange.  We saw a problem similar to that where, when
 > web-ui is starting up, the time it took for the app to fetch the
 > "UserSessionTimeOutInterval" config value was longer than the
time it
 > took to load the auto-logout component.  In that case the value was
 > considered to be 0 and auto logged the user out right away.  That
issue
 > was dealt with in PR 1025 and the whole login data load process was
 > synchronized properly in PR 1049.
 >
 >  > I need some additional info:
 >    - The browser console logs from when the page loads to when
they're
 > logged out
 >    - the "yum info ovirt-web-ui"
 >
 > I'll be able to better triage the problem with that info.
 >

Thanks to all for replies. I sent the requested info directly to Scott
Dickerson.

Matthias



--
Scott Dickerson
Senior Software Engineer
RHV-M Engineering - UX Team
Red Hat, Inc


--
Matthias Leopold
IT Systems & Communications
Medizinische Universität Wien
Spitalgasse 23 / BT 88 /Ebene 00
A-1090 Wien
Tel: +43 1 40160-21241
Fax: +43 1 40160-921200
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UQMMGJMJ7AFJGJHSJ7HVOTUCLG6CV6QL/


[ovirt-users] Re: VM --- is not responding.

2019-08-08 Thread Sandro Bonazzola
Il giorno gio 8 ago 2019 alle ore 11:19 Edoardo Mazza 
ha scritto:

> Hi all,
> For several days now I have received this error for the same VM, but I don't
> understand why.
> The traffic of the virtual machine is not excessive, and CPU and RAM are fine, but
> for a few minutes the VM is not responding, and in the messages log file of
> the VM I receive the error below. Can you help me?
> thanks
>

can you check the S.M.A.R.T. health status of the disks?
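
For reference (a hedged sketch; smartctl comes from the smartmontools package
and the device name is a placeholder):

smartctl -H /dev/sda    # quick overall health verdict
smartctl -a /dev/sda    # full attribute dump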



> Edoardo
> kernel: NMI watchdog: BUG: soft lockup - CPU#2 stuck for 25s!
> [kworker/2:0:26227]
> Aug  8 02:51:11 vmmysql kernel: Modules linked in: binfmt_misc
> ip6t_rpfilter ipt_REJECT nf_reject_ipv4 ip6t_REJECT nf_reject_
> ipv6 xt_conntrack ip_set nfnetlink ebtable_nat ebtable_broute bridge stp
> llc ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_
> nat_ipv6 ip6table_mangle ip6table_security ip6table_raw iptable_nat
> nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_con
> ntrack iptable_mangle iptable_security iptable_raw ebtable_filter ebtables
> ip6table_filter ip6_tables iptable_filter snd_hda_c
> odec_generic iosf_mbi crc32_pclmul ppdev ghash_clmulni_intel snd_hda_intel
> snd_hda_codec aesni_intel snd_hda_core lrw gf128mul
>  glue_helper ablk_helper snd_hwdep cryptd snd_seq snd_seq_device snd_pcm
> snd_timer snd soundcore virtio_rng sg virtio_balloon
> i2c_piix4 parport_pc parport joydev pcspkr ip_tables xfs libcrc32c sd_mod
> Aug  8 02:51:14 vmmysql kernel: crc_t10dif crct10dif_generic sr_mod cdrom
> virtio_net virtio_console virtio_scsi ata_generic p
> ata_acpi crct10dif_pclmul crct10dif_common crc32c_intel serio_raw qxl
> floppy drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm
> ata_piix libata virtio_pci drm_panel_orientation_quirks virtio_ring virtio
> dm_mirror dm_region_hash dm_log dm_mod
> Aug  8 02:51:14 vmmysql kernel: CPU: 2 PID: 26227 Comm: kworker/2:0 Kdump:
> loaded Tainted: G L    3.10.0-957.12.1.el7.x86_64 #1
> Aug  8 02:51:14 vmmysql kernel: Hardware name: oVirt oVirt Node, BIOS
> 1.11.0-2.el7 04/01/2014
> Aug  8 02:51:14 vmmysql kernel: Workqueue: events_freezable
> disk_events_workfn
> Aug  8 02:51:14 vmmysql kernel: task: 9e25b6609040 ti:
> 9e27b161 task.ti: 9e27b161
> Aug  8 02:51:14 vmmysql kernel: RIP: 0010:[]
>  [] _raw_spin_unlock_irqrestore+0x15/0x20
> Aug  8 02:51:14 vmmysql kernel: RSP: :9e27b1613a68  EFLAGS:
> 0286
> Aug  8 02:51:14 vmmysql kernel: RAX: 0001 RBX:
> 9e27b1613a10 RCX: 9e27b72a3d05
> Aug  8 02:51:14 vmmysql kernel: RDX: 9e27b729a420 RSI:
> 0286 RDI: 0286
> Aug  8 02:51:14 vmmysql kernel: RBP: 9e27b1613a68 R08:
> 0001 R09: 9e25b67fc198
> Aug  8 02:51:14 vmmysql kernel: R10: 9e27b45bd8d8 R11:
>  R12: 9e25b67fde80
> Aug  8 02:51:14 vmmysql kernel: R13: 9e25b67fc000 R14:
> 9e25b67fc158 R15: c032f8e0
> Aug  8 02:51:14 vmmysql kernel: FS:  ()
> GS:9e27b728() knlGS:
> Aug  8 02:51:14 vmmysql kernel: CS:  0010 DS:  ES:  CR0:
> 80050033
> Aug  8 02:51:14 vmmysql kernel: CR2: 7f0c9e9b6008 CR3:
> 00023248 CR4: 003606e0
> Aug  8 02:51:14 vmmysql kernel: DR0:  DR1:
>  DR2: 
> Aug  8 02:51:14 vmmysql kernel: DR3:  DR6:
> fffe0ff0 DR7: 0400
> Aug  8 02:51:14 vmmysql kernel: Call Trace:
> Aug  8 02:51:14 vmmysql kernel: []
> ata_scsi_queuecmd+0x155/0x450 [libata]
> Aug  8 02:51:14 vmmysql kernel: [] ?
> ata_scsiop_inq_std+0xf0/0xf0 [libata]
> Aug  8 02:51:14 vmmysql kernel: []
> scsi_dispatch_cmd+0xb0/0x240
> Aug  8 02:51:14 vmmysql kernel: []
> scsi_request_fn+0x4cc/0x680
> Aug  8 02:51:14 vmmysql kernel: []
> __blk_run_queue+0x39/0x50
> Aug  8 02:51:14 vmmysql kernel: []
> blk_execute_rq_nowait+0xb5/0x170
> Aug  8 02:51:14 vmmysql kernel: []
> blk_execute_rq+0x8b/0x150
> Aug  8 02:51:14 vmmysql kernel: [] ?
> bio_phys_segments+0x19/0x20
> Aug  8 02:51:14 vmmysql kernel: [] ?
> blk_rq_bio_prep+0x31/0xb0
> Aug  8 02:51:14 vmmysql kernel: [] ?
> blk_rq_map_kern+0xc7/0x180
> Aug  8 02:51:14 vmmysql kernel: []
> scsi_execute+0xd3/0x170
> Aug  8 02:51:14 vmmysql kernel: []
> scsi_execute_req_flags+0x8e/0x100
> Aug  8 02:51:14 vmmysql kernel: []
> sr_check_events+0xbc/0x2d0 [sr_mod]
> Aug  8 02:51:14 vmmysql kernel: []
> cdrom_check_events+0x1e/0x40 [cdrom]
> Aug  8 02:51:14 vmmysql kernel: []
> sr_block_check_events+0xb1/0x120 [sr_mod]
> Aug  8 02:51:14 vmmysql kernel: []
> disk_check_events+0x66/0x190
> Aug  8 02:51:14 vmmysql kernel: []
> disk_events_workfn+0x16/0x20
> Aug  8 02:51:14 vmmysql kernel: []
> process_one_work+0x17f/0x440
> Aug  8 02:51:14 vmmysql kernel: []
> worker_thread+0x126/0x3c0
> Aug  8 02:51:14 vmmysql kernel: [] ?
> manage_workers.isra.25+0x2a0/0x2a0
> Aug  8 02:51:14 vmmysql kernel: [] kthread+0xd1/0xe0
> Aug  8 02:51:14 vmmysql kernel: [] ?
> 

[ovirt-users] Re: Error creating local storage domain: Internal Engine Error.

2019-08-08 Thread Shani Leviim
Hi Barman,
It seems that for a local DC, the storage format selected is the latest one
(V5) by default, and this should be changed.

Please open a bug in Bugzilla for that: https://bugzilla.redhat.com/,
and reply back with its tracking id.


*Regards,*

*Shani Leviim*


On Thu, Aug 8, 2019 at 3:36 PM Shani Leviim  wrote:

> Hi Barman,
> Can you please attach a full engine log?
> Also attaching a screenshot would be great.
>
>
> *Regards,*
>
> *Shani Leviim*
>
>
> On Thu, Aug 8, 2019 at 2:44 PM  wrote:
>
>> Hello.
>>
>> I'm new to ovirt and trying to set up a sandbox on an old Dell
>> workstation I have.  Any help greatly appreciated.
>>
>> I have created a 4.2 Compatible DC and Cluster.  I'm able to add the host
>> and that checks in OK.
>> It's an older system, hence going with 4.2 for processor support.
>>
>> When I try to create a local storage domain, it fails.
>>
>> The error returned to the screen is :  Error while executing action New
>> Local Storage Domain: Internal Engine Error
>>
>> The process gets as far as creating some files and directories in the
>> directory I'm trying to configure as a local storage domain.
>>
>> I notice in the UI, the format of the domain is specified as v4.  (This
>> option is greyed out and I cannot modify it).
>>
>> This is an excerpt of the engine.log, with the error the first line of
>> which seems to be indicating its trying to use a v5 format .
>>
>>
>> 019-08-07 23:21:24,618+01 WARN
>> [org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand]
>> (default task-56) [67e87701] Validation of action
>> 'AttachStorageDomainToPool' failed for user SYSTEM. Reasons:
>> VAR__TYPE__STORAGE__DOMAIN,VAR__ACTION__ATTACH,ACTION_TYPE_FAILED_STORAGE_DOMAIN_FORMAT_ILLEGAL,$storageFormat
>> V5
>> 2019-08-07 23:21:24,620+01 INFO
>> [org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand]
>> (default task-56) [67e87701] Lock freed to object
>> 'EngineLock:{exclusiveLocks='[f2858a80-4730-40f8-b417-50d65503dcae=STORAGE]',
>> sharedLocks=''}'
>> 2019-08-07 23:21:24,623+01 INFO
>> [org.ovirt.engine.core.bll.CommandCompensator] (default task-56) [67e87701]
>> Command [id=abb92524-f71a-48fa-bf49-bae1f8a80989]: Compensating
>> DELETED_OR_UPDATED_ENTITY of
>> org.ovirt.engine.core.common.businessentities.StorageDomainDynamic;
>> snapshot: id=f2858a80-4730-40f8-b417-50d65503dcae.
>> 2019-08-07 23:21:24,626+01 INFO
>> [org.ovirt.engine.core.bll.CommandCompensator] (default task-56) [67e87701]
>> Command [id=abb92524-f71a-48fa-bf49-bae1f8a80989]: Compensating
>> NEW_ENTITY_ID of
>> org.ovirt.engine.core.common.businessentities.profiles.DiskProfile;
>> snapshot: 94c8fb2e-e17a-46dd-a859-f3e18d9b3de7.
>> 2019-08-07 23:21:24,627+01 INFO
>> [org.ovirt.engine.core.bll.CommandCompensator] (default task-56) [67e87701]
>> Command [id=abb92524-f71a-48fa-bf49-bae1f8a80989]: Compensating
>> NEW_ENTITY_ID of
>> org.ovirt.engine.core.common.businessentities.StorageDomainDynamic;
>> snapshot: f2858a80-4730-40f8-b417-50d65503dcae.
>> 2019-08-07 23:21:24,627+01 INFO
>> [org.ovirt.engine.core.bll.CommandCompensator] (default task-56) [67e87701]
>> Command [id=abb92524-f71a-48fa-bf49-bae1f8a80989]: Compensating
>> NEW_ENTITY_ID of
>> org.ovirt.engine.core.common.businessentities.StorageDomainStatic;
>> snapshot: f2858a80-4730-40f8-b417-50d65503dcae.
>> 2019-08-07 23:21:24,635+01 ERROR
>> [org.ovirt.engine.core.bll.storage.domain.AddLocalStorageDomainCommand]
>> (default task-56) [67e87701] Transaction rolled-back for command
>> 'org.ovirt.engine.core.bll.storage.domain.AddLocalStorageDomainCommand'.
>> 2019-08-07 23:21:24,639+01 ERROR
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (default task-56) [67e87701] EVENT_ID: USER_ADD_STORAGE_DOMAIN_FAILED(957),
>> Failed to add Storage Domain STG01. (User: admin@internal-authz)
>>
>> Thanks all.
>> Barman.
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/EMNQLWPQ2BVV2E3B5MASZF6JUQPQ62PQ/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z5TZHW6XOR6TQ42NV63AT4CRC36MKSPG/


[ovirt-users] Re: Problem with cloud-init (metrics install)

2019-08-08 Thread Chris Adams
I'm following this guide:

https://ovirt.org/documentation/metrics-install-guide/Installing_Metrics_Store.html

Specifically, the step "Setup virtual machine static IP and Mac
address".  The deploy does a bunch of stuff automatically, so the first
opportunity I have to do anything is after the VM is already booted.

It seems that having a DHCP server, with matching reverse/forward DNS
entries for each IP, is a requirement, and that there's not a way to set
the metrics store VM to a static IP (despite having to have a DNS entry
pointing to an IP).

Once upon a time, Liran Rotenberg  said:
> Hi Chris,
> Run Once option is different from normal run.
> For cloud-init you will need the following prerequisite:
> A sealed VM, for example if you wish to create a template:
> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html/virtual_machine_management_guide/chap-templates#Sealing_Virtual_Machines_in_Preparation_for_Deployment_as_Templates
> Cloud-init service should be installed and enabled on the VM (make
> sure before sealing the VM).
> Run will consume the cloud-init configuration only if it is the VM's first 
> run.
> 
> Regards,
> Liran.
> 
> On Thu, Aug 8, 2019 at 4:07 PM Chris Adams  wrote:
> >
> > I am trying to set up the oVirt Metrics Store, which uses cloud-init for
> > network settings, so I set the info under the "Initial Run" tab.
> > However, it doesn't seem to actually apply the network settings unless I
> > "run once" and enable clout-init there.
> >
> > I haven't used cloud-init before (been on my to-do list to check out) -
> > am I missing something?
> >
> > --
> > Chris Adams 
> > ___
> > Users mailing list -- users@ovirt.org
> > To unsubscribe send an email to users-le...@ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct: 
> > https://www.ovirt.org/community/about/community-guidelines/
> > List Archives: 
> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/UKBKKLQQBFDNSVEIKETOD5GQPVVX2LBT/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TAATB64XMPFMBV3TGO6BZOQ3RNGX7Q6A/

-- 
Chris Adams 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PKZJG2AXVMWDXIT4R65DQ2BJI3OZF3OQ/


[ovirt-users] Re: ovirt-web-ui-1.5.3: immediate logout in VM portal

2019-08-08 Thread Scott Dickerson
From the browser console log:
"09:54:32.004  debug  http GET[7] -> url:
"/ovirt-engine/api/options/UserSessionTimeOutInterval", headers:
{"Accept":"application/json","Authorization":"*","Accept-Language":"en_US","Filter":true}
transport.js:74:9"
"09:54:32.141  debug  Reducing action:
{"type":"SET_USER_SESSION_TIMEOUT_INTERVAL","payload":{"userSessionTimeoutInterval":-1}}
utils.js:48:13"

Your engine "UserSessionTimeOutInterval" is set to -1.  VM Portal is
interpreting this as "auto-logout a second ago" instead of "do not
auto-logout".

The simple fix is to set that value to something >0 in your engine configs.
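
For reference, this can be done with engine-config on the engine machine (a
hedged sketch; the value is interpreted as minutes, and the engine needs a
restart to pick it up):

engine-config -s UserSessionTimeOutInterval=30
systemctl restart ovirt-engine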

I filed https://github.com/oVirt/ovirt-web-ui/issues/1085 to account for a
-1 value properly in VM Portal.


On Thu, Aug 8, 2019 at 4:07 AM Matthias Leopold <
matthias.leop...@meduniwien.ac.at> wrote:

>
>
> On 08.08.19 at 07:49, Scott Dickerson wrote:
> >
> >
> > On Wed, Aug 7, 2019 at 11:06 AM Sharon Gratch wrote:
> >
> > Hi,
> > @Scott Dickerson, the session logout
> > issue for VM portal 1.5.3 was handled in the following PRs:
> > https://github.com/oVirt/ovirt-web-ui/pull/1014
> > https://github.com/oVirt/ovirt-web-ui/pull/1025
> >
> > Any idea on what can be the problem?
> >
> >
> > That is very strange.  We saw a problem similar to that where, when
> > web-ui is starting up, the time it took for the app to fetch the
> > "UserSessionTimeOutInterval" config value was longer than the time it
> > took to load the auto-logout component.  In that case the value was
> > considered to be 0 and auto logged the user out right away.  That issue
> > was dealt with in PR 1025 and the whole login data load process was
> > synchronized properly in PR 1049.
> >
> > I need some additional info:
> >- The browser console logs from when the page loads to when they're
> > logged out
> >- the "yum info ovirt-web-ui"
> >
> > I'll be able to better triage the problem with that info.
> >
>
> Thanks to all for replies. I sent the requested info directly to Scott
> Dickerson.
>
> Matthias
>


-- 
Scott Dickerson
Senior Software Engineer
RHV-M Engineering - UX Team
Red Hat, Inc
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3KEXAGYAUZ6ROTDODD2X2T3IRMAPPCZA/


[ovirt-users] Re: Problem with cloud-init (metrics install)

2019-08-08 Thread Liran Rotenberg
Hi Chris,
The Run Once option is different from a normal run.
For cloud-init you will need the following prerequisite:
a sealed VM, for example if you wish to create a template:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html/virtual_machine_management_guide/chap-templates#Sealing_Virtual_Machines_in_Preparation_for_Deployment_as_Templates
The cloud-init service should be installed and enabled on the VM (make
sure of this before sealing the VM).
A normal Run will consume the cloud-init configuration only if it is the
VM's first run.
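
For reference, sealing along these lines is typical before the first
cloud-init run (a hedged sketch, run inside the guest; exact steps vary by
distribution):

# make cloud-init run on the next boot and forget any previous run
systemctl enable cloud-init
cloud-init clean

# generic sealing steps before shutting the VM down
rm -f /etc/ssh/ssh_host_*
truncate -s 0 /etc/machine-id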

Regards,
Liran.

On Thu, Aug 8, 2019 at 4:07 PM Chris Adams  wrote:
>
> I am trying to set up the oVirt Metrics Store, which uses cloud-init for
> network settings, so I set the info under the "Initial Run" tab.
> However, it doesn't seem to actually apply the network settings unless I
> "run once" and enable clout-init there.
>
> I haven't used cloud-init before (been on my to-do list to check out) -
> am I missing something?
>
> --
> Chris Adams 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UKBKKLQQBFDNSVEIKETOD5GQPVVX2LBT/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TAATB64XMPFMBV3TGO6BZOQ3RNGX7Q6A/


[ovirt-users] Problem with cloud-init (metrics install)

2019-08-08 Thread Chris Adams
I am trying to set up the oVirt Metrics Store, which uses cloud-init for
network settings, so I set the info under the "Initial Run" tab.
However, it doesn't seem to actually apply the network settings unless I
"run once" and enable clout-init there.

I haven't used cloud-init before (been on my to-do list to check out) -
am I missing something?

-- 
Chris Adams 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UKBKKLQQBFDNSVEIKETOD5GQPVVX2LBT/


[ovirt-users] Re: Error creating local storage domain: Internal Engine Error.

2019-08-08 Thread Shani Leviim
Hi Barman,
Can you please attach a full engine log?
Also attaching a screenshot would be great.


*Regards,*

*Shani Leviim*


On Thu, Aug 8, 2019 at 2:44 PM  wrote:

> Hello.
>
> I'm new to ovirt and trying to set up a sandbox on an old Dell workstation
> I have.  Any help greatly appreciated.
>
> I have created a 4.2 Compatible DC and Cluster.  I'm able to add the host
> and that checks in OK.
> It's an older system, hence going with 4.2 for processor support.
>
> When I try to create a local storage domain, it fails.
>
> The error returned to the screen is :  Error while executing action New
> Local Storage Domain: Internal Engine Error
>
> The process gets as far as creating some files and directories in the
> directory I'm trying to configure as a local storage domain.
>
> I notice in the UI, the format of the domain is specified as v4.  (This
> option is greyed out and I cannot modify it).
>
> This is an excerpt of the engine.log, with the error the first line of
> which seems to be indicating its trying to use a v5 format .
>
>
> 019-08-07 23:21:24,618+01 WARN
> [org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand]
> (default task-56) [67e87701] Validation of action
> 'AttachStorageDomainToPool' failed for user SYSTEM. Reasons:
> VAR__TYPE__STORAGE__DOMAIN,VAR__ACTION__ATTACH,ACTION_TYPE_FAILED_STORAGE_DOMAIN_FORMAT_ILLEGAL,$storageFormat
> V5
> 2019-08-07 23:21:24,620+01 INFO
> [org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand]
> (default task-56) [67e87701] Lock freed to object
> 'EngineLock:{exclusiveLocks='[f2858a80-4730-40f8-b417-50d65503dcae=STORAGE]',
> sharedLocks=''}'
> 2019-08-07 23:21:24,623+01 INFO
> [org.ovirt.engine.core.bll.CommandCompensator] (default task-56) [67e87701]
> Command [id=abb92524-f71a-48fa-bf49-bae1f8a80989]: Compensating
> DELETED_OR_UPDATED_ENTITY of
> org.ovirt.engine.core.common.businessentities.StorageDomainDynamic;
> snapshot: id=f2858a80-4730-40f8-b417-50d65503dcae.
> 2019-08-07 23:21:24,626+01 INFO
> [org.ovirt.engine.core.bll.CommandCompensator] (default task-56) [67e87701]
> Command [id=abb92524-f71a-48fa-bf49-bae1f8a80989]: Compensating
> NEW_ENTITY_ID of
> org.ovirt.engine.core.common.businessentities.profiles.DiskProfile;
> snapshot: 94c8fb2e-e17a-46dd-a859-f3e18d9b3de7.
> 2019-08-07 23:21:24,627+01 INFO
> [org.ovirt.engine.core.bll.CommandCompensator] (default task-56) [67e87701]
> Command [id=abb92524-f71a-48fa-bf49-bae1f8a80989]: Compensating
> NEW_ENTITY_ID of
> org.ovirt.engine.core.common.businessentities.StorageDomainDynamic;
> snapshot: f2858a80-4730-40f8-b417-50d65503dcae.
> 2019-08-07 23:21:24,627+01 INFO
> [org.ovirt.engine.core.bll.CommandCompensator] (default task-56) [67e87701]
> Command [id=abb92524-f71a-48fa-bf49-bae1f8a80989]: Compensating
> NEW_ENTITY_ID of
> org.ovirt.engine.core.common.businessentities.StorageDomainStatic;
> snapshot: f2858a80-4730-40f8-b417-50d65503dcae.
> 2019-08-07 23:21:24,635+01 ERROR
> [org.ovirt.engine.core.bll.storage.domain.AddLocalStorageDomainCommand]
> (default task-56) [67e87701] Transaction rolled-back for command
> 'org.ovirt.engine.core.bll.storage.domain.AddLocalStorageDomainCommand'.
> 2019-08-07 23:21:24,639+01 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-56) [67e87701] EVENT_ID: USER_ADD_STORAGE_DOMAIN_FAILED(957),
> Failed to add Storage Domain STG01. (User: admin@internal-authz)
>
> Thanks all.
> Barman.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/EMNQLWPQ2BVV2E3B5MASZF6JUQPQ62PQ/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Q3TWHKKRKUZEKJ3WTEIMCG5LHU3T2SGN/


[ovirt-users] Re: Agentless backup solutions

2019-08-08 Thread Strahil
I had corruption of the Hosted Engine due to a power failure of all hosts
(Gluster).
My lab has no UPS.

You can create a snapshot via the API, back up the snapshot and then delete it
via the API while the VM is working (no downtime).

Still, this approach won't work for databases, as the snapshot can be taken
in the middle of a transaction.
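
For reference, that round-trip can be scripted against the REST API (a hedged
sketch; ENGINE_FQDN, the credentials, VM_ID and SNAPSHOT_ID are placeholders):

# create a live snapshot without memory state
curl -s -k -u 'admin@internal:PASSWORD' -H 'Content-Type: application/xml' \
  -d '<snapshot><description>backup</description><persist_memorystate>false</persist_memorystate></snapshot>' \
  'https://ENGINE_FQDN/ovirt-engine/api/vms/VM_ID/snapshots'

# after backing up the snapshot disks, remove it again
curl -s -k -u 'admin@internal:PASSWORD' -X DELETE \
  'https://ENGINE_FQDN/ovirt-engine/api/vms/VM_ID/snapshots/SNAPSHOT_ID'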

Best Regards,
Strahil Nikolov

On Aug 7, 2019 20:33, Douglas Duckworth wrote:
>
> Hi
>
>
> We are running oVirt 4.2.8.2-1.el7.  Should probably upgrade but it works.
>
>
> We are backing up the engine every day with dump going to external NFS file 
> system then onto the cloud.  For VMs we are doing backups within Linux itself 
> using a program called Restic which then sends data to cloud S3 service.  
> That runs daily as well.
>
>
> We also save all configuration data, for applications running on our VMs such 
> as Apache, etc, within Ansible.   So we can quickly recreate the VM using 
> Ansible, along with any applications, then restore any data, not saved in 
> Ansible, such as private PKI keys or PostgreSQL dump, for example, from 
> Restic.  Dockerized applications even easier.  There would be some downtime 
> to redeploy a new VM but this is acceptable given the constrains of our 
> environment.
>
>
> I am wondering under what situations anyone has experienced VM corruption?  
> This would help me determine if more effort should be invested in 
> snapshotting VMs and possibly exporting their disks.  Though as I recall 
> removing snapshots from my storage domain would require shutting down the VM, 
> right?
>
>
> -- 
> Thanks,
>
> Douglas Duckworth, MSc, LFCS
> HPC System Administrator
> Scientific Computing Unit
> https://github.com/restic/restic
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AIQCZFDOBLNGC3JGYKPXKFQ2T5MZ6CJ5/


[ovirt-users] Error creating local storage domain: Internal Engine Error.

2019-08-08 Thread christian_barr
Hello.

I'm new to ovirt and trying to set up a sandbox on an old Dell workstation I 
have.  Any help greatly appreciated.

I have created a 4.2 Compatible DC and Cluster.  I'm able to add the host and 
that checks in OK.
It's an older system, hence going with 4.2 for processor support.

When I try to create a local storage domain, it fails.

The error returned to the screen is :  Error while executing action New Local 
Storage Domain: Internal Engine Error

The process gets as far as creating some files and directories in the directory 
I'm trying to configure as a local storage domain.

I notice in the UI, the format of the domain is specified as v4.  (This option 
is greyed out and I cannot modify it).

This is an excerpt of the engine.log, with the error the first line of which 
seems to be indicating its trying to use a v5 format .


019-08-07 23:21:24,618+01 WARN  
[org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand] 
(default task-56) [67e87701] Validation of action 'AttachStorageDomainToPool' 
failed for user SYSTEM. Reasons: 
VAR__TYPE__STORAGE__DOMAIN,VAR__ACTION__ATTACH,ACTION_TYPE_FAILED_STORAGE_DOMAIN_FORMAT_ILLEGAL,$storageFormat
 V5
2019-08-07 23:21:24,620+01 INFO  
[org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand] 
(default task-56) [67e87701] Lock freed to object 
'EngineLock:{exclusiveLocks='[f2858a80-4730-40f8-b417-50d65503dcae=STORAGE]', 
sharedLocks=''}'
2019-08-07 23:21:24,623+01 INFO  [org.ovirt.engine.core.bll.CommandCompensator] 
(default task-56) [67e87701] Command [id=abb92524-f71a-48fa-bf49-bae1f8a80989]: 
Compensating DELETED_OR_UPDATED_ENTITY of 
org.ovirt.engine.core.common.businessentities.StorageDomainDynamic; snapshot: 
id=f2858a80-4730-40f8-b417-50d65503dcae.
2019-08-07 23:21:24,626+01 INFO  [org.ovirt.engine.core.bll.CommandCompensator] 
(default task-56) [67e87701] Command [id=abb92524-f71a-48fa-bf49-bae1f8a80989]: 
Compensating NEW_ENTITY_ID of 
org.ovirt.engine.core.common.businessentities.profiles.DiskProfile; snapshot: 
94c8fb2e-e17a-46dd-a859-f3e18d9b3de7.
2019-08-07 23:21:24,627+01 INFO  [org.ovirt.engine.core.bll.CommandCompensator] 
(default task-56) [67e87701] Command [id=abb92524-f71a-48fa-bf49-bae1f8a80989]: 
Compensating NEW_ENTITY_ID of 
org.ovirt.engine.core.common.businessentities.StorageDomainDynamic; snapshot: 
f2858a80-4730-40f8-b417-50d65503dcae.
2019-08-07 23:21:24,627+01 INFO  [org.ovirt.engine.core.bll.CommandCompensator] 
(default task-56) [67e87701] Command [id=abb92524-f71a-48fa-bf49-bae1f8a80989]: 
Compensating NEW_ENTITY_ID of 
org.ovirt.engine.core.common.businessentities.StorageDomainStatic; snapshot: 
f2858a80-4730-40f8-b417-50d65503dcae.
2019-08-07 23:21:24,635+01 ERROR 
[org.ovirt.engine.core.bll.storage.domain.AddLocalStorageDomainCommand] 
(default task-56) [67e87701] Transaction rolled-back for command 
'org.ovirt.engine.core.bll.storage.domain.AddLocalStorageDomainCommand'.
2019-08-07 23:21:24,639+01 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default 
task-56) [67e87701] EVENT_ID: USER_ADD_STORAGE_DOMAIN_FAILED(957), Failed to 
add Storage Domain STG01. (User: admin@internal-authz)

Thanks all.
Barman.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EMNQLWPQ2BVV2E3B5MASZF6JUQPQ62PQ/


[ovirt-users] Re: oVirt 4.3.5 potential issue with NFS storage

2019-08-08 Thread Benny Zlotnik
This means vdsm lost connectivity to the storage, but it also looks like it
recovered eventually.
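
For reference, the read path vdsm is checking can be probed by hand from the
host (a hedged sketch mirroring the monitor's direct read; the path is taken
from the log below):

time dd if=/rhev/data-center/mnt/10.210.13.64:_ovirt__production/6effda5e-1a0d-4312-bf93-d97fa9eb5aee/dom_md/metadata \
        of=/dev/null bs=4096 count=1 iflag=direct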

On Thu, Aug 8, 2019 at 12:26 PM Vrgotic, Marko 
wrote:

> Another one that seem to be related:
>
>
>
> 2019-08-07 14:43:59,069-0700 ERROR (check/loop) [storage.Monitor] Error
> checking path 
> /rhev/data-center/mnt/10.210.13.64:_ovirt__production/6effda5e-1a0d-4312-bf93-d97fa9eb5aee/dom_md/metadata
> (monitor:499)
>
> Traceback (most recent call last):
>
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
> 497, in _pathChecked
>
> delay = result.delay()
>
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/check.py", line 391,
> in delay
>
> raise exception.MiscFileReadException(self.path, self.rc, self.err)
>
> MiscFileReadException: Internal file read failure:
> (u'/rhev/data-center/mnt/10.210.13.64:_ovirt__production/6effda5e-1a0d-4312-bf93-d97fa9eb5aee/dom_md/metadata',
> 1, 'Read timeout')
>
> 2019-08-07 14:43:59,116-0700 WARN  (monitor/6effda5) [storage.Monitor]
> Host id for domain 6effda5e-1a0d-4312-bf93-d97fa9eb5aee was released (id:
> 1) (monitor:445)
>
>
>
> *From: *"Vrgotic, Marko" 
> *Date: *Wednesday, 7 August 2019 at 09:50
> *To: *"users@ovirt.org" 
> *Subject: *Re: oVirt 4.3.5 potential issue with NFS storage
>
>
>
> Log line form VDSM:
>
>
>
> “[root@ovirt-sj-05 ~]# tail -f /var/log/vdsm/vdsm.log | grep WARN
>
> 2019-08-07 09:40:03,556-0700 WARN  (check/loop) [storage.check] Checker
> u'/rhev/data-center/mnt/10.210.13.64:_ovirt__production/bda97276-a399-448f-9113-017972f6b55a/dom_md/metadata'
> is blocked for 20.00 seconds (check:282)
>
> 2019-08-07 09:40:47,132-0700 WARN  (monitor/bda9727) [storage.Monitor]
> Host id for domain bda97276-a399-448f-9113-017972f6b55a was released (id:
> 5) (monitor:445)
>
> 2019-08-07 09:44:53,564-0700 WARN  (check/loop) [storage.check] Checker
> u'/rhev/data-center/mnt/10.210.13.64:_ovirt__production/bda97276-a399-448f-9113-017972f6b55a/dom_md/metadata'
> is blocked for 20.00 seconds (check:282)
>
> 2019-08-07 09:46:38,604-0700 WARN  (monitor/bda9727) [storage.Monitor]
> Host id for domain bda97276-a399-448f-9113-017972f6b55a was released (id:
> 5) (monitor:445)”
>
>
>
>
>
>
>
> *From: *"Vrgotic, Marko" 
> *Date: *Wednesday, 7 August 2019 at 09:09
> *To: *"users@ovirt.org" 
> *Subject: *oVirt 4.3.5 potential issue with NFS storage
>
>
>
> Dear oVIrt,
>
>
>
> This is my third oVirt platform in the company, but this is the first time I
> am seeing the following logs:
>
>
>
> “2019-08-07 16:00:16,099Z INFO
> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-51) [1b85e637] Lock freed
> to object
> 'EngineLock:{exclusiveLocks='[2350ee82-94ed-4f90-9366-451e0104d1d6=PROVIDER]',
> sharedLocks=''}'
>
> 2019-08-07 16:00:25,618Z WARN
> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy]
> (EE-ManagedThreadFactory-engine-Thread-37723) [] domain
> 'bda97276-a399-448f-9113-017972f6b55a:ovirt_production' in problem
> 'PROBLEMATIC'. vds: 'ovirt-sj-05.ictv.com'
>
> 2019-08-07 16:00:40,630Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy]
> (EE-ManagedThreadFactory-engine-Thread-37735) [] Domain
> 'bda97276-a399-448f-9113-017972f6b55a:ovirt_production' recovered from
> problem. vds: 'ovirt-sj-05.ictv.com'
>
> 2019-08-07 16:00:40,652Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy]
> (EE-ManagedThreadFactory-engine-Thread-37737) [] Domain
> 'bda97276-a399-448f-9113-017972f6b55a:ovirt_production' recovered from
> problem. vds: 'ovirt-sj-01.ictv.com'
>
> 2019-08-07 16:00:40,652Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy]
> (EE-ManagedThreadFactory-engine-Thread-37737) [] Domain
> 'bda97276-a399-448f-9113-017972f6b55a:ovirt_production' has recovered from
> problem. No active host in the DC is reporting it as problematic, so
> clearing the domain recovery timer.”
>
>
>
> Can you help me understand why this is being reported?
>
>
>
> This setup is:
>
>
>
> 5 hosts, 3 in HA
>
> SelfHostedEngine
>
> Version 4.3.5
>
> NFS based Netapp storage, version 4.1
>
> “10.210.13.64:/ovirt_hosted_engine on 
> /rhev/data-center/mnt/10.210.13.64:_ovirt__hosted__engine
> type nfs4
> (rw,relatime,vers=4.1,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=10.210.11.14,local_lock=none,addr=10.210.13.64)
>
>
>
> 10.210.13.64:/ovirt_production on 
> /rhev/data-center/mnt/10.210.13.64:_ovirt__production
> type nfs4
> (rw,relatime,vers=4.1,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=10.210.11.14,local_lock=none,addr=10.210.13.64)
>
> tmpfs on /run/user/0 type tmpfs
> (rw,nosuid,nodev,relatime,seclabel,size=9878396k,mode=700)”
>
>
>
> First mount is SHE dedicated storage.
>
> Second mount “ovirt_production” is for other VM guests.
>
>
>
> Kindly awaiting your reply.
>
>
>
> Marko Vrgotic
> ___
> Users mailing list -- 

[ovirt-users] Re: Ovirt 4.3.5.4-1.el7 noVNC keeps disconnecting with 1006

2019-08-08 Thread Ryan Barry
On Thu, Aug 8, 2019 at 4:26 AM Sandro Bonazzola  wrote:
>
>
>
> Il giorno dom 4 ago 2019 alle ore 16:11 Strahil Nikolov 
>  ha scritto:
>>
>> Hello Community,
>>
>> did anyone experience disconnects after a minute or 2 (seems random, but I
>> will check it out) with error code 1006?
>> Can someone with noVNC reproduce that behaviour?
>>
>> As I manage to connect, it seems strange to me to lose connection like
>> that. The VM was not migrated - so it should be something else.
>

Can you please post firewall details and console.vv file? It should
not be possible to connect with the firewall in the way, but I would
wonder if something is changing it from behind

>
> @Ryan Barry , @Michal Skrivanek any clue?
>
>>
>>
>> Best Regards,
>> Strahil Nikolov
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UVBK5NWKAHXH2KREVRSVES3U75ZDQ34L/
>
>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA
>
> sbona...@redhat.com
>
> Red Hat respects your work life balance. Therefore there is no need to answer 
> this email out of your office hours.



-- 

Ryan Barry

Associate Manager - RHV Virt/SLA

rba...@redhat.comM: +16518159306 IM: rbarry
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YONQ3MYAIGPKFJJE3BNXABI3FDX6VWSL/


[ovirt-users] oVirt 4.3.5 potential issue with NFS storage

2019-08-08 Thread Vrgotic, Marko
Dear oVIrt,

This is my third oVirt platform in the company, but this is the first time I am
seeing the following logs:

“2019-08-07 16:00:16,099Z INFO  
[org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-51) [1b85e637] Lock freed to 
object 
'EngineLock:{exclusiveLocks='[2350ee82-94ed-4f90-9366-451e0104d1d6=PROVIDER]', 
sharedLocks=''}'
2019-08-07 16:00:25,618Z WARN  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] 
(EE-ManagedThreadFactory-engine-Thread-37723) [] domain 
'bda97276-a399-448f-9113-017972f6b55a:ovirt_production' in problem 
'PROBLEMATIC'. vds: 'ovirt-sj-05.ictv.com'
2019-08-07 16:00:40,630Z INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] 
(EE-ManagedThreadFactory-engine-Thread-37735) [] Domain 
'bda97276-a399-448f-9113-017972f6b55a:ovirt_production' recovered from problem. 
vds: 'ovirt-sj-05.ictv.com'
2019-08-07 16:00:40,652Z INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] 
(EE-ManagedThreadFactory-engine-Thread-37737) [] Domain 
'bda97276-a399-448f-9113-017972f6b55a:ovirt_production' recovered from problem. 
vds: 'ovirt-sj-01.ictv.com'
2019-08-07 16:00:40,652Z INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] 
(EE-ManagedThreadFactory-engine-Thread-37737) [] Domain 
'bda97276-a399-448f-9113-017972f6b55a:ovirt_production' has recovered from 
problem. No active host in the DC is reporting it as problematic, so clearing 
the domain recovery timer.”

Can you help me understand why this is being reported?

This setup is:

5 hosts, 3 in HA
SelfHostedEngine
Version 4.3.5
NFS based Netapp storage, version 4.1
“10.210.13.64:/ovirt_hosted_engine on 
/rhev/data-center/mnt/10.210.13.64:_ovirt__hosted__engine type nfs4 
(rw,relatime,vers=4.1,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=10.210.11.14,local_lock=none,addr=10.210.13.64)

10.210.13.64:/ovirt_production on 
/rhev/data-center/mnt/10.210.13.64:_ovirt__production type nfs4 
(rw,relatime,vers=4.1,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=10.210.11.14,local_lock=none,addr=10.210.13.64)
tmpfs on /run/user/0 type tmpfs 
(rw,nosuid,nodev,relatime,seclabel,size=9878396k,mode=700)”

First mount is SHE dedicated storage.
Second mount “ovirt_production” is for other VM guests.

Kindly awaiting your reply.

Marko Vrgotic
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7VSZHQCIGUXTSM4WHCVDLXGKNYRXJPBV/


[ovirt-users] Re: [ANN] oVirt 4.3.6 Second Release Candidate is now available for testing

2019-08-08 Thread Maton, Brett
Oops, package not signed, update to disable gpgcheck...

cat /etc/yum.repos.d/ov4.3-fix.repo
[ovirt-4.3-fix]
name=oVirt 4.3 Pre-Release Fix CentOS 7.7
baseurl=https://buildlogs.centos.org/centos/7/virt/x86_64/ovirt-4.3/
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-ovirt-4.3

exclude=python2-sanlock

On Thu, 8 Aug 2019 at 10:27, Sandro Bonazzola  wrote:

>
>
> Il giorno gio 8 ago 2019 alle ore 11:20 Maton, Brett <
> mat...@ltresources.co.uk> ha scritto:
>
>> Sure, it seems to be running now.
>>
>> For anyone else with this issue, I ended up with this addtional repo file:
>>
>> cat /etc/yum.repos.d/ov4.3-fix.repo
>> [ovirt-4.3-fix]
>> name=oVirt 4.3 Pre-Release Fix CentOS 7.7
>> baseurl=https://buildlogs.centos.org/centos/7/virt/x86_64/ovirt-4.3/
>> enabled=1
>> gpgcheck=1
>> gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-ovirt-4.3
>>
>> exclude=python2-sanlock
>>
>>
> thanks, adding it to release notes for 4.3.6 rc2 here:
> https://github.com/oVirt/ovirt-site/pull/2067
>
>
>
>>
>> On Thu, 8 Aug 2019 at 09:55, Sandro Bonazzola 
>> wrote:
>>
>>>
>>>
>>> Il giorno gio 8 ago 2019 alle ore 10:37 Maton, Brett <
>>> mat...@ltresources.co.uk> ha scritto:
>>>
 Thanks Sandro,

Ran into another dependency issue though:

 Resolving Dependencies
 --> Running transaction check
 ---> Package ovirt-release43-pre.noarch 0:4.3.6-0.1.rc1.el7 will be
 updated
 ---> Package ovirt-release43-pre.noarch 0:4.3.6-0.2.rc2.el7 will be an
 update
 ---> Package python2-sanlock.x86_64 0:3.7.1-1.el7 will be obsoleting
 --> Processing Dependency: sanlock-lib = 3.7.1-1.el7 for package:
 python2-sanlock-3.7.1-1.el7.x86_64
 ---> Package sanlock.x86_64 0:3.6.0-1.el7 will be updated
 ---> Package sanlock.x86_64 0:3.7.3-1.el7 will be an update
 ---> Package sanlock-lib.x86_64 0:3.6.0-1.el7 will be updated
 ---> Package sanlock-lib.x86_64 0:3.7.3-1.el7 will be an update
 ---> Package sanlock-python.x86_64 0:3.6.0-1.el7 will be obsoleted
 ---> Package vdsm.x86_64 0:4.30.25-1.el7 will be updated
 ---> Package vdsm.x86_64 0:4.30.26-1.el7 will be an update
 --> Processing Dependency: sanlock-python >= 3.7.3 for package:
 vdsm-4.30.26-1.el7.x86_64
 ---> Package vdsm-api.noarch 0:4.30.25-1.el7 will be updated
 ---> Package vdsm-api.noarch 0:4.30.26-1.el7 will be an update
 ---> Package vdsm-client.noarch 0:4.30.25-1.el7 will be updated
 ---> Package vdsm-client.noarch 0:4.30.26-1.el7 will be an update
 ---> Package vdsm-common.noarch 0:4.30.25-1.el7 will be updated
 ---> Package vdsm-common.noarch 0:4.30.26-1.el7 will be an update
 ---> Package vdsm-gluster.x86_64 0:4.30.25-1.el7 will be updated
 ---> Package vdsm-gluster.x86_64 0:4.30.26-1.el7 will be an update
 ---> Package vdsm-hook-ethtool-options.noarch 0:4.30.25-1.el7 will be
 updated
 ---> Package vdsm-hook-ethtool-options.noarch 0:4.30.26-1.el7 will be
 an update
 ---> Package vdsm-hook-fcoe.noarch 0:4.30.25-1.el7 will be updated
 ---> Package vdsm-hook-fcoe.noarch 0:4.30.26-1.el7 will be an update
 ---> Package vdsm-hook-openstacknet.noarch 0:4.30.25-1.el7 will be
 updated
 ---> Package vdsm-hook-openstacknet.noarch 0:4.30.26-1.el7 will be an
 update
 ---> Package vdsm-hook-vhostmd.noarch 0:4.30.25-1.el7 will be updated
 ---> Package vdsm-hook-vhostmd.noarch 0:4.30.26-1.el7 will be an update
 ---> Package vdsm-hook-vmfex-dev.noarch 0:4.30.25-1.el7 will be updated
 ---> Package vdsm-hook-vmfex-dev.noarch 0:4.30.26-1.el7 will be an
 update
 ---> Package vdsm-http.noarch 0:4.30.25-1.el7 will be updated
 ---> Package vdsm-http.noarch 0:4.30.26-1.el7 will be an update
 ---> Package vdsm-jsonrpc.noarch 0:4.30.25-1.el7 will be updated
 ---> Package vdsm-jsonrpc.noarch 0:4.30.26-1.el7 will be an update
 ---> Package vdsm-network.x86_64 0:4.30.25-1.el7 will be updated
 ---> Package vdsm-network.x86_64 0:4.30.26-1.el7 will be an update
 ---> Package vdsm-python.noarch 0:4.30.25-1.el7 will be updated
 ---> Package vdsm-python.noarch 0:4.30.26-1.el7 will be an update
 ---> Package vdsm-yajsonrpc.noarch 0:4.30.25-1.el7 will be updated
 ---> Package vdsm-yajsonrpc.noarch 0:4.30.26-1.el7 will be an update
 --> Running transaction check
 ---> Package python2-sanlock.x86_64 0:3.7.1-1.el7 will be obsoleting
 --> Processing Dependency: sanlock-lib = 3.7.1-1.el7 for package:
 python2-sanlock-3.7.1-1.el7.x86_64
 ---> Package sanlock-python.x86_64 0:3.6.0-1.el7 will be updated
 ---> Package sanlock-python.x86_64 0:3.7.3-1.el7 will be an update
 --> Finished Dependency Resolution
 Error: Package: python2-sanlock-3.7.1-1.el7.x86_64 (ovirt-4.3-fix)

>>>
>>> this is weird, https://cbs.centos.org/koji/buildinfo?buildID=25776
>>> 3.7.1-1 shouldn't be there, it's not tagged for testing.
>>> can you please exclude python2-sanlock in your repo file?
>>>
>>> 

[ovirt-users] Re: oVirt 4.3.5.1 failed to configure management network on the host

2019-08-08 Thread Strahil
LACP works with 2 switches, but if you wish to aggregate all links across 
both switches you need switch support (high-end hardware).
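
For example, an active-backup bond (mode 1) needs no switch cooperation at
all and still survives losing a switch; a minimal nmcli sketch, where the
interface names eno1/eno2 are just examples:

nmcli con add type bond con-name bond0 ifname bond0 mode active-backup
nmcli con add type bond-slave ifname eno1 master bond0
nmcli con add type bond-slave ifname eno2 master bond0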

Best Regards,
Strahil Nikolov

On Aug 6, 2019 18:08, Vincent Royer wrote:
>
> I also am spanned over two switches.  You can use bonding, you just can't use 
> 802.3ad (LACP) mode. 
>
> I have MGMT bonded to two gig switches and storage bonded to two 10g switches 
> for Gluster. Each switch has its own fw/router in HA. So we can lose either 
> switch, either router, or any single interface or cable without interruption. 
>  
>
> On Tue, Aug 6, 2019, 12:33 AM Mitja Pirih  wrote:
>>
>> On 05. 08. 2019 21:20, Vincent Royer wrote:
>> > I tried deployment of 4.3.5.1 using teams and it didn't work. I did
>> > get into the engine using the temp url on the host, but the teams
>> > showed up as individual nics.  Any changes made, like assigning a new
>> > logical network to the nic, failed and I lost connectivity.  
>> >
>> > Setup as a bond instead of team before deployment worked as expected,
>> > and the bonds showed up properly in the engine. 
>> >
>> > ymmv
>> >
>>
>> I can't use bonding, spanned over two switches.
>> Maybe there is another way to do it, but I am burned out, anybody with
>> an idea?
>>
>> The server has 4x 10Gbps nics. I need up to 20Gbps throughput in HA mode.
>>
>>
>> Thanks.
>>
>>
>>
>> Br,
>> Mitja
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VFGEOKJTF3SIEOLEYPHXABAAV3HYI64Z/


[ovirt-users] Re: [ANN] oVirt 4.3.6 Second Release Candidate is now available for testing

2019-08-08 Thread Sandro Bonazzola
Il giorno gio 8 ago 2019 alle ore 11:20 Maton, Brett <
mat...@ltresources.co.uk> ha scritto:

> Sure, it seems to be running now.
>
> For anyone else with this issue, I ended up with this additional repo file:
>
> cat /etc/yum.repos.d/ov4.3-fix.repo
> [ovirt-4.3-fix]
> name=oVirt 4.3 Pre-Release Fix CentOS 7.7
> baseurl=https://buildlogs.centos.org/centos/7/virt/x86_64/ovirt-4.3/
> enabled=1
> gpgcheck=1
> gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-ovirt-4.3
>
> exclude=python2-sanlock
>
>
thanks, adding it to release notes for 4.3.6 rc2 here:
https://github.com/oVirt/ovirt-site/pull/2067



>
> On Thu, 8 Aug 2019 at 09:55, Sandro Bonazzola  wrote:
>
>>
>>
>> Il giorno gio 8 ago 2019 alle ore 10:37 Maton, Brett <
>> mat...@ltresources.co.uk> ha scritto:
>>
>>> Thanks Sandro,
>>>
>>>Ran into another dependency issue though:
>>>
>>> Resolving Dependencies
>>> --> Running transaction check
>>> ---> Package ovirt-release43-pre.noarch 0:4.3.6-0.1.rc1.el7 will be
>>> updated
>>> ---> Package ovirt-release43-pre.noarch 0:4.3.6-0.2.rc2.el7 will be an
>>> update
>>> ---> Package python2-sanlock.x86_64 0:3.7.1-1.el7 will be obsoleting
>>> --> Processing Dependency: sanlock-lib = 3.7.1-1.el7 for package:
>>> python2-sanlock-3.7.1-1.el7.x86_64
>>> ---> Package sanlock.x86_64 0:3.6.0-1.el7 will be updated
>>> ---> Package sanlock.x86_64 0:3.7.3-1.el7 will be an update
>>> ---> Package sanlock-lib.x86_64 0:3.6.0-1.el7 will be updated
>>> ---> Package sanlock-lib.x86_64 0:3.7.3-1.el7 will be an update
>>> ---> Package sanlock-python.x86_64 0:3.6.0-1.el7 will be obsoleted
>>> ---> Package vdsm.x86_64 0:4.30.25-1.el7 will be updated
>>> ---> Package vdsm.x86_64 0:4.30.26-1.el7 will be an update
>>> --> Processing Dependency: sanlock-python >= 3.7.3 for package:
>>> vdsm-4.30.26-1.el7.x86_64
>>> ---> Package vdsm-api.noarch 0:4.30.25-1.el7 will be updated
>>> ---> Package vdsm-api.noarch 0:4.30.26-1.el7 will be an update
>>> ---> Package vdsm-client.noarch 0:4.30.25-1.el7 will be updated
>>> ---> Package vdsm-client.noarch 0:4.30.26-1.el7 will be an update
>>> ---> Package vdsm-common.noarch 0:4.30.25-1.el7 will be updated
>>> ---> Package vdsm-common.noarch 0:4.30.26-1.el7 will be an update
>>> ---> Package vdsm-gluster.x86_64 0:4.30.25-1.el7 will be updated
>>> ---> Package vdsm-gluster.x86_64 0:4.30.26-1.el7 will be an update
>>> ---> Package vdsm-hook-ethtool-options.noarch 0:4.30.25-1.el7 will be
>>> updated
>>> ---> Package vdsm-hook-ethtool-options.noarch 0:4.30.26-1.el7 will be an
>>> update
>>> ---> Package vdsm-hook-fcoe.noarch 0:4.30.25-1.el7 will be updated
>>> ---> Package vdsm-hook-fcoe.noarch 0:4.30.26-1.el7 will be an update
>>> ---> Package vdsm-hook-openstacknet.noarch 0:4.30.25-1.el7 will be
>>> updated
>>> ---> Package vdsm-hook-openstacknet.noarch 0:4.30.26-1.el7 will be an
>>> update
>>> ---> Package vdsm-hook-vhostmd.noarch 0:4.30.25-1.el7 will be updated
>>> ---> Package vdsm-hook-vhostmd.noarch 0:4.30.26-1.el7 will be an update
>>> ---> Package vdsm-hook-vmfex-dev.noarch 0:4.30.25-1.el7 will be updated
>>> ---> Package vdsm-hook-vmfex-dev.noarch 0:4.30.26-1.el7 will be an update
>>> ---> Package vdsm-http.noarch 0:4.30.25-1.el7 will be updated
>>> ---> Package vdsm-http.noarch 0:4.30.26-1.el7 will be an update
>>> ---> Package vdsm-jsonrpc.noarch 0:4.30.25-1.el7 will be updated
>>> ---> Package vdsm-jsonrpc.noarch 0:4.30.26-1.el7 will be an update
>>> ---> Package vdsm-network.x86_64 0:4.30.25-1.el7 will be updated
>>> ---> Package vdsm-network.x86_64 0:4.30.26-1.el7 will be an update
>>> ---> Package vdsm-python.noarch 0:4.30.25-1.el7 will be updated
>>> ---> Package vdsm-python.noarch 0:4.30.26-1.el7 will be an update
>>> ---> Package vdsm-yajsonrpc.noarch 0:4.30.25-1.el7 will be updated
>>> ---> Package vdsm-yajsonrpc.noarch 0:4.30.26-1.el7 will be an update
>>> --> Running transaction check
>>> ---> Package python2-sanlock.x86_64 0:3.7.1-1.el7 will be obsoleting
>>> --> Processing Dependency: sanlock-lib = 3.7.1-1.el7 for package:
>>> python2-sanlock-3.7.1-1.el7.x86_64
>>> ---> Package sanlock-python.x86_64 0:3.6.0-1.el7 will be updated
>>> ---> Package sanlock-python.x86_64 0:3.7.3-1.el7 will be an update
>>> --> Finished Dependency Resolution
>>> Error: Package: python2-sanlock-3.7.1-1.el7.x86_64 (ovirt-4.3-fix)
>>>
>>
>> this is weird, https://cbs.centos.org/koji/buildinfo?buildID=25776
>> 3.7.1-1 shouldn't be there, it's not tagged for testing.
>> can you please exclude python2-sanlock in your repo file?
>>
>> exclude=python2-sanlock
>>
>>
>>
>>
>>>Requires: sanlock-lib = 3.7.1-1.el7
>>>Removing: sanlock-lib-3.6.0-1.el7.x86_64 (@base)
>>>sanlock-lib = 3.6.0-1.el7
>>>Updated By: sanlock-lib-3.7.3-1.el7.x86_64 (ovirt-4.3-fix)
>>>sanlock-lib = 3.7.3-1.el7
>>>Available: sanlock-lib-3.7.1-1.el7.x86_64 (ovirt-4.3-fix)
>>>sanlock-lib = 3.7.1-1.el7
>>>Available: sanlock-lib-3.7.1-2.el7.x86_64 

[ovirt-users] Re: [ANN] oVirt 4.3.6 Second Release Candidate is now available for testing

2019-08-08 Thread Maton, Brett
Sure, it seems to be running now.

For anyone else with this issue, I ended up with this additional repo file:

cat /etc/yum.repos.d/ov4.3-fix.repo
[ovirt-4.3-fix]
name=oVirt 4.3 Pre-Release Fix CentOS 7.7
baseurl=https://buildlogs.centos.org/centos/7/virt/x86_64/ovirt-4.3/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-ovirt-4.3

exclude=python2-sanlock


On Thu, 8 Aug 2019 at 09:55, Sandro Bonazzola  wrote:

>
>
> Il giorno gio 8 ago 2019 alle ore 10:37 Maton, Brett <
> mat...@ltresources.co.uk> ha scritto:
>
>> Thanks Sandro,
>>
>>Ran into another dependency issue though:
>>
>> Resolving Dependencies
>> --> Running transaction check
>> ---> Package ovirt-release43-pre.noarch 0:4.3.6-0.1.rc1.el7 will be
>> updated
>> ---> Package ovirt-release43-pre.noarch 0:4.3.6-0.2.rc2.el7 will be an
>> update
>> ---> Package python2-sanlock.x86_64 0:3.7.1-1.el7 will be obsoleting
>> --> Processing Dependency: sanlock-lib = 3.7.1-1.el7 for package:
>> python2-sanlock-3.7.1-1.el7.x86_64
>> ---> Package sanlock.x86_64 0:3.6.0-1.el7 will be updated
>> ---> Package sanlock.x86_64 0:3.7.3-1.el7 will be an update
>> ---> Package sanlock-lib.x86_64 0:3.6.0-1.el7 will be updated
>> ---> Package sanlock-lib.x86_64 0:3.7.3-1.el7 will be an update
>> ---> Package sanlock-python.x86_64 0:3.6.0-1.el7 will be obsoleted
>> ---> Package vdsm.x86_64 0:4.30.25-1.el7 will be updated
>> ---> Package vdsm.x86_64 0:4.30.26-1.el7 will be an update
>> --> Processing Dependency: sanlock-python >= 3.7.3 for package:
>> vdsm-4.30.26-1.el7.x86_64
>> ---> Package vdsm-api.noarch 0:4.30.25-1.el7 will be updated
>> ---> Package vdsm-api.noarch 0:4.30.26-1.el7 will be an update
>> ---> Package vdsm-client.noarch 0:4.30.25-1.el7 will be updated
>> ---> Package vdsm-client.noarch 0:4.30.26-1.el7 will be an update
>> ---> Package vdsm-common.noarch 0:4.30.25-1.el7 will be updated
>> ---> Package vdsm-common.noarch 0:4.30.26-1.el7 will be an update
>> ---> Package vdsm-gluster.x86_64 0:4.30.25-1.el7 will be updated
>> ---> Package vdsm-gluster.x86_64 0:4.30.26-1.el7 will be an update
>> ---> Package vdsm-hook-ethtool-options.noarch 0:4.30.25-1.el7 will be
>> updated
>> ---> Package vdsm-hook-ethtool-options.noarch 0:4.30.26-1.el7 will be an
>> update
>> ---> Package vdsm-hook-fcoe.noarch 0:4.30.25-1.el7 will be updated
>> ---> Package vdsm-hook-fcoe.noarch 0:4.30.26-1.el7 will be an update
>> ---> Package vdsm-hook-openstacknet.noarch 0:4.30.25-1.el7 will be updated
>> ---> Package vdsm-hook-openstacknet.noarch 0:4.30.26-1.el7 will be an
>> update
>> ---> Package vdsm-hook-vhostmd.noarch 0:4.30.25-1.el7 will be updated
>> ---> Package vdsm-hook-vhostmd.noarch 0:4.30.26-1.el7 will be an update
>> ---> Package vdsm-hook-vmfex-dev.noarch 0:4.30.25-1.el7 will be updated
>> ---> Package vdsm-hook-vmfex-dev.noarch 0:4.30.26-1.el7 will be an update
>> ---> Package vdsm-http.noarch 0:4.30.25-1.el7 will be updated
>> ---> Package vdsm-http.noarch 0:4.30.26-1.el7 will be an update
>> ---> Package vdsm-jsonrpc.noarch 0:4.30.25-1.el7 will be updated
>> ---> Package vdsm-jsonrpc.noarch 0:4.30.26-1.el7 will be an update
>> ---> Package vdsm-network.x86_64 0:4.30.25-1.el7 will be updated
>> ---> Package vdsm-network.x86_64 0:4.30.26-1.el7 will be an update
>> ---> Package vdsm-python.noarch 0:4.30.25-1.el7 will be updated
>> ---> Package vdsm-python.noarch 0:4.30.26-1.el7 will be an update
>> ---> Package vdsm-yajsonrpc.noarch 0:4.30.25-1.el7 will be updated
>> ---> Package vdsm-yajsonrpc.noarch 0:4.30.26-1.el7 will be an update
>> --> Running transaction check
>> ---> Package python2-sanlock.x86_64 0:3.7.1-1.el7 will be obsoleting
>> --> Processing Dependency: sanlock-lib = 3.7.1-1.el7 for package:
>> python2-sanlock-3.7.1-1.el7.x86_64
>> ---> Package sanlock-python.x86_64 0:3.6.0-1.el7 will be updated
>> ---> Package sanlock-python.x86_64 0:3.7.3-1.el7 will be an update
>> --> Finished Dependency Resolution
>> Error: Package: python2-sanlock-3.7.1-1.el7.x86_64 (ovirt-4.3-fix)
>>
>
> this is weird, https://cbs.centos.org/koji/buildinfo?buildID=25776
> 3.7.1-1 shouldn't be there, it's not tagged for testing.
> can you please exclude python2-sanlock in your repo file?
>
> exclude=python2-sanlock
>
>
>
>
>>Requires: sanlock-lib = 3.7.1-1.el7
>>Removing: sanlock-lib-3.6.0-1.el7.x86_64 (@base)
>>sanlock-lib = 3.6.0-1.el7
>>Updated By: sanlock-lib-3.7.3-1.el7.x86_64 (ovirt-4.3-fix)
>>sanlock-lib = 3.7.3-1.el7
>>Available: sanlock-lib-3.7.1-1.el7.x86_64 (ovirt-4.3-fix)
>>sanlock-lib = 3.7.1-1.el7
>>Available: sanlock-lib-3.7.1-2.el7.x86_64 (ovirt-4.3-fix)
>>sanlock-lib = 3.7.1-2.el7
>>Available: sanlock-lib-3.7.1-2.1.el7.x86_64 (ovirt-4.3-fix)
>>sanlock-lib = 3.7.1-2.1.el7
>>  You could try using --skip-broken to work around the problem
>>  You could try running: rpm -Va --nofiles --nodigest
>>
>>
>> On Thu, 

[ovirt-users] Re: oVirt 4.3.5 potential issue with NFS storage

2019-08-08 Thread Vrgotic, Marko
Another one that seem to be related:

2019-08-07 14:43:59,069-0700 ERROR (check/loop) [storage.Monitor] Error 
checking path 
/rhev/data-center/mnt/10.210.13.64:_ovirt__production/6effda5e-1a0d-4312-bf93-d97fa9eb5aee/dom_md/metadata
 (monitor:499)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line 497, in 
_pathChecked
delay = result.delay()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/check.py", line 391, in 
delay
raise exception.MiscFileReadException(self.path, self.rc, self.err)
MiscFileReadException: Internal file read failure: 
(u'/rhev/data-center/mnt/10.210.13.64:_ovirt__production/6effda5e-1a0d-4312-bf93-d97fa9eb5aee/dom_md/metadata',
 1, 'Read timeout')
2019-08-07 14:43:59,116-0700 WARN  (monitor/6effda5) [storage.Monitor] Host id 
for domain 6effda5e-1a0d-4312-bf93-d97fa9eb5aee was released (id: 1) 
(monitor:445)

From: "Vrgotic, Marko" 
Date: Wednesday, 7 August 2019 at 09:50
To: "users@ovirt.org" 
Subject: Re: oVirt 4.3.5 potential issue with NFS storage

Log line form VDSM:

“[root@ovirt-sj-05 ~]# tail -f /var/log/vdsm/vdsm.log | grep WARN
2019-08-07 09:40:03,556-0700 WARN  (check/loop) [storage.check] Checker 
u'/rhev/data-center/mnt/10.210.13.64:_ovirt__production/bda97276-a399-448f-9113-017972f6b55a/dom_md/metadata'
 is blocked for 20.00 seconds (check:282)
2019-08-07 09:40:47,132-0700 WARN  (monitor/bda9727) [storage.Monitor] Host id 
for domain bda97276-a399-448f-9113-017972f6b55a was released (id: 5) 
(monitor:445)
2019-08-07 09:44:53,564-0700 WARN  (check/loop) [storage.check] Checker 
u'/rhev/data-center/mnt/10.210.13.64:_ovirt__production/bda97276-a399-448f-9113-017972f6b55a/dom_md/metadata'
 is blocked for 20.00 seconds (check:282)
2019-08-07 09:46:38,604-0700 WARN  (monitor/bda9727) [storage.Monitor] Host id 
for domain bda97276-a399-448f-9113-017972f6b55a was released (id: 5) 
(monitor:445)”
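
For reference, the read that this checker performs can be reproduced by hand
to time it (same path as in the log; vdsm's checker does essentially a single
4 KiB direct read):

time dd if=/rhev/data-center/mnt/10.210.13.64:_ovirt__production/bda97276-a399-448f-9113-017972f6b55a/dom_md/metadata of=/dev/null bs=4096 count=1 iflag=direct

If that intermittently takes tens of seconds, the delay is on the NFS server
or network side.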



From: "Vrgotic, Marko" 
Date: Wednesday, 7 August 2019 at 09:09
To: "users@ovirt.org" 
Subject: oVirt 4.3.5 potential issue with NFS storage

Dear oVirt,

This is my third oVirt platform in the company, but it is the first time I am 
seeing the following logs:

“2019-08-07 16:00:16,099Z INFO  
[org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-51) [1b85e637] Lock freed to 
object 
'EngineLock:{exclusiveLocks='[2350ee82-94ed-4f90-9366-451e0104d1d6=PROVIDER]', 
sharedLocks=''}'
2019-08-07 16:00:25,618Z WARN  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] 
(EE-ManagedThreadFactory-engine-Thread-37723) [] domain 
'bda97276-a399-448f-9113-017972f6b55a:ovirt_production' in problem 
'PROBLEMATIC'. vds: 'ovirt-sj-05.ictv.com'
2019-08-07 16:00:40,630Z INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] 
(EE-ManagedThreadFactory-engine-Thread-37735) [] Domain 
'bda97276-a399-448f-9113-017972f6b55a:ovirt_production' recovered from problem. 
vds: 'ovirt-sj-05.ictv.com'
2019-08-07 16:00:40,652Z INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] 
(EE-ManagedThreadFactory-engine-Thread-37737) [] Domain 
'bda97276-a399-448f-9113-017972f6b55a:ovirt_production' recovered from problem. 
vds: 'ovirt-sj-01.ictv.com'
2019-08-07 16:00:40,652Z INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] 
(EE-ManagedThreadFactory-engine-Thread-37737) [] Domain 
'bda97276-a399-448f-9113-017972f6b55a:ovirt_production' has recovered from 
problem. No active host in the DC is reporting it as problematic, so clearing 
the domain recovery timer.”

Can you help me understand why this is being reported?

This setup is:

5 hosts, 3 in HA
SelfHostedEngine
Version 4.3.5
NFS-based NetApp storage, NFS version 4.1
“10.210.13.64:/ovirt_hosted_engine on 
/rhev/data-center/mnt/10.210.13.64:_ovirt__hosted__engine type nfs4 
(rw,relatime,vers=4.1,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=10.210.11.14,local_lock=none,addr=10.210.13.64)

10.210.13.64:/ovirt_production on 
/rhev/data-center/mnt/10.210.13.64:_ovirt__production type nfs4 
(rw,relatime,vers=4.1,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=10.210.11.14,local_lock=none,addr=10.210.13.64)
tmpfs on /run/user/0 type tmpfs 
(rw,nosuid,nodev,relatime,seclabel,size=9878396k,mode=700)”

First mount is the SHE (self-hosted engine) dedicated storage.
Second mount “ovirt_production” is for other VM guests.

Kindly awaiting your reply.

Marko Vrgotic
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/H4FH6GYAYLUP5OIVHUTG7JAUTOZNP7Y3/


[ovirt-users] VM --- is not responding.

2019-08-08 Thread Edoardo Mazza
Hi all,
For several days now I have been getting this error for the same VM, but I
don't understand why. The VM's traffic is not excessive, and neither are its
CPU and RAM usage, but for a few minutes the VM stops responding, and in the
VM's messages log file I see the error below. Can you help me?
thanks
Edoardo
kernel: NMI watchdog: BUG: soft lockup - CPU#2 stuck for 25s! [kworker/2:0:26227]
Aug  8 02:51:11 vmmysql kernel: Modules linked in: binfmt_misc ip6t_rpfilter ipt_REJECT nf_reject_ipv4 ip6t_REJECT nf_reject_ipv6 xt_conntrack ip_set nfnetlink ebtable_nat ebtable_broute bridge stp llc ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_mangle ip6table_security ip6table_raw iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack iptable_mangle iptable_security iptable_raw ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter snd_hda_codec_generic iosf_mbi crc32_pclmul ppdev ghash_clmulni_intel snd_hda_intel snd_hda_codec aesni_intel snd_hda_core lrw gf128mul glue_helper ablk_helper snd_hwdep cryptd snd_seq snd_seq_device snd_pcm snd_timer snd soundcore virtio_rng sg virtio_balloon i2c_piix4 parport_pc parport joydev pcspkr ip_tables xfs libcrc32c sd_mod
Aug  8 02:51:14 vmmysql kernel: crc_t10dif crct10dif_generic sr_mod cdrom virtio_net virtio_console virtio_scsi ata_generic pata_acpi crct10dif_pclmul crct10dif_common crc32c_intel serio_raw qxl floppy drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm ata_piix libata virtio_pci drm_panel_orientation_quirks virtio_ring virtio dm_mirror dm_region_hash dm_log dm_mod
Aug  8 02:51:14 vmmysql kernel: CPU: 2 PID: 26227 Comm: kworker/2:0 Kdump: loaded Tainted: G L    3.10.0-957.12.1.el7.x86_64 #1
Aug  8 02:51:14 vmmysql kernel: Hardware name: oVirt oVirt Node, BIOS 1.11.0-2.el7 04/01/2014
Aug  8 02:51:14 vmmysql kernel: Workqueue: events_freezable disk_events_workfn
Aug  8 02:51:14 vmmysql kernel: task: 9e25b6609040 ti: 9e27b161 task.ti: 9e27b161
Aug  8 02:51:14 vmmysql kernel: RIP: 0010:[] [] _raw_spin_unlock_irqrestore+0x15/0x20
Aug  8 02:51:14 vmmysql kernel: RSP: :9e27b1613a68  EFLAGS: 0286
Aug  8 02:51:14 vmmysql kernel: RAX: 0001 RBX: 9e27b1613a10 RCX: 9e27b72a3d05
Aug  8 02:51:14 vmmysql kernel: RDX: 9e27b729a420 RSI: 0286 RDI: 0286
Aug  8 02:51:14 vmmysql kernel: RBP: 9e27b1613a68 R08: 0001 R09: 9e25b67fc198
Aug  8 02:51:14 vmmysql kernel: R10: 9e27b45bd8d8 R11:  R12: 9e25b67fde80
Aug  8 02:51:14 vmmysql kernel: R13: 9e25b67fc000 R14: 9e25b67fc158 R15: c032f8e0
Aug  8 02:51:14 vmmysql kernel: FS:  () GS:9e27b728() knlGS:
Aug  8 02:51:14 vmmysql kernel: CS:  0010 DS:  ES:  CR0: 80050033
Aug  8 02:51:14 vmmysql kernel: CR2: 7f0c9e9b6008 CR3: 00023248 CR4: 003606e0
Aug  8 02:51:14 vmmysql kernel: DR0:  DR1:  DR2: 
Aug  8 02:51:14 vmmysql kernel: DR3:  DR6: fffe0ff0 DR7: 0400
Aug  8 02:51:14 vmmysql kernel: Call Trace:
Aug  8 02:51:14 vmmysql kernel: [] ata_scsi_queuecmd+0x155/0x450 [libata]
Aug  8 02:51:14 vmmysql kernel: [] ? ata_scsiop_inq_std+0xf0/0xf0 [libata]
Aug  8 02:51:14 vmmysql kernel: [] scsi_dispatch_cmd+0xb0/0x240
Aug  8 02:51:14 vmmysql kernel: [] scsi_request_fn+0x4cc/0x680
Aug  8 02:51:14 vmmysql kernel: [] __blk_run_queue+0x39/0x50
Aug  8 02:51:14 vmmysql kernel: [] blk_execute_rq_nowait+0xb5/0x170
Aug  8 02:51:14 vmmysql kernel: [] blk_execute_rq+0x8b/0x150
Aug  8 02:51:14 vmmysql kernel: [] ? bio_phys_segments+0x19/0x20
Aug  8 02:51:14 vmmysql kernel: [] ? blk_rq_bio_prep+0x31/0xb0
Aug  8 02:51:14 vmmysql kernel: [] ? blk_rq_map_kern+0xc7/0x180
Aug  8 02:51:14 vmmysql kernel: [] scsi_execute+0xd3/0x170
Aug  8 02:51:14 vmmysql kernel: [] scsi_execute_req_flags+0x8e/0x100
Aug  8 02:51:14 vmmysql kernel: [] sr_check_events+0xbc/0x2d0 [sr_mod]
Aug  8 02:51:14 vmmysql kernel: [] cdrom_check_events+0x1e/0x40 [cdrom]
Aug  8 02:51:14 vmmysql kernel: [] sr_block_check_events+0xb1/0x120 [sr_mod]
Aug  8 02:51:14 vmmysql kernel: [] disk_check_events+0x66/0x190
Aug  8 02:51:14 vmmysql kernel: [] disk_events_workfn+0x16/0x20
Aug  8 02:51:14 vmmysql kernel: [] process_one_work+0x17f/0x440
Aug  8 02:51:14 vmmysql kernel: [] worker_thread+0x126/0x3c0
Aug  8 02:51:14 vmmysql kernel: [] ? manage_workers.isra.25+0x2a0/0x2a0
Aug  8 02:51:14 vmmysql kernel: [] kthread+0xd1/0xe0
Aug  8 02:51:14 vmmysql kernel: [] ? insert_kthread_work+0x40/0x40
Aug  8 02:51:14 vmmysql kernel: [] ret_from_fork_nospec_begin+0x21/0x21
Aug  8 02:51:14 vmmysql kernel: [] ? insert_kthread_work+0x40/0x40
Aug  8 02:51:14 vmmysql kernel: Code: 14 25 10 43 03 b9 5d c3 0f 1f 40 00 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 55 48 89 e5 ff 14 25 10 43 03 b9 48 89 f7 57 9d <0f> 1f 44 00 00 5d c3 0f 
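
For reference, one quick check is whether the guest is being starved by the
host rather than wedged on its own: watch steal time inside the VM while the
hang happens (interval and count here are arbitrary):

vmstat 1 10              # watch the 'st' (steal) column
top -b -n 1 | head -5    # '%st' in the Cpu(s) line

Consistently high steal points at host-side CPU contention; the trace above
is in the CD-ROM event-polling path (sr_mod/cdrom via libata), so checking
whether the VM really needs its emulated CD-ROM attached may also help.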

[ovirt-users] Re: [ANN] oVirt 4.3.6 Second Release Candidate is now available for testing

2019-08-08 Thread Sandro Bonazzola
Il giorno gio 8 ago 2019 alle ore 10:37 Maton, Brett <
mat...@ltresources.co.uk> ha scritto:

> Thanks Sandro,
>
>Ran into another dependency issue though:
>
> Resolving Dependencies
> --> Running transaction check
> ---> Package ovirt-release43-pre.noarch 0:4.3.6-0.1.rc1.el7 will be updated
> ---> Package ovirt-release43-pre.noarch 0:4.3.6-0.2.rc2.el7 will be an
> update
> ---> Package python2-sanlock.x86_64 0:3.7.1-1.el7 will be obsoleting
> --> Processing Dependency: sanlock-lib = 3.7.1-1.el7 for package:
> python2-sanlock-3.7.1-1.el7.x86_64
> ---> Package sanlock.x86_64 0:3.6.0-1.el7 will be updated
> ---> Package sanlock.x86_64 0:3.7.3-1.el7 will be an update
> ---> Package sanlock-lib.x86_64 0:3.6.0-1.el7 will be updated
> ---> Package sanlock-lib.x86_64 0:3.7.3-1.el7 will be an update
> ---> Package sanlock-python.x86_64 0:3.6.0-1.el7 will be obsoleted
> ---> Package vdsm.x86_64 0:4.30.25-1.el7 will be updated
> ---> Package vdsm.x86_64 0:4.30.26-1.el7 will be an update
> --> Processing Dependency: sanlock-python >= 3.7.3 for package:
> vdsm-4.30.26-1.el7.x86_64
> ---> Package vdsm-api.noarch 0:4.30.25-1.el7 will be updated
> ---> Package vdsm-api.noarch 0:4.30.26-1.el7 will be an update
> ---> Package vdsm-client.noarch 0:4.30.25-1.el7 will be updated
> ---> Package vdsm-client.noarch 0:4.30.26-1.el7 will be an update
> ---> Package vdsm-common.noarch 0:4.30.25-1.el7 will be updated
> ---> Package vdsm-common.noarch 0:4.30.26-1.el7 will be an update
> ---> Package vdsm-gluster.x86_64 0:4.30.25-1.el7 will be updated
> ---> Package vdsm-gluster.x86_64 0:4.30.26-1.el7 will be an update
> ---> Package vdsm-hook-ethtool-options.noarch 0:4.30.25-1.el7 will be
> updated
> ---> Package vdsm-hook-ethtool-options.noarch 0:4.30.26-1.el7 will be an
> update
> ---> Package vdsm-hook-fcoe.noarch 0:4.30.25-1.el7 will be updated
> ---> Package vdsm-hook-fcoe.noarch 0:4.30.26-1.el7 will be an update
> ---> Package vdsm-hook-openstacknet.noarch 0:4.30.25-1.el7 will be updated
> ---> Package vdsm-hook-openstacknet.noarch 0:4.30.26-1.el7 will be an
> update
> ---> Package vdsm-hook-vhostmd.noarch 0:4.30.25-1.el7 will be updated
> ---> Package vdsm-hook-vhostmd.noarch 0:4.30.26-1.el7 will be an update
> ---> Package vdsm-hook-vmfex-dev.noarch 0:4.30.25-1.el7 will be updated
> ---> Package vdsm-hook-vmfex-dev.noarch 0:4.30.26-1.el7 will be an update
> ---> Package vdsm-http.noarch 0:4.30.25-1.el7 will be updated
> ---> Package vdsm-http.noarch 0:4.30.26-1.el7 will be an update
> ---> Package vdsm-jsonrpc.noarch 0:4.30.25-1.el7 will be updated
> ---> Package vdsm-jsonrpc.noarch 0:4.30.26-1.el7 will be an update
> ---> Package vdsm-network.x86_64 0:4.30.25-1.el7 will be updated
> ---> Package vdsm-network.x86_64 0:4.30.26-1.el7 will be an update
> ---> Package vdsm-python.noarch 0:4.30.25-1.el7 will be updated
> ---> Package vdsm-python.noarch 0:4.30.26-1.el7 will be an update
> ---> Package vdsm-yajsonrpc.noarch 0:4.30.25-1.el7 will be updated
> ---> Package vdsm-yajsonrpc.noarch 0:4.30.26-1.el7 will be an update
> --> Running transaction check
> ---> Package python2-sanlock.x86_64 0:3.7.1-1.el7 will be obsoleting
> --> Processing Dependency: sanlock-lib = 3.7.1-1.el7 for package:
> python2-sanlock-3.7.1-1.el7.x86_64
> ---> Package sanlock-python.x86_64 0:3.6.0-1.el7 will be updated
> ---> Package sanlock-python.x86_64 0:3.7.3-1.el7 will be an update
> --> Finished Dependency Resolution
> Error: Package: python2-sanlock-3.7.1-1.el7.x86_64 (ovirt-4.3-fix)
>

this is weird, https://cbs.centos.org/koji/buildinfo?buildID=25776
3.7.1-1 shouldn't be there, it's not tagged for testing.
can you please exclude python2-sanlock in your repo file?

exclude=python2-sanlock




>Requires: sanlock-lib = 3.7.1-1.el7
>Removing: sanlock-lib-3.6.0-1.el7.x86_64 (@base)
>sanlock-lib = 3.6.0-1.el7
>Updated By: sanlock-lib-3.7.3-1.el7.x86_64 (ovirt-4.3-fix)
>sanlock-lib = 3.7.3-1.el7
>Available: sanlock-lib-3.7.1-1.el7.x86_64 (ovirt-4.3-fix)
>sanlock-lib = 3.7.1-1.el7
>Available: sanlock-lib-3.7.1-2.el7.x86_64 (ovirt-4.3-fix)
>sanlock-lib = 3.7.1-2.el7
>Available: sanlock-lib-3.7.1-2.1.el7.x86_64 (ovirt-4.3-fix)
>sanlock-lib = 3.7.1-2.1.el7
>  You could try using --skip-broken to work around the problem
>  You could try running: rpm -Va --nofiles --nodigest
>
>
> On Thu, 8 Aug 2019 at 08:59, Sandro Bonazzola  wrote:
>
>>
>>
>> Il giorno gio 8 ago 2019 alle ore 09:56 Maton, Brett <
>> mat...@ltresources.co.uk> ha scritto:
>>
>>> I just tried to update my 4.3.6 testlab and got the following RPM
>>> dependency issue:
>>>
>>> rpm -qa ovirt-release*
>>> ovirt-release43-pre-4.3.6-0.1.rc1.el7.noarch
>>>
>>> Error encountered:
>>>
>>> yum upgrade
>>> ...
>>> Error: Package: vdsm-4.30.26-1.el7.x86_64 (ovirt-4.3-pre)
>>>Requires: sanlock-python >= 3.7.3
>>>Installed: 

[ovirt-users] Re: [ANN] oVirt 4.3.6 Second Release Candidate is now available for testing

2019-08-08 Thread Maton, Brett
Thanks Sandro,

   Ran into another dependency issue though:

Resolving Dependencies
--> Running transaction check
---> Package ovirt-release43-pre.noarch 0:4.3.6-0.1.rc1.el7 will be updated
---> Package ovirt-release43-pre.noarch 0:4.3.6-0.2.rc2.el7 will be an
update
---> Package python2-sanlock.x86_64 0:3.7.1-1.el7 will be obsoleting
--> Processing Dependency: sanlock-lib = 3.7.1-1.el7 for package:
python2-sanlock-3.7.1-1.el7.x86_64
---> Package sanlock.x86_64 0:3.6.0-1.el7 will be updated
---> Package sanlock.x86_64 0:3.7.3-1.el7 will be an update
---> Package sanlock-lib.x86_64 0:3.6.0-1.el7 will be updated
---> Package sanlock-lib.x86_64 0:3.7.3-1.el7 will be an update
---> Package sanlock-python.x86_64 0:3.6.0-1.el7 will be obsoleted
---> Package vdsm.x86_64 0:4.30.25-1.el7 will be updated
---> Package vdsm.x86_64 0:4.30.26-1.el7 will be an update
--> Processing Dependency: sanlock-python >= 3.7.3 for package:
vdsm-4.30.26-1.el7.x86_64
---> Package vdsm-api.noarch 0:4.30.25-1.el7 will be updated
---> Package vdsm-api.noarch 0:4.30.26-1.el7 will be an update
---> Package vdsm-client.noarch 0:4.30.25-1.el7 will be updated
---> Package vdsm-client.noarch 0:4.30.26-1.el7 will be an update
---> Package vdsm-common.noarch 0:4.30.25-1.el7 will be updated
---> Package vdsm-common.noarch 0:4.30.26-1.el7 will be an update
---> Package vdsm-gluster.x86_64 0:4.30.25-1.el7 will be updated
---> Package vdsm-gluster.x86_64 0:4.30.26-1.el7 will be an update
---> Package vdsm-hook-ethtool-options.noarch 0:4.30.25-1.el7 will be
updated
---> Package vdsm-hook-ethtool-options.noarch 0:4.30.26-1.el7 will be an
update
---> Package vdsm-hook-fcoe.noarch 0:4.30.25-1.el7 will be updated
---> Package vdsm-hook-fcoe.noarch 0:4.30.26-1.el7 will be an update
---> Package vdsm-hook-openstacknet.noarch 0:4.30.25-1.el7 will be updated
---> Package vdsm-hook-openstacknet.noarch 0:4.30.26-1.el7 will be an update
---> Package vdsm-hook-vhostmd.noarch 0:4.30.25-1.el7 will be updated
---> Package vdsm-hook-vhostmd.noarch 0:4.30.26-1.el7 will be an update
---> Package vdsm-hook-vmfex-dev.noarch 0:4.30.25-1.el7 will be updated
---> Package vdsm-hook-vmfex-dev.noarch 0:4.30.26-1.el7 will be an update
---> Package vdsm-http.noarch 0:4.30.25-1.el7 will be updated
---> Package vdsm-http.noarch 0:4.30.26-1.el7 will be an update
---> Package vdsm-jsonrpc.noarch 0:4.30.25-1.el7 will be updated
---> Package vdsm-jsonrpc.noarch 0:4.30.26-1.el7 will be an update
---> Package vdsm-network.x86_64 0:4.30.25-1.el7 will be updated
---> Package vdsm-network.x86_64 0:4.30.26-1.el7 will be an update
---> Package vdsm-python.noarch 0:4.30.25-1.el7 will be updated
---> Package vdsm-python.noarch 0:4.30.26-1.el7 will be an update
---> Package vdsm-yajsonrpc.noarch 0:4.30.25-1.el7 will be updated
---> Package vdsm-yajsonrpc.noarch 0:4.30.26-1.el7 will be an update
--> Running transaction check
---> Package python2-sanlock.x86_64 0:3.7.1-1.el7 will be obsoleting
--> Processing Dependency: sanlock-lib = 3.7.1-1.el7 for package:
python2-sanlock-3.7.1-1.el7.x86_64
---> Package sanlock-python.x86_64 0:3.6.0-1.el7 will be updated
---> Package sanlock-python.x86_64 0:3.7.3-1.el7 will be an update
--> Finished Dependency Resolution
Error: Package: python2-sanlock-3.7.1-1.el7.x86_64 (ovirt-4.3-fix)
   Requires: sanlock-lib = 3.7.1-1.el7
   Removing: sanlock-lib-3.6.0-1.el7.x86_64 (@base)
   sanlock-lib = 3.6.0-1.el7
   Updated By: sanlock-lib-3.7.3-1.el7.x86_64 (ovirt-4.3-fix)
   sanlock-lib = 3.7.3-1.el7
   Available: sanlock-lib-3.7.1-1.el7.x86_64 (ovirt-4.3-fix)
   sanlock-lib = 3.7.1-1.el7
   Available: sanlock-lib-3.7.1-2.el7.x86_64 (ovirt-4.3-fix)
   sanlock-lib = 3.7.1-2.el7
   Available: sanlock-lib-3.7.1-2.1.el7.x86_64 (ovirt-4.3-fix)
   sanlock-lib = 3.7.1-2.1.el7
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest


On Thu, 8 Aug 2019 at 08:59, Sandro Bonazzola  wrote:

>
>
> Il giorno gio 8 ago 2019 alle ore 09:56 Maton, Brett <
> mat...@ltresources.co.uk> ha scritto:
>
>> I just tried to update my 4.3.6 testlab and got the following RPM
>> dependency issue:
>>
>> rpm -qa ovirt-release*
>> ovirt-release43-pre-4.3.6-0.1.rc1.el7.noarch
>>
>> Error encountered:
>>
>> yum upgrade
>> ...
>> Error: Package: vdsm-4.30.26-1.el7.x86_64 (ovirt-4.3-pre)
>>Requires: sanlock-python >= 3.7.3
>>Installed: sanlock-python-3.6.0-1.el7.x86_64 (@base)
>>sanlock-python = 3.6.0-1.el7
>> ...
>>
>
>
> yes, as mentioned in the release announcement, this requires RHEL / CentOS 7.7.
> You can work around this by adding the
> https://buildlogs.centos.org/centos/7/virt/x86_64/ovirt-4.3/ repo until
> CentOS 7.7 is released
>
>
>
>> Regards,
>> Brett
>>
>> On Thu, 8 Aug 2019 at 07:53, Sandro Bonazzola 
>> wrote:
>>
>>> The oVirt Project is pleased to announce the availability of the oVirt

[ovirt-users] Re: RFE: Add the ability to the engine to serve as a fencing proxy

2019-08-08 Thread Sandro Bonazzola
Il giorno ven 2 ago 2019 alle ore 10:50 Sandro E 
ha scritto:

> Hi,
>
> I hope that this hits the right people. I found an RFE (Bug 1373957) which
> would be a really nice feature for my company, as we have to request firewall
> rules for every new host and this ends up in a lot of mess and work. Is
> there any chance that this RFE gets implemented?
>
> Thanks for any help or tips
>

This RFE was filed in 2016 and didn't get much interest so far. Can
you elaborate a bit on the user story for this?




>
> BR,
> Sandro
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UP7NZWXZBNHM7B7MNY5NMCAUK6UBPXXD/
>


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com
*Red Hat respects your work life balance.
Therefore there is no need to answer this email out of your office hours.
*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TXSDDZVDBQ3E6FODOKOS4COEAT5GMJN3/


[ovirt-users] Re: Removing VDSM dependency on multipath driver.

2019-08-08 Thread Sandro Bonazzola
Il giorno dom 4 ago 2019 alle ore 18:28  ha scritto:

> Hi All,
>
> I observed that for FCP storage domain only mpath enabled drives are
> getting listed, for e.g. if we have local nvme drives attached to the VDSM
> host, only mpath enabled nvme drives are getting listed in the ovirt engine
> for FCP storage domain.
> Actually I am creating a RAID (using mdadm tool, software raid)  on two
> local nvme drives and wanted to expose it to the Ovirt engine through FCP
> storage domain, is it possible? or can we bypass multipath in FCP storage
> domain so that all the block device can be listed in the ovirt engine for
> FCP storage domain?
>

@Nir Soffer  , @Tal Nisan  can you
please follow up?
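
In the meantime, a quick way to see exactly which block devices VDSM would
offer to the engine is to query it on the host directly (assuming vdsm-client
is installed):

vdsm-client Host getDeviceList
multipath -ll

Only devices that have a multipath map are returned, which matches what was
observed: a plain md RAID device is not listed for an FC domain.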


>
> Thanks,
> Amit
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/T4Z3U5GDDJ4Z6TQWJPYNRGDESBA6QGEA/
>


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com
*Red Hat respects your work life balance.
Therefore there is no need to answer this email out of your office hours.
*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/F2LIILM6LHG6ORAX4N64SKLYOYZRUE7S/


[ovirt-users] Re: Ovirt 4.3.5.4-1.el7 noVNC keeps disconnecting with 1006

2019-08-08 Thread Sandro Bonazzola
Il giorno dom 4 ago 2019 alle ore 16:11 Strahil Nikolov <
hunter86...@yahoo.com> ha scritto:

> Hello Community,
>
> did anyone experience disconnects after a minute or 2 (seems random, but I
> will check it out) with error code 1006?
> Can someone with noVNC reproduce that behaviour?
>
> As I manage to connect, it seems strange to me to lose the connection like
> that. The VM was not migrated - so it should be something else.
>

@Ryan Barry  , @Michal Skrivanek  any
clue?
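
In the meantime it may be worth checking that the websocket proxy on the
engine stays healthy across the disconnect; a quick sketch, run on the engine
machine:

systemctl status ovirt-websocket-proxy
journalctl -u ovirt-websocket-proxy --since "10 minutes ago"
engine-config -g WebSocketProxy

If the proxy restarts or logs errors at the moment of the 1006, that narrows
it down.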


>
> Best Regards,
> Strahil Nikolov
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UVBK5NWKAHXH2KREVRSVES3U75ZDQ34L/
>


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com
*Red Hat respects your work life balance.
Therefore there is no need to answer this email out of your office hours.
*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/S4CBQTAS2XN32FJNZERIWFOS25YZZDYE/


[ovirt-users] Re: Does cluster upgrade wait for heal before proceeding to next host?

2019-08-08 Thread Sandro Bonazzola
Il giorno mar 6 ago 2019 alle ore 23:17 Jayme  ha scritto:

> I’m aware of the heal process but it’s unclear to me if the update
> continues to run while the volumes are healing and resumes when they are
> done. There doesn't seem to be any indication in the UI (unless I'm
> mistaken).
>

Adding @Martin Perina  , @Sahina Bose
   and @Laura Wright   on this,
hyperconverged deployments using cluster upgrade command would probably
need some improvement.
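
In the meantime, checking heal status by hand before letting the next host go
to maintenance is straightforward; a minimal sketch, run on one of the gluster
nodes:

for vol in $(gluster volume list); do
    gluster volume heal "$vol" info
done

Only when every volume reports zero entries to be healed is it safe to move
on to the next host.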



>
> On Tue, Aug 6, 2019 at 6:06 PM Robert O'Kane  wrote:
>
>> Hello,
>>
>> Often(?), updates to a hypervisor that also has (provides) a Gluster
>> brick take the hypervisor offline (updates often require a reboot).
>>
>> This reboot then makes the brick "out of sync" and it has to be resync'd.
>>
>> I find it a "feature" that another host that is also part of a gluster
>> domain cannot be updated (rebooted) before all the bricks are synced,
>> in order to guarantee there is no data loss. It is called Quorum, right?
>>
>> Always let the heal process end. Then the next update can start.
>> For me there is ALWAYS a healing time before Gluster is happy again.
>>
>> Cheers,
>>
>> Robert O'Kane
>>
>>
>> Am 06.08.2019 um 16:38 schrieb Shani Leviim:
>> > Hi Jayme,
>> > I can't recall such a healing time.
>> > Can you please retry and attach the engine & vdsm logs so we'll be
>> smarter?
>> >
>> > *Regards,
>> > *
>> > *Shani Leviim
>> > *
>> >
>> >
>> > On Tue, Aug 6, 2019 at 5:24 PM Jayme > > > wrote:
>> >
>> > I've yet to have cluster upgrade finish updating my three host HCI
>> > cluster.  The most recent try was today moving from oVirt 4.3.3 to
>> > 4.3.5.5.  The first host updates normally, but when it moves on to
>> > the second host it fails to put it in maintenance and the cluster
>> > upgrade stops.
>> >
>> > I suspect this is due to that fact that after my hosts are updated
>> > it takes 10 minutes or more for all volumes to sync/heal.  I have
>> > 2Tb SSDs.
>> >
>> > Does the cluster upgrade process take heal time in to account before
>> > attempting to place the next host in maintenance to upgrade it? Or
>> > is there something else that may be at fault here, or perhaps a
>> > reason why the heal process takes 10 minutes after reboot to
>> complete?
>> > ___
>> > Users mailing list -- users@ovirt.org 
>> > To unsubscribe send an email to users-le...@ovirt.org
>> > 
>> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> > oVirt Code of Conduct:
>> > https://www.ovirt.org/community/about/community-guidelines/
>> > List Archives:
>> >
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/5XM3QB3364ZYIPAKY4KTTOSJZMCWHUPD/
>> >
>> >
>> > ___
>> > Users mailing list -- users@ovirt.org
>> > To unsubscribe send an email to users-le...@ovirt.org
>> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> > oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> > List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/GBX3L23MWGMTF7Q4KGVR63RIQZFYXGWK/
>> >
>>
>> --
>> Systems Administrator
>> Kunsthochschule für Medien Köln
>> Peter-Welter-Platz 2
>> 50676 Köln
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OBAHFFFTDOI7LHAH5AVI5OPUQUQTABWM/
>>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/T27ROHWZPJL475HBHTFDGRBSYHJMWYDR/
>


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com
*Red Hat respects your work life balance.
Therefore there is no need to answer this email out of your office hours.
*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/H7Q2KPMZLCE6GQX2HILXG5FU2TFSJ3B2/


[ovirt-users] Re: ovirt-web-ui-1.5.3: immediate logout in VM portal

2019-08-08 Thread Matthias Leopold



Am 08.08.19 um 07:49 schrieb Scott Dickerson:



On Wed, Aug 7, 2019 at 11:06 AM Sharon Gratch > wrote:


Hi,
@Scott Dickerson ,  the session logout
issue for VM portal 1.5.3 was handled in the following PRs:
https://github.com/oVirt/ovirt-web-ui/pull/1014
https://github.com/oVirt/ovirt-web-ui/pull/1025

Any idea on what can be the problem?


That is very strange.  We saw a problem similar to that where, when 
web-ui is starting up, the time it took for the app to fetch the 
"UserSessionTimeOutInterval" config value was longer than the time it 
took to load the auto-logout component.  In that case the value was 
considered to be 0 and auto logged the user out right away.  That issue 
was dealt with in PR 1025 and the whole login data load process was 
synchronized properly in PR 1049.
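
For reference, that value can be inspected and changed on the engine with
engine-config (the value is in minutes, and the engine needs a restart to
pick up a change):

engine-config -g UserSessionTimeOutInterval
engine-config -s UserSessionTimeOutInterval=180
systemctl restart ovirt-engine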


I need some additional info:
   - The browser console logs from when the page loads to when they're 
logged out

   - the output of "yum info ovirt-web-ui"

I'll be able to better triage the problem with that info.



Thanks to all for the replies. I sent the requested info directly to Scott 
Dickerson.


Matthias
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GLWLBT6KZUEEC76RCIP3QTMJOTDN4MUK/


[ovirt-users] Re: [ANN] oVirt 4.3.6 Second Release Candidate is now available for testing

2019-08-08 Thread Sandro Bonazzola
Il giorno gio 8 ago 2019 alle ore 09:56 Maton, Brett <
mat...@ltresources.co.uk> ha scritto:

> I just tried to update my 4.3.6 testlab and got the following RPM dependency
> issue:
>
> rpm -qa ovirt-release*
> ovirt-release43-pre-4.3.6-0.1.rc1.el7.noarch
>
> Error encountered:
>
> yum upgrade
> ...
> Error: Package: vdsm-4.30.26-1.el7.x86_64 (ovirt-4.3-pre)
>Requires: sanlock-python >= 3.7.3
>Installed: sanlock-python-3.6.0-1.el7.x86_64 (@base)
>sanlock-python = 3.6.0-1.el7
> ...
>


yes, as mentioned in the release announcement, this requires RHEL / CentOS 7.7.
You can work around this by adding the
https://buildlogs.centos.org/centos/7/virt/x86_64/ovirt-4.3/ repo until
CentOS 7.7 is released
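
For example, with yum-utils installed this is a one-liner (the repo is
unsigned, so you may also need gpgcheck=0 for it, as shown elsewhere in this
thread):

yum-config-manager --add-repo https://buildlogs.centos.org/centos/7/virt/x86_64/ovirt-4.3/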



> Regards,
> Brett
>
> On Thu, 8 Aug 2019 at 07:53, Sandro Bonazzola  wrote:
>
>> The oVirt Project is pleased to announce the availability of the oVirt
>> 4.3.6 Second Release Candidate for testing, as of August 8th, 2019.
>>
>> This update is a release candidate of the sixth in a series of
>> stabilization updates to the 4.3 series.
>> This is pre-release software. This pre-release should not be used in
>> production.
>>
>> This release is available now on x86_64 architecture for:
>> * Red Hat Enterprise Linux 7.7 or later (but <8)
>> * CentOS Linux (or similar) 7.7 or later (but <8)
>>
>> This release supports Hypervisor Hosts on x86_64 and ppc64le
>> architectures for:
>> * Red Hat Enterprise Linux 7.7 or later (but <8)
>> * CentOS Linux (or similar) 7.7 or later (but <8)
>> * oVirt Node 4.3 (available for x86_64 only)
>>
>> See the release notes [1] for installation / upgrade instructions and a
>> list of new features and bugs fixed.
>>
>> Notes:
>> - oVirt Appliance is already available
>> - oVirt Node is not yet available, pending CentOS 7.7 release to be
>> available
>>
>> Additional Resources:
>> * Read more about the oVirt 4.3.6 release highlights:
>> http://www.ovirt.org/release/4.3.6/
>> * Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
>> * Check out the latest project news on the oVirt blog:
>> http://www.ovirt.org/blog/
>>
>> [1] http://www.ovirt.org/release/4.3.6/
>>
>> --
>>
>> Sandro Bonazzola
>>
>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>
>> Red Hat EMEA 
>>
>> sbona...@redhat.com
>> *Red Hat respects your work life balance.
>> Therefore there is no need to answer this email out of your office hours.*
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/6STDS7UGBADU2IR3VUIJP4KH4YIWH4HL/
>>
>

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com
*Red Hat respects your work life balance.
Therefore there is no need to answer this email out of your office hours.
*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5N4Y6ZUM53NBFIMYLLT2ESLDNWKLGKX7/


[ovirt-users] Re: [ANN] oVirt 4.3.6 Second Release Candidate is now available for testing

2019-08-08 Thread Maton, Brett
I just tried to update my 4.3.6 testlab and got the following RPM dependency
issue:

rpm -qa ovirt-release*
ovirt-release43-pre-4.3.6-0.1.rc1.el7.noarch

Error encountered:

yum upgrade
...
Error: Package: vdsm-4.30.26-1.el7.x86_64 (ovirt-4.3-pre)
   Requires: sanlock-python >= 3.7.3
   Installed: sanlock-python-3.6.0-1.el7.x86_64 (@base)
   sanlock-python = 3.6.0-1.el7
...

Regards,
Brett

On Thu, 8 Aug 2019 at 07:53, Sandro Bonazzola  wrote:

> The oVirt Project is pleased to announce the availability of the oVirt
> 4.3.6 Second Release Candidate for testing, as of August 8th, 2019.
>
> This update is a release candidate of the sixth in a series of
> stabilization updates to the 4.3 series.
> This is pre-release software. This pre-release should not be used in
> production.
>
> This release is available now on x86_64 architecture for:
> * Red Hat Enterprise Linux 7.7 or later (but <8)
> * CentOS Linux (or similar) 7.7 or later (but <8)
>
> This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
> for:
> * Red Hat Enterprise Linux 7.7 or later (but <8)
> * CentOS Linux (or similar) 7.7 or later (but <8)
> * oVirt Node 4.3 (available for x86_64 only)
>
> See the release notes [1] for installation / upgrade instructions and a
> list of new features and bugs fixed.
>
> Notes:
> - oVirt Appliance is already available
> - oVirt Node is not yet available, pending CentOS 7.7 release to be
> available
>
> Additional Resources:
> * Read more about the oVirt 4.3.6 release highlights:
> http://www.ovirt.org/release/4.3.6/
> * Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
> * Check out the latest project news on the oVirt blog:
> http://www.ovirt.org/blog/
>
> [1] http://www.ovirt.org/release/4.3.6/
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
> *Red Hat respects your work life balance.
> Therefore there is no need to answer this email out of your office hours.*
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/6STDS7UGBADU2IR3VUIJP4KH4YIWH4HL/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GTBIIECFS4JHK5YHRHM4R6NJPB36LGKB/


[ovirt-users] Re: oVirt 4.3.5 potential issue with NFS storage

2019-08-08 Thread Shani Leviim
Hi,
Can you please clarify the flow you're doing?
Also, can you please attach full vdsm and engine logs?
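
If it is easier, ovirt-log-collector (run on the engine machine) can gather
everything in one go:

ovirt-log-collector collect

Otherwise, /var/log/ovirt-engine/engine.log from the engine and
/var/log/vdsm/vdsm.log from the hosts are the interesting ones.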


*Regards,*

*Shani Leviim*


On Thu, Aug 8, 2019 at 6:25 AM Vrgotic, Marko 
wrote:

> Log line form VDSM:
>
>
>
> “[root@ovirt-sj-05 ~]# tail -f /var/log/vdsm/vdsm.log | grep WARN
>
> 2019-08-07 09:40:03,556-0700 WARN  (check/loop) [storage.check] Checker
> u'/rhev/data-center/mnt/10.210.13.64:_ovirt__production/bda97276-a399-448f-9113-017972f6b55a/dom_md/metadata'
> is blocked for 20.00 seconds (check:282)
>
> 2019-08-07 09:40:47,132-0700 WARN  (monitor/bda9727) [storage.Monitor]
> Host id for domain bda97276-a399-448f-9113-017972f6b55a was released (id:
> 5) (monitor:445)
>
> 2019-08-07 09:44:53,564-0700 WARN  (check/loop) [storage.check] Checker
> u'/rhev/data-center/mnt/10.210.13.64:_ovirt__production/bda97276-a399-448f-9113-017972f6b55a/dom_md/metadata'
> is blocked for 20.00 seconds (check:282)
>
> 2019-08-07 09:46:38,604-0700 WARN  (monitor/bda9727) [storage.Monitor]
> Host id for domain bda97276-a399-448f-9113-017972f6b55a was released (id:
> 5) (monitor:445)”
>
>
>
>
>
>
>
> *From: *"Vrgotic, Marko" 
> *Date: *Wednesday, 7 August 2019 at 09:09
> *To: *"users@ovirt.org" 
> *Subject: *oVirt 4.3.5 potential issue with NFS storage
>
>
>
> Dear oVirt,
>
>
>
> This is my third oVirt platform in the company, but it is the first time I
> am seeing the following logs:
>
>
>
> “2019-08-07 16:00:16,099Z INFO
> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-51) [1b85e637] Lock freed
> to object
> 'EngineLock:{exclusiveLocks='[2350ee82-94ed-4f90-9366-451e0104d1d6=PROVIDER]',
> sharedLocks=''}'
>
> 2019-08-07 16:00:25,618Z WARN
> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy]
> (EE-ManagedThreadFactory-engine-Thread-37723) [] domain
> 'bda97276-a399-448f-9113-017972f6b55a:ovirt_production' in problem
> 'PROBLEMATIC'. vds: 'ovirt-sj-05.ictv.com'
>
> 2019-08-07 16:00:40,630Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy]
> (EE-ManagedThreadFactory-engine-Thread-37735) [] Domain
> 'bda97276-a399-448f-9113-017972f6b55a:ovirt_production' recovered from
> problem. vds: 'ovirt-sj-05.ictv.com'
>
> 2019-08-07 16:00:40,652Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy]
> (EE-ManagedThreadFactory-engine-Thread-37737) [] Domain
> 'bda97276-a399-448f-9113-017972f6b55a:ovirt_production' recovered from
> problem. vds: 'ovirt-sj-01.ictv.com'
>
> 2019-08-07 16:00:40,652Z INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy]
> (EE-ManagedThreadFactory-engine-Thread-37737) [] Domain
> 'bda97276-a399-448f-9113-017972f6b55a:ovirt_production' has recovered from
> problem. No active host in the DC is reporting it as problematic, so
> clearing the domain recovery timer.”
>
>
>
> Can you help me understand why this is being reported?
>
>
>
> This setup is:
>
>
>
> 5 hosts, 3 in HA
>
> SelfHostedEngine
>
> Version 4.3.5
>
> NFS-based NetApp storage, NFS version 4.1
>
> “10.210.13.64:/ovirt_hosted_engine on 
> /rhev/data-center/mnt/10.210.13.64:_ovirt__hosted__engine
> type nfs4
> (rw,relatime,vers=4.1,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=10.210.11.14,local_lock=none,addr=10.210.13.64)
>
>
>
> 10.210.13.64:/ovirt_production on 
> /rhev/data-center/mnt/10.210.13.64:_ovirt__production
> type nfs4
> (rw,relatime,vers=4.1,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=10.210.11.14,local_lock=none,addr=10.210.13.64)
>
> tmpfs on /run/user/0 type tmpfs
> (rw,nosuid,nodev,relatime,seclabel,size=9878396k,mode=700)”
>
>
>
> First mount is the SHE (self-hosted engine) dedicated storage.
>
> Second mount “ovirt_production” is for other VM guests.
>
>
>
> Kindly awaiting your reply.
>
>
>
> Marko Vrgotic
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ICRKHD3GXTPQEZN2T6LJBS6YIVLER6TP/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4VNZFLBYJUS3LM3JHGRIE7BFHWJ47DLF/


[ovirt-users] [ANN] oVirt 4.3.6 Second Release Candidate is now available for testing

2019-08-08 Thread Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the oVirt
4.3.6 Second Release Candidate for testing, as of August 8th, 2019.

This update is a release candidate of the sixth in a series of
stabilization updates to the 4.3 series.
This is pre-release software. This pre-release should not be used in
production.

This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.7 or later (but <8)
* CentOS Linux (or similar) 7.7 or later (but <8)

This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.7 or later (but <8)
* CentOS Linux (or similar) 7.7 or later (but <8)
* oVirt Node 4.3 (available for x86_64 only)

See the release notes [1] for installation / upgrade instructions and a
list of new features and bugs fixed.

Notes:
- oVirt Appliance is already available
- oVirt Node is not yet available, pending CentOS 7.7 release to be
available

Additional Resources:
* Read more about the oVirt 4.3.6 release highlights:
http://www.ovirt.org/release/4.3.6/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/

[1] http://www.ovirt.org/release/4.3.6/

-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com
*Red Hat respects your work life balance.
Therefore there is no need to answer this email out of your office hours.*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6STDS7UGBADU2IR3VUIJP4KH4YIWH4HL/