[ovirt-users] Re: VM --- is not responding.

2019-08-26 Thread Edoardo Mazza
...I checked the logs: the storage domain is GlusterFS, but I didn't find any
error in Gluster's logs at the time the VM became unresponsive. It would be
very strange if the problem were in the storage controller. Could it be that
the VM is simply under heavy load?
Thanks,
Edoardo

On Wed, Aug 14, 2019 at 18:47, Strahil wrote:

> Hm... it was supposed to show the controller status.
> Maybe the hpssacli version you have does not support your RAID cards. Check
> for a newer version on HPE's support page.
>
> Best Regards,
> Strahil Nikolov
>
> On Aug 14, 2019 11:40, Edoardo Mazza  wrote:
>
> I installed hpssacli-2.40-13.0.x86_64.rpm and the result of "hpssacli ctrl
> all show status" is:
> Error: No controllers detected. Possible causes:.
> The OS runs on SD cards and the VMs run on an array of traditional disks.
> Thanks,
> Edoardo
> [older quoted messages trimmed]
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KVWJYZROBHCMKNVBZ2FZ75D3CV735MVY/
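For what it's worth, when a Gluster-backed VM goes unresponsive the FUSE client log on the hypervisor is usually more telling than the brick logs. A minimal sketch of that kind of scan; the sample log lines below are fabricated for illustration (the real file lives under /var/log/glusterfs/ on the host, named after the mount point), though the "has not responded ... disconnecting" ping-timeout message is the wording GlusterFS actually uses:

```shell
#!/bin/sh
# Sketch: scan a GlusterFS FUSE client log for the events that typically
# precede a VM being marked "not responding": ping timeouts, brick
# disconnects, failed remote operations. The here-doc sample is fabricated;
# point LOG at e.g. /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*.log
# on a real hypervisor.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
[2019-08-08 00:51:02.1] I [MSGID: 114046] ... Connected to data-client-0
[2019-08-08 00:51:10.4] C [rpc-clnt-ping.c] ... server 10.0.0.1:49152 has not responded in the last 42 seconds, disconnecting.
[2019-08-08 00:51:10.5] E [MSGID: 114031] ... remote operation failed [Transport endpoint is not connected]
EOF
# Keep only the lines that indicate a stalled or dropped brick connection.
matches=$(grep -E 'has not responded|disconnect|Transport endpoint' "$LOG")
printf '%s\n' "$matches"
rm -f "$LOG"
```

If lines like these show up at the timestamps when the VM stalls, the problem is between the host and the Gluster bricks, not inside the guest.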


[ovirt-users] Re: VM --- is not responding.

2019-08-14 Thread Strahil
Hm... it was supposed to show the controller status.
Maybe the hpssacli version you have does not support your RAID cards. Check for
a newer version on HPE's support page.

Best Regards,
Strahil Nikolov

On Aug 14, 2019 11:40, Edoardo Mazza wrote:
> [quoted thread trimmed]
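A side note on the tooling: HPE renamed hpssacli to ssacli in later Smart Storage Administrator releases, and an old hpssacli binary reporting "No controllers detected" on Gen10 P408i/P816i controllers is often just that version mismatch. A hedged sketch that simply tries both names (nothing here is taken from the thread's hosts):

```shell
#!/bin/sh
# Sketch: run "ctrl all show status" with whichever HPE Smart Storage CLI
# is installed. ssacli is the newer name for hpssacli; trying ssacli first
# sidesteps the "No controllers detected" failure mode of stale hpssacli
# builds on Gen10 hardware.
check_ctrl() {
    for cli in ssacli hpssacli; do
        if command -v "$cli" >/dev/null 2>&1; then
            echo "using $cli"
            "$cli" ctrl all show status
            return 0
        fi
    done
    echo "neither ssacli nor hpssacli is installed"
    return 1
}
check_ctrl || echo "install the Smart Storage Administrator CLI from HPE's support page"
```

On hosts where the OS boots from SD cards but the data array hangs off the Smart Array, the controller is still visible to the CLI; only the right tool version is needed.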


[ovirt-users] Re: VM --- is not responding.

2019-08-14 Thread Edoardo Mazza
I installed hpssacli-2.40-13.0.x86_64.rpm and the result of "hpssacli ctrl
all show status" is:
Error: No controllers detected. Possible causes:.
The OS runs on SD cards and the VMs run on an array of traditional disks.
Thanks,
Edoardo

On Mon, Aug 12, 2019 at 05:59, Strahil wrote:

> [quoted thread trimmed]


[ovirt-users] Re: VM --- is not responding.

2019-08-11 Thread Strahil
Would you check the health status of the controllers:
hpssacli ctrl all show status

Best Regards,
Strahil Nikolov

On Aug 11, 2019 09:55, Edoardo Mazza wrote:
> [quoted thread trimmed]


[ovirt-users] Re: VM --- is not responding.

2019-08-10 Thread Edoardo Mazza
The hosts are 3 ProLiant DL380 Gen10: two of them with an HPE Smart Array
P816i-a SR Gen10 controller and the third with an HPE Smart Array P408i-a SR
Gen10. The storage for the oVirt environment is Gluster, and that last host
is the arbiter in the Gluster environment.
The S.M.A.R.T. health status is OK on all hosts.
Edoardo





On Thu, Aug 8, 2019 at 16:19, Sandro Bonazzola <sbona...@redhat.com> wrote:

> [quoted thread trimmed]
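On actually checking S.M.A.R.T. behind a Smart Array: smartctl cannot usually see physical disks through the RAID controller directly; smartmontools addresses them with the `-d cciss,N` device type. A sketch assuming smartmontools is installed and run as root; the device names are generic examples, not taken from this thread:

```shell
#!/bin/sh
# Sketch: report S.M.A.R.T. health for local disks, including ones hidden
# behind an HPE Smart Array controller. Requires root and smartmontools.
smart_health() {
    if ! command -v smartctl >/dev/null 2>&1; then
        echo "smartctl not installed (package: smartmontools)"
        return 1
    fi
    # Plain SATA/SAS disks exposed directly to the OS:
    for dev in /dev/sd?; do
        [ -e "$dev" ] && smartctl -H "$dev" | grep -i 'health\|result'
    done
    # Disks behind a Smart Array are addressed by bay number via the
    # controller node, e.g.:
    #   smartctl -H -d cciss,0 /dev/sg0
    return 0
}
smart_health || true
```

A clean overall controller status can still hide a single failing member disk, which is why per-disk health is worth checking even after "status OK".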

[ovirt-users] Re: VM --- is not responding.

2019-08-08 Thread Sandro Bonazzola
On Thu, Aug 8, 2019 at 11:19, Edoardo Mazza wrote:

> Hi all,
> For several days now I have been getting this error for the same VM, but I
> don't understand why.
> The VM's traffic is not excessive, and neither are CPU and RAM, but for a
> few minutes the VM is not responding, and in the messages log file of the
> VM I get the error below. Can you help me?
> Thanks
>

can you check the S.M.A.R.T. health status of the disks?



> Edoardo
> kernel: NMI watchdog: BUG: soft lockup - CPU#2 stuck for 25s!
> [kworker/2:0:26227]
> Aug  8 02:51:11 vmmysql kernel: Modules linked in: binfmt_misc ip6t_rpfilter ipt_REJECT nf_reject_ipv4 ip6t_REJECT nf_reject_ipv6 xt_conntrack ip_set nfnetlink ebtable_nat ebtable_broute bridge stp llc ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_mangle ip6table_security ip6table_raw iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack iptable_mangle iptable_security iptable_raw ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter snd_hda_codec_generic iosf_mbi crc32_pclmul ppdev ghash_clmulni_intel snd_hda_intel snd_hda_codec aesni_intel snd_hda_core lrw gf128mul glue_helper ablk_helper snd_hwdep cryptd snd_seq snd_seq_device snd_pcm snd_timer snd soundcore virtio_rng sg virtio_balloon i2c_piix4 parport_pc parport joydev pcspkr ip_tables xfs libcrc32c sd_mod
> Aug  8 02:51:14 vmmysql kernel: crc_t10dif crct10dif_generic sr_mod cdrom virtio_net virtio_console virtio_scsi ata_generic pata_acpi crct10dif_pclmul crct10dif_common crc32c_intel serio_raw qxl floppy drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm ata_piix libata virtio_pci drm_panel_orientation_quirks virtio_ring virtio dm_mirror dm_region_hash dm_log dm_mod
> Aug  8 02:51:14 vmmysql kernel: CPU: 2 PID: 26227 Comm: kworker/2:0 Kdump:
> loaded Tainted: G L    3.10.0-957.12.1.el7.x86_64 #1
> Aug  8 02:51:14 vmmysql kernel: Hardware name: oVirt oVirt Node, BIOS
> 1.11.0-2.el7 04/01/2014
> Aug  8 02:51:14 vmmysql kernel: Workqueue: events_freezable
> disk_events_workfn
> Aug  8 02:51:14 vmmysql kernel: task: 9e25b6609040 ti:
> 9e27b161 task.ti: 9e27b161
> Aug  8 02:51:14 vmmysql kernel: RIP: 0010:[]
>  [] _raw_spin_unlock_irqrestore+0x15/0x20
> Aug  8 02:51:14 vmmysql kernel: RSP: :9e27b1613a68  EFLAGS:
> 0286
> Aug  8 02:51:14 vmmysql kernel: RAX: 0001 RBX:
> 9e27b1613a10 RCX: 9e27b72a3d05
> Aug  8 02:51:14 vmmysql kernel: RDX: 9e27b729a420 RSI:
> 0286 RDI: 0286
> Aug  8 02:51:14 vmmysql kernel: RBP: 9e27b1613a68 R08:
> 0001 R09: 9e25b67fc198
> Aug  8 02:51:14 vmmysql kernel: R10: 9e27b45bd8d8 R11:
>  R12: 9e25b67fde80
> Aug  8 02:51:14 vmmysql kernel: R13: 9e25b67fc000 R14:
> 9e25b67fc158 R15: c032f8e0
> Aug  8 02:51:14 vmmysql kernel: FS:  ()
> GS:9e27b728() knlGS:
> Aug  8 02:51:14 vmmysql kernel: CS:  0010 DS:  ES:  CR0:
> 80050033
> Aug  8 02:51:14 vmmysql kernel: CR2: 7f0c9e9b6008 CR3:
> 00023248 CR4: 003606e0
> Aug  8 02:51:14 vmmysql kernel: DR0:  DR1:
>  DR2: 
> Aug  8 02:51:14 vmmysql kernel: DR3:  DR6:
> fffe0ff0 DR7: 0400
> Aug  8 02:51:14 vmmysql kernel: Call Trace:
> Aug  8 02:51:14 vmmysql kernel: []
> ata_scsi_queuecmd+0x155/0x450 [libata]
> Aug  8 02:51:14 vmmysql kernel: [] ?
> ata_scsiop_inq_std+0xf0/0xf0 [libata]
> Aug  8 02:51:14 vmmysql kernel: []
> scsi_dispatch_cmd+0xb0/0x240
> Aug  8 02:51:14 vmmysql kernel: []
> scsi_request_fn+0x4cc/0x680
> Aug  8 02:51:14 vmmysql kernel: []
> __blk_run_queue+0x39/0x50
> Aug  8 02:51:14 vmmysql kernel: []
> blk_execute_rq_nowait+0xb5/0x170
> Aug  8 02:51:14 vmmysql kernel: []
> blk_execute_rq+0x8b/0x150
> Aug  8 02:51:14 vmmysql kernel: [] ?
> bio_phys_segments+0x19/0x20
> Aug  8 02:51:14 vmmysql kernel: [] ?
> blk_rq_bio_prep+0x31/0xb0
> Aug  8 02:51:14 vmmysql kernel: [] ?
> blk_rq_map_kern+0xc7/0x180
> Aug  8 02:51:14 vmmysql kernel: []
> scsi_execute+0xd3/0x170
> Aug  8 02:51:14 vmmysql kernel: []
> scsi_execute_req_flags+0x8e/0x100
> Aug  8 02:51:14 vmmysql kernel: []
> sr_check_events+0xbc/0x2d0 [sr_mod]
> Aug  8 02:51:14 vmmysql kernel: []
> cdrom_check_events+0x1e/0x40 [cdrom]
> Aug  8 02:51:14 vmmysql kernel: []
> sr_block_check_events+0xb1/0x120 [sr_mod]
> Aug  8 02:51:14 vmmysql kernel: []
> disk_check_events+0x66/0x190
> Aug  8 02:51:14 vmmysql kernel: []
> disk_events_workfn+0x16/0x20
> Aug  8 02:51:14 vmmysql kernel: []
> process_one_work+0x17f/0x440
> Aug  8 02:51:14 vmmysql kernel: []
> worker_thread+0x126/0x3c0
> Aug  8 02:51:14 vmmysql kernel: [] ?
> manage_workers.isra.25+0x2a0/0x2a0
> Aug  8 02:51:14 vmmysql kernel: [] kthread+0xd1/0xe0
> Aug  8 02:51:14 vmmysql kernel: [] ?
> insert_kthread_work+0
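A note on reading the trace above: the stuck task is the kernel's disk-event polling worker (disk_events_workfn, here polling the emulated CD-ROM via sr_check_events), so the soft lockup more likely reflects host-side storage latency or vCPU starvation than load inside the guest. A small sketch for pulling the soft-lockup events out of the guest's /var/log/messages; the here-doc reuses two lines quoted above, and LOG would point at the real file on the VM:

```shell
#!/bin/sh
# Sketch: extract soft-lockup events (which CPU, how long, which task)
# from a guest's kernel log, to correlate their timestamps with
# storage-side events on the hypervisor.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
Aug  8 02:51:11 vmmysql kernel: NMI watchdog: BUG: soft lockup - CPU#2 stuck for 25s! [kworker/2:0:26227]
Aug  8 02:51:14 vmmysql kernel: Workqueue: events_freezable disk_events_workfn
EOF
# -o prints only the matched fragment: CPU number, stall length, task name.
lockups=$(grep -o 'soft lockup - CPU#[0-9]* stuck for [0-9]*s! \[[^]]*\]' "$LOG")
printf '%s\n' "$lockups"
rm -f "$LOG"
```

If every lockup lands in kworker threads doing disk polling, as here, cross-checking the Gluster client logs on the host for the same minute is the next step.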