[ovirt-users] virtio-net name enp1s0 and not expected ens3

2022-07-09 Thread Ladislav Humenik
Hello, I'm looking for a hint on where one can change the guest network
interface name (not with udev rules); is this set somewhere in the oVirt/libvirt code?

We have one oVirt engine installed from scratch (4.4.10.4-1.el8) where all
guests get an enp*s* network name; all the other engines (around 13, recently
upgraded from 4.3.10 to 4.4.10) get an ens* network name.

The Red Hat KB article https://access.redhat.com/solutions/3709641 explains
how the naming of virtio-net works.

What is also different: ID_NET_NAME_SLOT is missing from the udevadm output:
~]$ udevadm info /sys/class/net/enp1s0/
P: /devices/pci0000:00/0000:00:02.0/0000:01:00.0/virtio0/net/enp1s0
E: DEVPATH=/devices/pci0000:00/0000:00:02.0/0000:01:00.0/virtio0/net/enp1s0
E: ID_BUS=pci
E: ID_MODEL_FROM_DATABASE=Virtio network device
E: ID_MODEL_ID=0x1041
E: ID_NET_DRIVER=virtio_net
E: ID_NET_LINK_FILE=/usr/lib/systemd/network/99-default.link
E: ID_NET_NAME=enp1s0
E: ID_NET_NAME_MAC=enx001a4a08000e
E: ID_NET_NAME_PATH=enp1s0
E: ID_NET_NAMING_SCHEME=rhel-8.0
E: ID_OUI_FROM_DATABASE=Qumranet Inc.
E: ID_PATH=pci-0000:01:00.0
E: ID_PATH_TAG=pci-0000_01_00_0
E: ID_PCI_CLASS_FROM_DATABASE=Network controller
E: ID_PCI_SUBCLASS_FROM_DATABASE=Ethernet controller
E: ID_VENDOR_FROM_DATABASE=Red Hat, Inc.
E: ID_VENDOR_ID=0x1af4
E: IFINDEX=2
E: INTERFACE=enp1s0
E: SUBSYSTEM=net
E: SYSTEMD_ALIAS=/sys/subsystem/net/devices/enp1s0
E: TAGS=:systemd:
E: USEC_INITIALIZED=3826556

The expected udevadm output would be:
~]$ udevadm info /sys/class/net/ens3
P: /devices/pci0000:00/0000:00:03.0/virtio0/net/ens3
E: DEVPATH=/devices/pci0000:00/0000:00:03.0/virtio0/net/ens3
E: ID_BUS=pci
E: ID_MODEL_FROM_DATABASE=Virtio network device
E: ID_MODEL_ID=0x1000
E: ID_NET_DRIVER=virtio_net
E: ID_NET_LINK_FILE=/usr/lib/systemd/network/99-default.link
E: ID_NET_NAME=ens3
E: ID_NET_NAME_MAC=enx001a4a070167
E: ID_NET_NAME_PATH=enp0s3
E: ID_NET_NAME_SLOT=ens3
E: ID_NET_NAMING_SCHEME=rhel-8.0
E: ID_OUI_FROM_DATABASE=Qumranet Inc.
E: ID_PATH=pci-0000:00:03.0
E: ID_PATH_TAG=pci-0000_00_03_0
E: ID_PCI_CLASS_FROM_DATABASE=Network controller
E: ID_PCI_SUBCLASS_FROM_DATABASE=Ethernet controller
E: ID_VENDOR_FROM_DATABASE=Red Hat, Inc.
E: ID_VENDOR_ID=0x1af4
E: IFINDEX=2
E: INTERFACE=ens3
E: SUBSYSTEM=net
E: SYSTEMD_ALIAS=/sys/subsystem/net/devices/ens3
E: TAGS=:systemd:
E: USEC_INITIALIZED=3487875

Tested with different guest OSes such as CentOS and Debian; the output is always the same.
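
A useful check here is where the virtio NIC sits in the guest's PCI topology:
a NIC in a hot-pluggable slot directly on the root bus gets ID_NET_NAME_SLOT
and a slot-based ens* name, while a NIC behind a PCIe root port (as at 01:00.0
above) only gets the path-based enp*s* name. A minimal sketch to verify this
inside the guest (interface name assumed):

~]$ lspci -tv    # does the NIC sit on bus 00, or hang off a root port?
~]$ sudo udevadm test-builtin net_id /sys/class/net/enp1s0 2>/dev/null | grep ID_NET_NAME
# if no ID_NET_NAME_SLOT line appears, systemd falls back to the path-based name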
--
Ladislav Humenik
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RXC6B7YBJABFQBNVSQMNANCOB7KPJGWL/




[ovirt-users] Re: oVirt and Netapp question

2019-11-10 Thread Ladislav Humenik
Hi, basically the aggregate free space is what is relevant here, since, as you
mentioned, you are using thin volumes, right?
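
To check that on the filer, a minimal ONTAP cluster-shell sketch (field names
as in ONTAP 9, adjust to your version):

::> storage aggregate show -fields availsize,usedsize,percent-used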


On 08.11.19 07:18, Vrgotic, Marko wrote:

Hi Ladislav,

Not sure if you saw my reply, so I'm giving it another shot.

I just want to be clear on my understanding of what you wrote in your
recommendation.


Please check my previous reply.

Thank you.

Sent from my iPhone

On 6 Nov 2019, at 11:52, Vrgotic, Marko wrote:




Hi Ladislav,

Thank you for the reply.

On the matter of your Recommendation, when you mention free space, 
were you referring to:


  * Space that is free/unallocated on the Aggregate, allowing the AutoGrow
  * Or all that is allocated to the volume but not actual Data Space

Kindly awaiting your reply.

-

kind regards/met vrindelijke groet

Marko Vrgotic
ActiveVideo

From: Ladislav Humenik
Date: Tuesday, 5 November 2019 at 20:45
To: "Vrgotic, Marko"
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Re: oVirt and Netapp question

Hi,

for the NetApp part, here is a copy-paste from the NetApp KB:

Answer

The volume "Over Provisioned Space" value provided in OnCommand System Manager
(OCSM) is the "Over Provisioned Size" field provided by the volume show command
(https://docs.netapp.com/ontap-9/topic/com.netapp.doc.dot-cm-cmpr-930/volume__show.html)
at the cluster shell.


This value is determined by the following formula:

(volume size) - (volume used) - (volume space available) - (snapshot used 
space) == storage that can't be provided by the aggregate if written to the 
volume
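
A worked example with purely hypothetical numbers (TB): 7.0 (volume size) -
1.5 (volume used) - 0.4 (volume space available) - 0.1 (snapshot used space)
= 5.0 TB that would have to come from the aggregate but is not guaranteed to
be there.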

for the ovirt part:

oVirt will see the volume size available within the thin volume, without
any knowledge of the aggregate space (available or not) behind it.


Recommendation:

on the NetApp side enable volume auto-grow, and you are safe as long as you
have free space inside the aggregate.
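
A minimal cluster-shell sketch of that setting (SVM, volume name and limit are
placeholders):

::> volume autosize -vserver svm1 -volume ovirt_vol -mode grow -maximum-size 10TB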


HTH

On 05.11.19 14:54, Vrgotic, Marko wrote:

Second attempt 😊

From: "Vrgotic, Marko"
Date: Monday, 4 November 2019 at 14:01
To: "users@ovirt.org"
Subject: oVirt and Netapp question

Dear oVirt,

A few months ago our production oVirt environment went live, with its
main shared storage on NetApp NFS.

We have been deploying VMs from a thin-provisioned 40 GB HDD template,
CentOS 7.

The NetApp NFS v4 storage volume is 7 TB in size, also thin
provisioned.

The first attached screenshot shows the space allocation of the
production volume from the NetApp side.

Is there anyone in the oVirt community who could tell me the meaning
of the 5.31 TB Over Provisioned Space?



The second attachment shows the info for the production volume from
the oVirt side:



What I want to understand is how oVirt reads the volume usage versus
how NetApp does, and where the difference comes from.

Is the Over Allocated Space something that is just logically
used/reserved and will be intelligently re-allocated/re-used as
the actual Data Space Used grows, or am I looking at oVirt
actually hitting the Critical Space Action Blocker and having to
resize the volume?

If there is anyone from NetApp, or with good NetApp experience,
who can help me understand the data above better, thank you in
advance.

Kindly awaiting your reply.

-

kind regards/met vrindelijke groet

Marko Vrgotic
ActiveVideo



___

Users mailing list -- users@ovirt.org

To unsubscribe send an email to users-le...@ovirt.org

Privacy Statement: https://www.ovirt.org/site/privacy-policy/

oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/

List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/BVGZMJHNNM3LO3OIA65KEY64Z5BIHZ7R/


--
Ladislav Humenik

System administrator / VI
IT Operations Hosting Infrastructure

1&1 IONOS SE | Ernst-Frey-Str. 5 | 76135 Karlsruhe | Germany
Phone: +49 721 91374-8361
E-mail: ladislav.hume...@ionos.com | Web: www.ionos.de

Hauptsitz Montabaur, Amtsgericht Montabaur, HRB 24498

Vorstand: Dr. Christian Böing, Hüseyin Dogan, Hans-Henning Kettler, Matthias 
Steinberg, Achim Weiß
Aufsichtsratsvorsitzender: Markus Kadelke


Member of United Internet


[ovirt-users] CVE-2018-12130

2019-05-15 Thread Ladislav Humenik

Hello,

Red Hat yesterday published a new flaw in CPU micro-architecture,
and I already saw that vdsm needs an update.


Will you backport the update for the oVirt community as well, at least for
the latest 4.2.x version?



links to CVE:
https://access.redhat.com/security/cve/cve-2018-12130
https://access.redhat.com/security/vulnerabilities/mds

--
Kind regards
Ladislav Humenik
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Q7J5PNXIQWAP2B7N73HPW6XGX2XCSRIZ/


[ovirt-users] Re: oVirt and NetApp NFS storage

2019-04-18 Thread Ladislav Humenik
Hi, nope, no storage leases and no fencing at all (because of VLAN
separation between mgmt and RAC).


We have our own HA fencing mechanism in place, which triggers an action
over the API when an alarm is raised in monitoring.


HTH

On 18.04.19 13:12, klaasdem...@gmail.com wrote:

Hi,
are you using ovirt storage leases? You'll need them if you want to
handle a completely unresponsive hypervisor (including fencing
actions) in an HA setting. Those storage leases use sanlock. If you use
sanlock, a VM gets killed if the lease cannot be renewed within a very
short timeframe (60 seconds). That is what is killing the VMs during
takeover. Before storage leases it seems to have worked because it
would simply wait long enough for NFS to finish.


Greetings
Klaas

On 18.04.19 12:47, Ladislav Humenik wrote:
Hi, we have NetApp NFS with oVirt in production and have never
experienced an outage during takeover/giveback.
- the default oVirt mount options should also handle a short NFS
timeout
(rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,soft,nolock,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys)
- but to tune it up a little you should set the disk timeout inside
your guest VMs to at least 180, and then you are safe


example:
cat << EOF >> /etc/rc.d/rc.local
# Increasing the timeout value
for i in /sys/class/scsi_generic/*/device/timeout; do echo 180 > "\$i"; done
EOF



KR

On 18.04.19 10:45, klaasdem...@gmail.com wrote:

Hi,

I got a question regarding oVirt and its support of NetApp NFS
storage. We have a MetroCluster for our virtual machine disks, but an
HA failover of it (the active IP gets assigned to another node) seems
to produce outages too long for sanlock to handle - that affects all
VMs that have storage leases. NetApp says the "worst case" takeover
time is 120 seconds. That would mean sanlock has already killed all
VMs. Is anyone familiar with how we could set up oVirt to tolerate such
storage outages? Do I need to use another type of storage for my
oVirt VMs because that NFS implementation is unsuitable for oVirt?



Greetings

Klaas
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TSJJKK5UG57CCFYUUCXXM3LYQJW2ODWZ/

--
Ladislav Humenik

System administrator / VI




--
Ladislav Humenik

System administrator / VI
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7XBZSHLGDYWIYADMKHFFMWYTXNFHKHWM/


[ovirt-users] Re: oVirt and NetApp NFS storage

2019-04-18 Thread Ladislav Humenik
Hi, we have NetApp NFS with oVirt in production and have never experienced
an outage during takeover/giveback.
- the default oVirt mount options should also handle a short NFS timeout
(rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,soft,nolock,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys)
- but to tune it up a little you should set the disk timeout inside your
guest VMs to at least 180, and then you are safe


example:
cat << EOF >> /etc/rc.d/rc.local
# Increasing the timeout value
for i in /sys/class/scsi_generic/*/device/timeout; do echo 180 > "\$i"; done
EOF
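
An alternative sketch that also covers disks added after boot, assuming your
guest's udev accepts attribute-setting rules (rule file name is arbitrary):

cat << 'EOF' > /etc/udev/rules.d/99-scsi-timeout.rules
ACTION=="add", SUBSYSTEM=="scsi", ATTR{type}=="0", ATTR{timeout}="180"
EOF
udevadm control --reload && udevadm trigger --subsystem-match=scsi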



KR

On 18.04.19 10:45, klaasdem...@gmail.com wrote:

Hi,

I got a question regarding oVirt and its support of NetApp NFS
storage. We have a MetroCluster for our virtual machine disks, but an
HA failover of it (the active IP gets assigned to another node) seems to
produce outages too long for sanlock to handle - that affects all VMs
that have storage leases. NetApp says the "worst case" takeover time is
120 seconds. That would mean sanlock has already killed all VMs. Is
anyone familiar with how we could set up oVirt to tolerate such storage
outages? Do I need to use another type of storage for my oVirt VMs
because that NFS implementation is unsuitable for oVirt?



Greetings

Klaas
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TSJJKK5UG57CCFYUUCXXM3LYQJW2ODWZ/


--
Ladislav Humenik

System administrator / VI

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6TLJ4KH4P5DH2RZFZUUKUYCY6SFQJHSN/


[ovirt-users] Re: Live storage migration is failing in 4.2.8

2019-04-15 Thread Ladislav Humenik

I guess from the libvirt-latest repository
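
A quick way to verify where the installed package came from (a sketch; yum
output format varies by version):

~]# yum info installed libvirt-daemon | grep -E '^(Version|From repo)'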

On 12.04.19 16:09, Nir Soffer wrote:



On Fri, Apr 12, 2019, 12:07 Ladislav Humenik wrote:


Hello, we have recently updated a few oVirt setups from 4.2.5 to 4.2.8
(actually 9 oVirt engine nodes), where live storage migration
stopped working and leaves an auto-generated snapshot behind.

If we power the guest VM down, the migration works as expected. Is there
a known bug for this? Shall we open a new one?

Setup:
ovirt - Dell PowerEdge R630
     - CentOS Linux release 7.6.1810 (Core)
     - ovirt-engine-4.2.8.2-1.el7.noarch
     - kernel-3.10.0-957.10.1.el7.x86_64
hypervisors    - Dell PowerEdge R640
     - CentOS Linux release 7.6.1810 (Core)
     - kernel-3.10.0-957.10.1.el7.x86_64
     - vdsm-4.20.46-1.el7.x86_64
     - libvirt-5.0.0-1.el7.x86_64


This is a known issue in libvirt < 5.2.

How did you get this version on CentOS 7.6?

On my CentOS 7.6 I have libvirt 4.5, which is not affected by this issue.

Nir

     - qemu-kvm-ev-2.12.0-18.el7_6.3.1.x86_64
storage domain  - netapp NFS share


logs are attached

    -- 
    Ladislav Humenik


System administrator

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:

https://lists.ovirt.org/archives/list/users@ovirt.org/message/VSKUEPUOPJDSRWYYMZEKAVTZ62YP6UK2/


--
Ladislav Humenik

System administrator / VI
IT Operations Hosting Infrastructure

1&1 IONOS SE | Ernst-Frey-Str. 5 | 76135 Karlsruhe | Germany
Phone:  +49 721 91374-8361
Mobile: +49 152 2929-6349
E-Mail: ladislav.hume...@1und1.de | Web: www.1und1.de

Hauptsitz Montabaur, Amtsgericht Montabaur, HRB 24498

Vorstand: Dr. Christian Böing, Hüseyin Dogan, Hans-Henning Kettler, Matthias 
Steinberg, Achim Weiß
Aufsichtsratsvorsitzender: Markus Kadelke


Member of United Internet


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7D4DAPTZQH423D5PVCQ4WJFG75F7TGG3/


[ovirt-users] Live storage migration is failing in 4.2.8

2019-04-12 Thread Ladislav Humenik
Hello, we have recently updated a few oVirt setups from 4.2.5 to 4.2.8
(actually 9 oVirt engine nodes), where live storage migration
stopped working and leaves an auto-generated snapshot behind.


If we power the guest VM down, the migration works as expected. Is there 
a known bug for this? Shall we open a new one?


Setup:
ovirt - Dell PowerEdge R630
        - CentOS Linux release 7.6.1810 (Core)
        - ovirt-engine-4.2.8.2-1.el7.noarch
        - kernel-3.10.0-957.10.1.el7.x86_64
hypervisors    - Dell PowerEdge R640
        - CentOS Linux release 7.6.1810 (Core)
        - kernel-3.10.0-957.10.1.el7.x86_64
        - vdsm-4.20.46-1.el7.x86_64
        - libvirt-5.0.0-1.el7.x86_64
        - qemu-kvm-ev-2.12.0-18.el7_6.3.1.x86_64
storage domain  - netapp NFS share


logs are attached

--
Ladislav Humenik

System administrator

2019-04-12 10:39:25,503+0200 INFO  (jsonrpc/0) [api.virt] START 
diskReplicateStart(srcDisk={'device': 'disk', 'poolID': 
'b1a475aa-c084-46e5-b65a-bf4a47143c88', 'volumeID': 
'5c2738a4-4279-4cc3-a0de-6af1095f8879', 'domainID': 
'e5bb3e8a-a9c6-4581-8c6a-67d4ee7609f5', 'imageID': 
'9a66bf0f-1333-4931-ad58-f6f1aa1143be'}, dstDisk={'device': 'disk', 'poolID': 
'b1a475aa-c084-46e5-b65a-bf4a47143c88', 'volumeID': 
'5c2738a4-4279-4cc3-a0de-6af1095f8879', 'domainID': 
'244dfdfb-2662-4103-9d39-2b13153f2047', 'imageID': 
'9a66bf0f-1333-4931-ad58-f6f1aa1143be'}) from=::ffff:10.76.98.4,57566, 
flow_id=97b620d9-6e65-4573-9fdf-5b119764fbb7, 
vmId=71f27df0-f54f-4a2e-a51c-e61aa26b370d (api:46)
2019-04-12 10:39:25,513+0200 INFO  (jsonrpc/0) [vdsm.api] START 
prepareImage(sdUUID='244dfdfb-2662-4103-9d39-2b13153f2047', 
spUUID='b1a475aa-c084-46e5-b65a-bf4a47143c88', 
imgUUID='9a66bf0f-1333-4931-ad58-f6f1aa1143be', 
leafUUID='5c2738a4-4279-4cc3-a0de-6af1095f8879', allowIllegal=False) 
from=::ffff:10.76.98.4,57566, flow_id=97b620d9-6e65-4573-9fdf-5b119764fbb7, 
task_id=78dde3c9-74fb-4588-8cfa-117f0bbe2d2d (api:46)
2019-04-12 10:39:25,630+0200 INFO  (jsonrpc/0) [storage.StorageDomain] Fixing 
permissions on 
/rhev/data-center/mnt/removed-IP-address:_bs01aF1C1v1/244dfdfb-2662-4103-9d39-2b13153f2047/images/9a66bf0f-1333-4931-ad58-f6f1aa1143be/cbe93bfe-9df0-4f12-a44e-9e8fa6ec24f2
 (fileSD:623)
2019-04-12 10:39:25,631+0200 INFO  (jsonrpc/0) [storage.StorageDomain] Fixing 
permissions on 
/rhev/data-center/mnt/removed-IP-address:_bs01aF1C1v1/244dfdfb-2662-4103-9d39-2b13153f2047/images/9a66bf0f-1333-4931-ad58-f6f1aa1143be/5c2738a4-4279-4cc3-a0de-6af1095f8879
 (fileSD:623)
2019-04-12 10:39:25,632+0200 INFO  (jsonrpc/0) [storage.StorageDomain] Creating 
domain run directory 
u'/var/run/vdsm/storage/244dfdfb-2662-4103-9d39-2b13153f2047' (fileSD:577)
2019-04-12 10:39:25,632+0200 INFO  (jsonrpc/0) [storage.fileUtils] Creating 
directory: /var/run/vdsm/storage/244dfdfb-2662-4103-9d39-2b13153f2047 mode: 
None (fileUtils:197)
2019-04-12 10:39:25,632+0200 INFO  (jsonrpc/0) [storage.StorageDomain] Creating 
symlink from 
/rhev/data-center/mnt/removed-IP-address:_bs01aF1C1v1/244dfdfb-2662-4103-9d39-2b13153f2047/images/9a66bf0f-1333-4931-ad58-f6f1aa1143be
 to 
/var/run/vdsm/storage/244dfdfb-2662-4103-9d39-2b13153f2047/9a66bf0f-1333-4931-ad58-f6f1aa1143be
 (fileSD:580)
2019-04-12 10:39:25,637+0200 INFO  (jsonrpc/0) [vdsm.api] FINISH prepareImage 
return={'info': {'path': 
u'/rhev/data-center/mnt/removed-IP-address:_bs01aF1C1v1/244dfdfb-2662-4103-9d39-2b13153f2047/images/9a66bf0f-1333-4931-ad58-f6f1aa1143be/5c2738a4-4279-4cc3-a0de-6af1095f8879',
 'type': 'file'}, 'path': 
u'/rhev/data-center/mnt/removed-IP-address:_bs01aF1C1v1/244dfdfb-2662-4103-9d39-2b13153f2047/images/9a66bf0f-1333-4931-ad58-f6f1aa1143be/5c2738a4-4279-4cc3-a0de-6af1095f8879',
 'imgVolumesInfo': [{'domainID': '244dfdfb-2662-4103-9d39-2b13153f2047', 
'leaseOffset': 0, 'path': 
u'/rhev/data-center/mnt/removed-IP-address:_bs01aF1C1v1/244dfdfb-2662-4103-9d39-2b13153f2047/images/9a66bf0f-1333-4931-ad58-f6f1aa1143be/cbe93bfe-9df0-4f12-a44e-9e8fa6ec24f2',
 'volumeID': u'cbe93bfe-9df0-4f12-a44e-9e8fa6ec24f2', 'leasePath': 
u'/rhev/data-center/mnt/removed-IP-address:_bs01aF1C1v1/244dfdfb-2662-4103-9d39-2b13153f2047/images/9a66bf0f-1333-4931-ad58-f6f1aa1143be/cbe93bfe-9df0-4f12-a44e-9e8fa6ec24f2.lease',
 'imageID': '9a66bf0f-1333-4931-ad58-f6f1aa1143be'}, {'domainID': 
'244dfdfb-2662-4103-9d39-2b13153f2047', 'leaseOffset': 0, 'path': 
u'/rhev/data-center/mnt/removed-IP-address:_bs01aF1C1v1/244dfdfb-2662-4103-9d39-2b13153f2047/images/9a66bf0f-1333-4931-ad58-f6f1aa1143be/5c2738a4-4279-4cc3

[ovirt-users] Re: ipxe-roms-qemu question

2019-03-08 Thread Ladislav Humenik

Hi Michal,

I saw some old bugs where you were involved in the same topic, and I thought
you could help us.


So far I understand what the hex numbers are, but I could not figure out
how I can check the old in-memory ROM size (to identify all guests which
must be power-cycled). Do you know the code?


0x20000 hex == 131072 decimal == 128K
0x40000 hex == 262144 decimal == 256K

~]# du -k /usr/share/ipxe/1af41000.rom
256    /usr/share/ipxe/1af41000.rom
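
A possible check from the hypervisor, sketched under the assumption that
QEMU's HMP command 'info roms' reports the option-ROM sizes still mapped in
guest memory:

for dom in $(virsh list --name); do
    printf '%s: ' "$dom"
    virsh qemu-monitor-command --hmp "$dom" 'info roms' | grep virtio-net-pci.rom
done
# a guest whose output still shows size=0x20000 is running the old 128K ROM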

Kind regards,
Ladislav


On 06.03.19 13:25, Ladislav Humenik wrote:

Hi all,

in the past we used a customized iPXE (to allow boot over the network
with 10G cards); now we have finally updated our hypervisors to the
latest ipxe-roms-qemu.
Of course the ROM size now differs, and during live migration libvirtd
throws this error:


Mar  4 11:37:14 hypevisor-01 libvirtd: 2019-03-04 10:37:14.084+0000: 
15862: error : qemuMigrationJobCheckStatus:1313 : operation failed: 
migration out job: unexpectedly failed
Mar  4 11:37:15 hypevisor-01 libvirtd: 2019-03-04T10:37:13.941040Z 
qemu-kvm: Length mismatch: 0000:00:03.0/virtio-net-pci.rom: 0x20000 in 
!= 0x40000: Invalid argument
Mar  4 11:37:15 hypevisor-0 libvirtd: 2019-03-04T10:37:13.941090Z 
qemu-kvm: error while loading state for instance 0x0 of device 'ram'
Mar  4 11:37:15 hypevisor-0 libvirtd: 2019-03-04T10:37:13.941530Z 
qemu-kvm: load of migration failed: Invalid argument



is there an easy command we can use to identify which guests are still
using the old .rom and must be power-cycled?


Thank you in advance

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EUK66G7PCC6AHM6FP64FDBLI3E5INYJX/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ABEDAPLK2OXKDII5DEZFP2OQNZHSLMU5/


[ovirt-users] ipxe-roms-qemu question

2019-03-06 Thread Ladislav Humenik

Hi all,

in the past we used a customized iPXE (to allow boot over the network
with 10G cards); now we have finally updated our hypervisors to the
latest ipxe-roms-qemu.
Of course the ROM size now differs, and during live migration libvirtd
throws this error:


Mar  4 11:37:14 hypevisor-01 libvirtd: 2019-03-04 10:37:14.084+0000: 
15862: error : qemuMigrationJobCheckStatus:1313 : operation failed: 
migration out job: unexpectedly failed
Mar  4 11:37:15 hypevisor-01 libvirtd: 2019-03-04T10:37:13.941040Z 
qemu-kvm: Length mismatch: 0000:00:03.0/virtio-net-pci.rom: 0x20000 in 
!= 0x40000: Invalid argument
Mar  4 11:37:15 hypevisor-0 libvirtd: 2019-03-04T10:37:13.941090Z 
qemu-kvm: error while loading state for instance 0x0 of device 'ram'
Mar  4 11:37:15 hypevisor-0 libvirtd: 2019-03-04T10:37:13.941530Z 
qemu-kvm: load of migration failed: Invalid argument



is there an easy command we can use to identify which guests are still
using the old .rom and must be power-cycled?


Thank you in advance

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EUK66G7PCC6AHM6FP64FDBLI3E5INYJX/


[ovirt-users] How to change the master storage domain to another specific domain

2018-11-22 Thread Ladislav Humenik

Hello,

is there any plan to add the possibility (or a working way) to change the
master storage domain to a specific, let's say admin-chosen, storage
domain, without shutting down all guests and moving all other domains into
maintenance as described in this old solution:
https://access.redhat.com/solutions/34923


Any API call or other approach would be welcome.


--
Ladislav Humenik
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/M6KDWIHZDM6BA6EFRQAK553GAEZAZCKI/


Re: [ovirt-users] Unable to remove storage domain's

2018-02-21 Thread Ladislav Humenik

Hello again,

the result is: ERROR: permission denied to create temporary tables in
database "engine"



- I forgot to mention we do not run the DB on localhost, but on a
dedicated server which is managed by DB admins. After granting the
necessary TEMPORARY privileges:
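
For reference, a minimal sketch of the grant itself (assuming the engine
connects as the DB user "engine"):

psql -h <db-host> -U postgres -c 'GRANT TEMPORARY ON DATABASE engine TO engine;'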






engine-log:

2018-02-22 08:47:57,678+01 INFO 
[org.ovirt.engine.core.bll.storage.domain.RemoveStorageDomainCommand] 
(default task-13) [6f250dbf-40d2-4017-861a-ae410fc382f5] Lock Acquired 
to object 
'EngineLock:{exclusiveLocks='[f5efd264-045b-48d5-b35c-661a30461de5=STORAGE]', 
sharedLocks=''}'
2018-02-22 08:47:57,694+01 INFO 
[org.ovirt.engine.core.bll.storage.domain.RemoveStorageDomainCommand] 
(default task-13) [6f250dbf-40d2-4017-861a-ae410fc382f5] Running 
command: RemoveStorageDomainCommand internal: false. Entities affected 
:  ID: f5efd264-045b-48d5-b35c-661a30461de5 Type: StorageAction group 
DELETE_STORAGE_DOMAIN with role type ADMIN
2018-02-22 08:47:57,877+01 INFO 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(default task-13) [6f250dbf-40d2-4017-861a-ae410fc382f5] EVENT_ID: 
USER_REMOVE_STORAGE_DOMAIN(960), Correlation ID: 
6f250dbf-40d2-4017-861a-ae410fc382f5, Job ID: 
d825643c-3f2e-449c-a19d-dc55af74d153, Call Stack: null, Custom ID: null, 
Custom Event ID: -1, Message: Storage Domain bs09aF2C9kvm was removed by 
admin@internal
2018-02-22 08:47:57,881+01 INFO 
[org.ovirt.engine.core.bll.storage.domain.RemoveStorageDomainCommand] 
(default task-13) [6f250dbf-40d2-4017-861a-ae410fc382f5] Lock freed to 
object 
'EngineLock:{exclusiveLocks='[f5efd264-045b-48d5-b35c-661a30461de5=STORAGE]', 
sharedLocks=''}'



Thank you for your help,

Ladislav


On 22.02.2018 06:11, Eyal Shenitzky wrote:

So here is the Query:

BEGIN
    -- Creating a temporary table which will give all the images and the
    -- disks which reside on only the specified storage domain. (copied
    -- template disks on multiple storage domains will not be part of this
    -- table)
    CREATE TEMPORARY TABLE STORAGE_DOMAIN_MAP_TABLE AS
        SELECT image_guid AS image_id, disk_id
        FROM memory_and_disk_images_storage_domain_view
        WHERE storage_id = v_storage_domain_id
        EXCEPT
        SELECT image_guid AS image_id, disk_id
        FROM memory_and_disk_images_storage_domain_view
        WHERE storage_id != v_storage_domain_id;

    exception when others then
        TRUNCATE TABLE STORAGE_DOMAIN_MAP_TABLE;

        INSERT INTO STORAGE_DOMAIN_MAP_TABLE
        SELECT image_guid AS image_id, disk_id
        FROM memory_and_disk_images_storage_domain_view
        WHERE storage_id = v_storage_domain_id
        EXCEPT
        SELECT image_guid AS image_id, disk_id
        FROM memory_and_disk_images_storage_domain_view
        WHERE storage_id != v_storage_domain_id;
END;
Try to run it and share the results please.

On Wed, Feb 21, 2018 at 4:01 PM, Eyal Shenitzky wrote:


Note that destroy and remove are two different operations.

Did you try both?

On Wed, Feb 21, 2018 at 3:17 PM, Ladislav Humenik
mailto:ladislav.hume...@1und1.de>> wrote:

Hi, of course I did. I put these domains into maintenance first,
then detached them from the datacenter.

The last step is destroy or remove ("just name it"), and this
last step is mysteriously not working,

and throwing the SQL exception which I attached before.

Thank you in advance
ladislav

On 21.02.2018 14:03, Eyal Shenitzky wrote:

Did you manage to set the domain to maintenance?

    If so you can try to 'Destroy' the domain.

On Wed, Feb 21, 2018 at 2:57 PM, Ladislav Humenik wrote:

Hi, no


this table "STORAGE_DOMAIN_MAP_TABLE" is not present on any of
our oVirt setups, and based on the link it is just a temporary
table. Can you point me to what query I should test?

thank you in advance

Ladislav


On 21.02.2018 12:50, Eyal Shenitzky wrote:

According to the logs, it seems like you are somehow missing
a table in the DB - STORAGE_DOMAIN_MAP_TABLE.
4211-b98f-a37604642251] Command

'org.ovirt.engine.core.bll.storage.domain.RemoveStorageDomainCommand'
failed: CallableStatementCallback; bad SQL grammar
[{call force_delete_storage_domain(?)}]; nested
exception is org.postgresql.util.PSQLException: ERROR:
  

Re: [ovirt-users] Unable to remove storage domain's

2018-02-21 Thread Ladislav Humenik

Hi, yes, destroy is also not working and throws an SQL exception

detailed logs from engine by doing destroy in attachment

thank you in advance
ladislav

On 21.02.2018 15:01, Eyal Shenitzky wrote:

Note that destroy and remove are two different operations.

Did you try both?

On Wed, Feb 21, 2018 at 3:17 PM, Ladislav Humenik wrote:


Hi, of course I did. I put these domains into maintenance first,
then detached them from the datacenter.

The last step is destroy or remove ("just name it"), and this last
step is mysteriously not working,

and throwing the SQL exception which I attached before.

Thank you in advance
ladislav

On 21.02.2018 14:03, Eyal Shenitzky wrote:

Did you manage to set the domain to maintenance?

If so you can try to 'Destroy' the domain.

On Wed, Feb 21, 2018 at 2:57 PM, Ladislav Humenik wrote:

Hi, no


this table "STORAGE_DOMAIN_MAP_TABLE" is not present on any of
our oVirt setups, and based on the link it is just a temporary
table. Can you point me to what query I should test?

thank you in advance

Ladislav


On 21.02.2018 12:50, Eyal Shenitzky wrote:

According to the logs, it seems like you are somehow missing a
table in the DB - STORAGE_DOMAIN_MAP_TABLE.
4211-b98f-a37604642251] Command
'org.ovirt.engine.core.bll.storage.domain.RemoveStorageDomainCommand'
failed: CallableStatementCallback; bad SQL grammar [{call
force_delete_storage_domain(?)}]; nested exception is
org.postgresql.util.PSQLException: ERROR: relation
"storage_domain_map_table" does not exist
Did you try to run some SQL query which caused that issue?



On Wed, Feb 21, 2018 at 11:48 AM, Ladislav Humenik wrote:

Hello,

we cannot remove old NFS data storage domains; these 4
are already deactivated and unattached:

engine=> select id,storage_name from storage_domains
where storage_name like 'bs09%';
id  | storage_name
--+---
 819b419e-638b-43c7-9189-b93c0314d38a | bs09aF2C10kvm
 9a403356-f58a-4e80-9435-026e6f853a9b | bs09bF2C10kvm
 f5efd264-045b-48d5-b35c-661a30461de5 | bs09aF2C9kvm
 a0989c64-fc41-4a8b-8544-914137d7eae8 | bs09bF2C9kvm
(4 rows)


The only images which still resides in DB are OVF_STORE
templates:

engine=> select image_guid,storage_name,disk_description
from images_storage_domain_view where storage_name like
'bs09%';
image_guid  | storage_name  | disk_description

--+---+--
 6b72139d-a4b3-4e22-98e2-e8b1d64e8e50 | bs09bF2C9kvm  |
OVF_STORE
 997fe5a6-9647-4d42-b074-27767984b7d2 | bs09bF2C9kvm  |
OVF_STORE
 2b1884cb-eb37-475f-9c24-9638400f15af | bs09aF2C10kvm |
OVF_STORE
 85383ffe-68ba-4a82-a692-d93e38bf7f4c | bs09aF2C9kvm  |
OVF_STORE
 bca14796-aed1-4747-87c9-1b25861fad86 | bs09aF2C9kvm  |
OVF_STORE
 797c27bf-7c2d-4363-96f9-565fa58d0a5e | bs09bF2C10kvm |
OVF_STORE
 5d092a1b-597c-48a3-8058-cbe40d39c2c9 | bs09bF2C10kvm |
OVF_STORE
 dc61f42f-1330-4bfb-986a-d868c736da59 | bs09aF2C10kvm |
OVF_STORE
(8 rows)



Current oVirt Engine version: 4.1.8.2-1.el7.centos
Exception logs from engine are in attachment

Do you have any magic SQL statement to figure out what
is causing this exception, and how can we remove those
storage domains without disruption?

Thank you in advance

-- 
Ladislav Humenik



___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>




-- 
Regards,

Eyal Shenitzky


-- 
Ladislav Humenik


System administrator / VI
IT Operations Hos

Re: [ovirt-users] Unable to remove storage domain's

2018-02-21 Thread Ladislav Humenik
Hi, of course I did. I put these domains into maintenance first, then
detached them from the datacenter.

The last step is destroy or remove ("just name it"), and this last step
is mysteriously not working,

and throwing the SQL exception which I attached before.

Thank you in advance
ladislav

On 21.02.2018 14:03, Eyal Shenitzky wrote:

Did you manage to set the domain to maintenance?

If so you can try to 'Destroy' the domain.

On Wed, Feb 21, 2018 at 2:57 PM, Ladislav Humenik wrote:


Hi, no


this table "STORAGE_DOMAIN_MAP_TABLE" is not present on any of our
oVirt setups, and based on the link it is just a temporary table.
Can you point me to what query I should test?

thank you in advance

Ladislav


On 21.02.2018 12:50, Eyal Shenitzky wrote:

According to the logs, it seems like you are somehow missing a table
in the DB - STORAGE_DOMAIN_MAP_TABLE.
4211-b98f-a37604642251] Command
'org.ovirt.engine.core.bll.storage.domain.RemoveStorageDomainCommand'
failed: CallableStatementCallback; bad SQL grammar [{call
force_delete_storage_domain(?)}]; nested exception is
org.postgresql.util.PSQLException: ERROR: relation
"storage_domain_map_table" does not exist
Did you try to run some SQL query which caused that issue?



On Wed, Feb 21, 2018 at 11:48 AM, Ladislav Humenik wrote:

Hello,

we cannot remove old NFS data storage domains; these 4 are
already deactivated and unattached:

engine=> select id,storage_name from storage_domains where
storage_name like 'bs09%';
  id  | storage_name
--+---
 819b419e-638b-43c7-9189-b93c0314d38a | bs09aF2C10kvm
 9a403356-f58a-4e80-9435-026e6f853a9b | bs09bF2C10kvm
 f5efd264-045b-48d5-b35c-661a30461de5 | bs09aF2C9kvm
 a0989c64-fc41-4a8b-8544-914137d7eae8 | bs09bF2C9kvm
(4 rows)


The only images which still resides in DB are OVF_STORE
templates:

engine=> select image_guid,storage_name,disk_description from
images_storage_domain_view where storage_name like 'bs09%';
  image_guid  | storage_name  |
disk_description

--+---+--
 6b72139d-a4b3-4e22-98e2-e8b1d64e8e50 | bs09bF2C9kvm  | OVF_STORE
 997fe5a6-9647-4d42-b074-27767984b7d2 | bs09bF2C9kvm  | OVF_STORE
 2b1884cb-eb37-475f-9c24-9638400f15af | bs09aF2C10kvm | OVF_STORE
 85383ffe-68ba-4a82-a692-d93e38bf7f4c | bs09aF2C9kvm  | OVF_STORE
 bca14796-aed1-4747-87c9-1b25861fad86 | bs09aF2C9kvm  | OVF_STORE
 797c27bf-7c2d-4363-96f9-565fa58d0a5e | bs09bF2C10kvm | OVF_STORE
 5d092a1b-597c-48a3-8058-cbe40d39c2c9 | bs09bF2C10kvm | OVF_STORE
 dc61f42f-1330-4bfb-986a-d868c736da59 | bs09aF2C10kvm | OVF_STORE
(8 rows)



Current oVirt Engine version: 4.1.8.2-1.el7.centos
Exception logs from engine are in attachment

Do you have any magic SQL statement to figure out what is
causing this exception, and how can we remove those storage
domains without disruption?

Thank you in advance

-- 
Ladislav Humenik



___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>




-- 
Regards,

Eyal Shenitzky


-- 
Ladislav Humenik


System administrator / VI
IT Operations Hosting Infrastructure

1&1 Internet SE | Ernst-Frey-Str. 5 | 76135 Karlsruhe | Germany
Phone: +49 721 91374-8361
E-Mail: ladislav.hume...@1und1.de | Web: www.1und1.de

Hauptsitz Montabaur, Amtsgericht Montabaur, HRB 24498

Vorstand: Robert Hoffmann, Hans-Henning Kettler, Uwe Lamnek, Matthias 
Steinberg
Aufsichtsratsvorsitzender: René Obermann


Member of United Internet


[ovirt-users] Unable to remove storage domain's

2018-02-21 Thread Ladislav Humenik

Hello,

we cannot remove old NFS data storage domains; these 4 are already
deactivated and unattached:


engine=> select id,storage_name from storage_domains where storage_name 
like 'bs09%';

  id  | storage_name
--+---
 819b419e-638b-43c7-9189-b93c0314d38a | bs09aF2C10kvm
 9a403356-f58a-4e80-9435-026e6f853a9b | bs09bF2C10kvm
 f5efd264-045b-48d5-b35c-661a30461de5 | bs09aF2C9kvm
 a0989c64-fc41-4a8b-8544-914137d7eae8 | bs09bF2C9kvm
(4 rows)


The only images which still resides in DB are OVF_STORE templates:

engine=> select image_guid,storage_name,disk_description from 
images_storage_domain_view where storage_name like 'bs09%';

  image_guid  | storage_name  | disk_description
--+---+--
 6b72139d-a4b3-4e22-98e2-e8b1d64e8e50 | bs09bF2C9kvm  | OVF_STORE
 997fe5a6-9647-4d42-b074-27767984b7d2 | bs09bF2C9kvm  | OVF_STORE
 2b1884cb-eb37-475f-9c24-9638400f15af | bs09aF2C10kvm | OVF_STORE
 85383ffe-68ba-4a82-a692-d93e38bf7f4c | bs09aF2C9kvm  | OVF_STORE
 bca14796-aed1-4747-87c9-1b25861fad86 | bs09aF2C9kvm  | OVF_STORE
 797c27bf-7c2d-4363-96f9-565fa58d0a5e | bs09bF2C10kvm | OVF_STORE
 5d092a1b-597c-48a3-8058-cbe40d39c2c9 | bs09bF2C10kvm | OVF_STORE
 dc61f42f-1330-4bfb-986a-d868c736da59 | bs09aF2C10kvm | OVF_STORE
(8 rows)



Current oVirt Engine version: 4.1.8.2-1.el7.centos
Exception logs from engine are in attachment

Do you have any magic SQL statement to figure out what is causing this
exception, and how can we remove those storage domains without disruption?
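
One way to start digging, sketched here: dump the failing stored procedures
straight from the engine DB and check which tables they expect:

psql -d engine -c '\df+ force_delete_storage_domain'
psql -d engine -c '\df+ remove_entities_from_storage_domain'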


Thank you in advance

--
Ladislav Humenik

2018-02-21 09:50:02,484+01 INFO  
[org.ovirt.engine.core.bll.storage.domain.RemoveStorageDomainCommand] (default 
task-154) [8badc63f-80cf-4211-b98f-a37604642251] Lock Acquired to object 
'EngineLock:{exclusiveLocks='[f5efd264-045b-48d5-b35c-661a30461de5=STORAGE]', 
sharedLocks=''}'
2018-02-21 09:50:02,509+01 INFO  
[org.ovirt.engine.core.bll.storage.domain.RemoveStorageDomainCommand] (default 
task-154) [8badc63f-80cf-4211-b98f-a37604642251] Running command: 
RemoveStorageDomainCommand internal: false. Entities affected :  ID: 
f5efd264-045b-48d5-b35c-661a30461de5 Type: StorageAction group 
DELETE_STORAGE_DOMAIN with role type ADMIN
2018-02-21 09:50:02,527+01 INFO  
[org.ovirt.engine.core.utils.transaction.TransactionSupport] (default task-154) 
[8badc63f-80cf-4211-b98f-a37604642251] transaction rolled back
2018-02-21 09:50:02,529+01 ERROR 
[org.ovirt.engine.core.bll.storage.domain.RemoveStorageDomainCommand] (default 
task-154) [8badc63f-80cf-4211-b98f-a37604642251] Command 
'org.ovirt.engine.core.bll.storage.domain.RemoveStorageDomainCommand' failed: 
CallableStatementCallback; bad SQL grammar [{call 
force_delete_storage_domain(?)}]; nested exception is 
org.postgresql.util.PSQLException: ERROR: relation "storage_domain_map_table" 
does not exist
  Where: SQL statement "TRUNCATE TABLE STORAGE_DOMAIN_MAP_TABLE"
PL/pgSQL function remove_entities_from_storage_domain(uuid) line 21 at SQL 
statement
SQL statement "SELECT Remove_Entities_From_storage_domain(v_storage_domain_id)"
PL/pgSQL function force_delete_storage_domain(uuid) line 3 at PERFORM
2018-02-21 09:50:02,530+01 ERROR 
[org.ovirt.engine.core.bll.storage.domain.RemoveStorageDomainCommand] (default 
task-154) [8badc63f-80cf-4211-b98f-a37604642251] Exception: 
org.springframework.jdbc.BadSqlGrammarException: CallableStatementCallback; bad 
SQL grammar [{call force_delete_storage_domain(?)}]; nested exception is 
org.postgresql.util.PSQLException: ERROR: relation "storage_domain_map_table" 
does not exist
  Where: SQL statement "TRUNCATE TABLE STORAGE_DOMAIN_MAP_TABLE"
PL/pgSQL function remove_entities_from_storage_domain(uuid) line 21 at SQL 
statement
SQL statement "SELECT Remove_Entities_From_storage_domain(v_storage_domain_id)"
PL/pgSQL function force_delete_storage_domain(uuid) line 3 at PERFORM
at 
org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.doTranslate(SQLErrorCodeSQLExceptionTranslator.java:231)
 [spring-jdbc.jar:4.2.4.RELEASE]
at 
org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:73)
 [spring-jdbc.jar:4.2.4.RELEASE]
at 
org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:1094) 
[spring-jdbc.jar:4.2.4.RELEASE]
at 
org.springframework.jdbc.core.JdbcTemplate.call(JdbcTemplate.java:1130) 
[spring-jdbc.jar:4.2.4.RELEASE]
at 
org.springframework.jdbc.core.simple.AbstractJdbcCall.executeCallInternal(AbstractJdbcCall.java:405)
 [spring-jdbc.jar:4.2.4.RELEASE]
at 
org.springframework.jdbc.core.simple.AbstractJdbcCall.doExecute(AbstractJdbcCall.java:365)
 [spring-jdbc.jar:4.2.4.RELEASE]
at 
or

[ovirt-users] XML error ovirt-4.2.1 release

2018-02-16 Thread Ladislav Humenik

Hello all,


we just tested the 4.2.0 release; it worked fine so far.

Yesterday we updated to the latest 4.2.1, and since then we cannot send
a request and receive a response; the error has to do with the response
from the server:


checkContentType(XML_CONTENT_TYPE_RE, "XML", response.getFirstHeader("content-type").getValue());


it seems the response is not of XML type

the error is:

        throw new Error("Failed to send request", e);

Through the web API I can connect and see everything, but through the
SDK it exits. I've tried both the 4.2.0 and 4.2.1 SDKs of oVirt.
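
A quick way to see what content type the server actually returns, sketched
with placeholder host and credentials:

curl -k -s -D - -o /dev/null -u 'admin@internal:PASSWORD' \
    https://engine.example.com/ovirt-engine/api | grep -i '^content-type'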



--
Ladislav Humenik

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Feature: Maximum memory size | BUG

2017-07-27 Thread Ladislav Humenik

Hello, thank you for the fast response.

The point of changing the max_mem_size is a little tricky, as we have
environments with ~13k VMs altogether running on top of oVirt, no kidding :),
and we are currently doing upgrades from 3.6 to 4.1. So we can easily set
those values to the needed ones for every VM; that's not a big deal.


However, I was hoping it would be editable, without changing/modifying our
current API calls, for any newly created VM (which is the reason for asking
for help), somewhere in engine-config or in the DB.


I've just opened an RFE: https://bugzilla.redhat.com/show_bug.cgi?id=1475382

Best regards,
Ladislav Humenik

On 26.07.2017 15:58, Michal Skrivanek wrote:


On 26 Jul 2017, at 16:50, Jakub Niedermertl wrote:


Hello Ladislav,

the function computing the default maximum memory size is currently
not configurable from the DB.

If you want it to be so please feel free to file an RFE [1].

[1]: https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine

Best regards,
Jakub

On Wed, Jul 26, 2017 at 1:59 PM, Ladislav Humenik wrote:


Hello, after the engine update from 4.0.6 to 4.1.2 we have the following bug.

Your feature description:

Maximum memory value is stored in the VmBase.maxMemorySizeMb
property. It is validated against the range [memory of VM,
*MaxMemorySizeInMB], where *MaxMemorySizeInMB is one of the
VM32BitMaxMemorySizeInMB, VM64BitMaxMemorySizeInMB and
VMPpc64BitMaxMemorySizeInMB configuration options, depending on
the selected operating system of the VM. The default value in the
webadmin UI is 4x the size of memory.

During migration of the engine from 4.0 to 4.1, all VM-like entities
get max memory = 4x memory.

If a VM (or template) is imported (from an export domain, snapshot,
or external system) and doesn't have max memory set yet, the maximum
value of max memory is set (the *MaxMemorySizeInMB config options).

Our engine settings:
[root@ovirt]# engine-config -g VM64BitMaxMemorySizeInMB
VM64BitMaxMemorySizeInMB: 8388608 version: 4.1
VM64BitMaxMemorySizeInMB: 8388608 version: 3.6
VM64BitMaxMemorySizeInMB: 8388608 version: 4.0
[root@ovirt]# engine-config -g VM32BitMaxMemorySizeInMB
VM32BitMaxMemorySizeInMB: 20480 version: general
Template:
engine=# select vm_guid,vm_name,mem_size_mb,max_memory_size_mb from vm_static where vm_name LIKE 'Blank';
               vm_guid                | vm_name | mem_size_mb | max_memory_size_mb
--------------------------------------+---------+-------------+--------------------
 00000000-0000-0000-0000-000000000000 | Blank   |        8192 |              32768
(1 row)

Created VM:
- expected is mem_size_mb * VM64BitMaxMemorySizeInMB
- we get mem_size_mb * 4 (default)

Engine:
engine=# select vm_guid,vm_name,mem_size_mb,max_memory_size_mb from vm_static where vm_name LIKE 'vm-hotplug%';
               vm_guid                |   vm_name   | mem_size_mb | max_memory_size_mb
--------------------------------------+-------------+-------------+--------------------
 254a0c61-3c0a-41e7-a2ec-5f77cabbe533 | vm-hotplug  |        1024 |               4096
 c0794a03-58ba-4e68-8f43-e0320032830c | vm-hotplug2 |        3072 |              12288
(2 rows)

Question:
Is it possible to change this (default * 4) behavior in the DB?



if the issue is with the GUI, then setting the max memory in the template
would be inherited by all VMs from that template; you can even change
that in "Blank", I think, Jakub?
That's for the default case; you can change it any way you like for
the concrete VM you're creating.
If the issue is with the API, you can simply provide any number for the
max mem in all the requests
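
A minimal sketch of such a request against the v4 REST API (host,
credentials, cluster and sizes are placeholders; memory values are bytes):

curl -k -u 'admin@internal:PASSWORD' -H 'Content-Type: application/xml' \
    -d '<vm><name>vm1</name><cluster><name>Default</name></cluster>
         <template><name>Blank</name></template>
         <memory>1073741824</memory>
         <memory_policy><max>8589934592</max></memory_policy></vm>' \
    https://engine.example.com/ovirt-engine/api/vms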


the VM[64|32]BitMaxMemorySizeInMB values are the total maximum that the
particular qemu we ship supports; it's not anything you should need to
change.


Thanks,
michal



Kind Regards,
Ladislav Humenik, System administrator


___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users




--
Ladislav Humenik

System administrator / VI
IT Operations Hosting Infrastructure

1&1 Internet SE | Ernst-Frey-Str. 5 | 76135 Karlsruhe | Germany
Phone: +49 721 91374-8361
E-Mail: ladislav.hume...@1und1.de | Web: www.1und1.de

Hauptsitz Montabaur, Amtsgericht Montabaur, HRB 24498

Vorstand: Robert Hoffmann, Hans-Henning Kettler, Uwe Lamnek, Matthias Steinberg
Aufsichtsratsvorsitzender: René Obermann


Member of United Internet


[ovirt-users] Feature: Maximum memory size | BUG

2017-07-27 Thread Ladislav Humenik

Hello, after the engine update from 4.0.6 to 4.1.2 we have the following bug.

Your feature description:

Maximum memory value is stored in the VmBase.maxMemorySizeMb property. It
is validated against the range [memory of VM, *MaxMemorySizeInMB], where
*MaxMemorySizeInMB is one of the VM32BitMaxMemorySizeInMB,
VM64BitMaxMemorySizeInMB and VMPpc64BitMaxMemorySizeInMB
configuration options, depending on the selected operating system of the VM.
The default value in the webadmin UI is 4x the size of memory.


During migration of the engine from 4.0 to 4.1, all VM-like entities get max
memory = 4x memory.

If a VM (or template) is imported (from an export domain, snapshot, or
external system) and doesn't have max memory set yet, the maximum value
of max memory is set (the *MaxMemorySizeInMB config options).


Our engine settings:
[root@ovirt]# engine-config -g VM64BitMaxMemorySizeInMB
VM64BitMaxMemorySizeInMB: 8388608 version: 4.1
VM64BitMaxMemorySizeInMB: 8388608 version: 3.6
VM64BitMaxMemorySizeInMB: 8388608 version: 4.0
[root@ovirt]# engine-config -g VM32BitMaxMemorySizeInMB
VM32BitMaxMemorySizeInMB: 20480 version: general

Template:
engine=# select vm_guid,vm_name,mem_size_mb,max_memory_size_mb from vm_static where vm_name LIKE 'Blank';
               vm_guid                | vm_name | mem_size_mb | max_memory_size_mb
--------------------------------------+---------+-------------+--------------------
 00000000-0000-0000-0000-000000000000 | Blank   |        8192 |              32768
(1 row)

Created VM:
- expected is mem_size_mb * VM64BitMaxMemorySizeInMB
- we get mem_size_mb * 4 (default)

Engine:
engine=# select vm_guid,vm_name,mem_size_mb,max_memory_size_mb from vm_static where vm_name LIKE 'vm-hotplug%';
               vm_guid                |   vm_name   | mem_size_mb | max_memory_size_mb
--------------------------------------+-------------+-------------+--------------------
 254a0c61-3c0a-41e7-a2ec-5f77cabbe533 | vm-hotplug  |        1024 |               4096
 c0794a03-58ba-4e68-8f43-e0320032830c | vm-hotplug2 |        3072 |              12288
(2 rows)

Question:
Is it possible to change this (default * 4) behavior in the DB?

Kind Regards,
Ladislav Humenik, System administrator

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users