Re: [ovirt-users] I wrote an oVirt thing

2016-11-28 Thread Konstantin Shalygin
at work is running Ubuntu, and I do not believe that ovirt-shell is packaged for it. -- Best regards, Konstantin Shalygin ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

Re: [ovirt-users] I wrote an oVirt thing

2016-11-29 Thread Konstantin Shalygin
On 11/29/2016 07:06 PM, Yaniv Kaul wrote: On Tue, Nov 29, 2016 at 3:40 AM, Konstantin Shalygin <k0...@k0ste.ru> wrote: Will ovirt-shell be deprecated and unsupported entirely, or only some functions of ovirt-shell (or the whole ovirt-engine-cli package)? We use

[ovirt-users] oVirt, Cinder, Ceph in 2017

2017-03-22 Thread Konstantin Shalygin
Hello. Trying to use the Cinder and Glance integration guide - is it still current in 2017? I tried it and got this error on oVirt 4.1 (CentOS 7.3): 2017-03-22 11:34:19 INFO

Re: [ovirt-users] LACP Bonding issue

2017-04-20 Thread Konstantin Shalygin
You should configure your LAG with these options (custom mode in oVirt): mode=4 miimon=100 xmit_hash_policy=2 lacp_rate=1. And tell your network admin to configure the switch: "Give me lacp timeout short with channel-group mode active. Also set port-channel load-balance src-dst-mac-ip (or
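On a CentOS host those options can be sketched as a bond configuration file (the options themselves are standard kernel bonding settings; the file path and interface layout are assumptions for illustration - in oVirt the same string goes into the bond's custom mode field):

```
# /etc/sysconfig/network-scripts/ifcfg-bond0 (sketch)
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BOOTPROTO=none
ONBOOT=yes
# mode=4             -> 802.3ad (LACP)
# miimon=100         -> MII link check every 100 ms
# xmit_hash_policy=2 -> layer2+3 hashing
# lacp_rate=1        -> fast LACPDUs, matches "lacp timeout short" on the switch
BONDING_OPTS="mode=4 miimon=100 xmit_hash_policy=2 lacp_rate=1"
```

The lacp_rate=1 setting only works as intended if the switch side is also set to short timeout, which is why the snippet above asks the network admin for it explicitly.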

Re: [ovirt-users] massive simultaneous vms migrations ?

2017-04-19 Thread Konstantin Shalygin
We had many migration issues with 1G and 18-25 VMs: migrations were slow, got stuck, or failed. We switched to 10G and set the migration limit to 5000 Mbps (this value doesn't actually take effect, but if the field is left unset the limit is 1000 Mbps!) - now 25 VMs migrate in ~30 seconds total. On 04/19/2017 07:41 PM, Nelson Lameiras
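A back-of-envelope check of why the 1 Gbps default limit hurts (the 4 GiB per-VM RAM figure is an assumed illustration value, not from the thread; real live migration time also depends on the guest's dirty-page rate):

```python
# Estimate how long it takes to push N VMs' worth of RAM through a link.
def migration_seconds(n_vms: int, ram_gib_per_vm: float, link_mbps: int) -> float:
    total_bits = n_vms * ram_gib_per_vm * 1024**3 * 8   # RAM to transfer, in bits
    return total_bits / (link_mbps * 10**6)             # seconds at full line rate

slow = migration_seconds(25, 4, 1000)    # effective 1 Gbps default limit: ~859 s
fast = migration_seconds(25, 4, 10000)   # 10 Gbps link: ~86 s
```

The tenfold difference matches the experience in the thread: at an effective 1 Gbps, 25 VMs simply cannot converge quickly, while 10G brings the whole batch down to under a minute.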

Re: [ovirt-users] massive simultaneous vms migrations ?

2017-04-18 Thread Konstantin Shalygin
Hello. What is your Migration Network? We have some hosts that have 60 vms, so this will create 60 vms migrating simultaneously. Some vms are under such heavy load that migration often fails (our guess is that massive simultaneous migrations do not help migration convergence) - even

Re: [ovirt-users] massive simultaneous vms migrations ?

2017-04-19 Thread Konstantin Shalygin
Systèmes et Réseaux / Systems and Networks engineer Tel: +33 5 32 09 09 70 nelson.lamei...@lyra-network.com www.lyra-network.com | www.payzen.eu Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE - Original Message ----- From: "Konstantin Shalygin"

Re: [ovirt-users] [kolla] Looking for Docker images for Cinder, Glance etc for oVirt

2017-07-09 Thread Konstantin Shalygin
If you just need Cinder (for example to use Ceph with oVirt) and not a Docker container, then try the RDO project. A few months ago I started from these images, then switched to RDO and set up a VM on the host with the oVirt manager. Still works flawlessly.

Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-03 Thread Konstantin Shalygin
/2017 11:36 AM, Vinícius Ferrão wrote: It’s the hypervisor appliance, just like RHVH. -- Best regards, Konstantin Shalygin

Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-03 Thread Konstantin Shalygin
Not for hosted engine, with ovirt-engine of course. On 07/04/2017 11:27 AM, Yaniv Kaul wrote: How are you using Ceph for hosted engine? -- Best regards, Konstantin Shalygin

Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-03 Thread Konstantin Shalygin
Hello, I’m deploying oVirt for the first time and a question has emerged: what is the good practice for enabling LACP on oVirt Node? Should I create the 802.3ad bond during oVirt Node installation in Anaconda, or should it be done later inside the Hosted Engine manager? In my

Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-03 Thread Konstantin Shalygin
comments. Is it safe to deploy this way? Should I use NFS instead? -- Best regards, Konstantin Shalygin

Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-03 Thread Konstantin Shalygin
I don't know what oVirt Node is :) And for "generic_linux" I have 95% automation (work in progress). On 07/04/2017 11:20 AM, Vinícius Ferrão wrote: Just abusing a little more, why do you use CentOS instead of oVirt Node? What’s the reason behind this choice? -- Best regards,

Re: [ovirt-users] How to change network card configuration under bridge on host?

2017-10-13 Thread Konstantin Shalygin
Yet I suspect if I change ifcfg-eno1 and ifcfg-eno2 by hand, they will just get replaced at the next reboot by oVirt. Just disable your integrated NIC in the BIOS and add a udev rule for the new NIC, so the new NIC replaces the old NIC 1:1.
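A minimal sketch of such a udev rule (the MAC address and target name are placeholders, not values from the thread):

```
# /etc/udev/rules.d/70-persistent-net.rules (sketch)
# Pin the new NIC to the name the existing oVirt bridge config expects,
# so ifcfg-eno1 keeps working unchanged after the hardware swap.
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:ff", NAME="eno1"
```

With the old integrated NIC disabled in the BIOS, the rule maps the new card to the old name and the oVirt-managed bridge configuration never notices the change.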

Re: [ovirt-users] using oVirt with newer librbd1

2017-10-24 Thread Konstantin Shalygin
with Ceph 12? Thanks. -- Best regards, Konstantin Shalygin

Re: [ovirt-users] "Enable Discard" for Ceph/Cinder Disks?

2017-11-27 Thread Konstantin Shalygin
according to http://docs.ceph.com/docs/luminous/rbd/qemu-rbd/ the use of Discard/TRIM for Ceph RBD disks is possible. OpenStack seems to have implemented it (https://www.sebastien-han.fr/blog/2015/02/02/openstack-and-ceph-rbd-discard/). In oVirt there is no option "Enable Discard" for Cinder
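At the libvirt level, discard support is a single driver attribute. A sketch of what the disk element would need to contain (the pool/volume name is a placeholder; this is not claiming oVirt generates it today for Cinder disks - that missing option is exactly the gap being asked about):

```
<disk type='network' device='disk'>
  <!-- discard='unmap' passes guest TRIM requests down to the RBD image -->
  <driver name='qemu' type='raw' cache='none' discard='unmap'/>
  <source protocol='rbd' name='volumes/volume-1234'/>
  <target dev='sda' bus='scsi'/>
</disk>
```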

Re: [ovirt-users] Empty cgroup files on centos 7.3 host

2017-12-17 Thread Konstantin Shalygin
I thought I could get my needed values from there, but all the files are empty. Looking at this post: http://lists.ovirt.org/pipermail/users/2017-January/079011.html this should work. Is this normal on CentOS 7.3 with oVirt installed? How can I get those values without monitoring all VMs

Re: [ovirt-users] Empty cgroup files on centos 7.3 host

2017-12-17 Thread Konstantin Shalygin
Specifically for IO statistics, VDSM reads the values from libvirt[1]. cgroup limiting is possible if you define it, but is unrelated. Also note that 7.3 is a bit ancient, I'm not sure how supported it is with latest 4.1 - which I'm sure will pull new dependencies from 7.4 (for example,
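Since VDSM takes I/O statistics from libvirt rather than from cgroup files, a quick way to see the same counters is `virsh domstats --block`. A small sketch that parses that kind of output (the sample text below is illustrative, not from the thread):

```python
# Parse per-disk I/O counters from `virsh domstats --block`-style output.
sample = """\
Domain: 'vm01'
  block.count=1
  block.0.name=sda
  block.0.rd.bytes=1048576
  block.0.wr.bytes=524288
"""

def block_stats(text: str) -> dict:
    """Return the block.* key/value pairs, with numeric values as ints."""
    stats = {}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("block.") and "=" in line:
            key, value = line.split("=", 1)
            stats[key] = int(value) if value.isdigit() else value
    return stats

stats = block_stats(sample)
```

This avoids touching cgroups entirely, which is why the empty cgroup files on the host don't matter for these metrics.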

Re: [ovirt-users] Empty cgroup files on centos 7.3 host

2017-12-18 Thread Konstantin Shalygin
On 12/18/2017 09:02 PM, Yaniv Kaul wrote: We provide the required scripts to install OpenShift with the EFK stack, configure it and the hosts with all relevant details to connect the two. Note that the metrics store also processes the engine and VDSM logs. Good to know. But if I still want

Re: [ovirt-users] Empty cgroup files on centos 7.3 host

2017-12-18 Thread Konstantin Shalygin
On 12/18/2017 07:58 PM, Yaniv Kaul wrote: Indeed. 4.2 provides a comprehensive solution, with integration via Collectd -> fluentd -> Elastic -> Kibana. Y. E.g. integrated into oVirt, or "admin can send metrics to ELK"?

Re: [ovirt-users] using oVirt with newer librbd1

2017-11-18 Thread Konstantin Shalygin
we're also using Cinder from the OpenStack Ocata release. The point is: a) we didn't upgrade, but started from scratch with Ceph 12; b) we didn't test all of the new features in Ceph 12 (e.g. EC pools for RBD devices) in connection with Cinder yet. We have been live on librbd1-12.2.1 for a week. All works

Re: [ovirt-users] using oVirt with newer librbd1

2017-10-25 Thread Konstantin Shalygin
-img/rados or cp/rsync inside VM. -- Best regards, Konstantin Shalygin

Re: [ovirt-users] using oVirt with newer librbd1

2017-10-24 Thread Konstantin Shalygin
-- Best regards, Konstantin Shalygin

Re: [ovirt-users] oVirt 4.2 CEPH support

2018-01-09 Thread Konstantin Shalygin
if I can just have Ceph...I would be a very happy sys admin! What stops you from using Ceph via librbd NOW? All you need is OpenStack Cinder as a volume manager wrapper. You can check the librbd version of your hosts via the oVirt manager (see attached screenshot). I read in RHV 4.2

Re: [ovirt-users] oVirt 4.2 with cheph

2018-02-19 Thread Konstantin Shalygin
Hello, does someone have experience with CephFS as a VM storage domain? I'm thinking about it but without any hints... Thanks for pointing me... This is a bad idea. Use RBD - that is the interface for VMs; CephFS is for different things. k

Re: [ovirt-users] Ceph Cinder QoS

2018-03-14 Thread Konstantin Shalygin
has someone experienced the same problem? Is there someone who has a working Cinder QoS? How exactly? Storage profiles are not present for external providers - hence the lack of this feature. For now the only way to do that is a VDSM hook. https://bugzilla.redhat.com/show_bug.cgi?id=1550145 k

[ovirt-users] How-to create migration network between 3 hosts without switch?

2019-06-18 Thread Konstantin Shalygin
Hi oVirters, I have a network topology: each host has two NIC ports, and each host is directly connected to the two other hosts:

        host2
       /     \
  host1 ----- host3

oVirt can't setup multiple

[ovirt-users] Re: [ANN] oVirt 4.3.7 First Release Candidate is now available for testing

2019-10-28 Thread Konstantin Shalygin
The oVirt Project is pleased to announce the availability of the oVirt 4.3.7 First Release Candidate for testing, as of October 18th, 2019. Sandro, thanks for the announcement. Does oVirt 4.3 still have "OpenStack Block Storage" provider support? Because we want to upgrade our oVirt 4.2.8 DCs and we use

[ovirt-users] [urgent] oVirt 4.3 -> 4.4 production upgrade: OpenStack Provider regression

2020-12-05 Thread Konstantin Shalygin
Created a ticket for this: https://bugzilla.redhat.com/show_bug.cgi?id=1904669 ___ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-le...@ovirt.org Privacy Statement: https://www.ovirt.org/privacy-policy.html oVirt Code of

[ovirt-users] [urgent] oVirt 4.3 -> 4.4 production upgrade: OpenStack Provider regression

2020-12-04 Thread Konstantin Shalygin
Hello. I upgraded our ovirt-engine from 4.3 to the latest 4.4.3.12. Everything seemed flawless, except our storage domain (Cinder). Currently our clusters can't start VMs, create disks, resize disks, etc. Only migration works. The root cause: oVirt is missing project_id in the API call: ovirt 4.3 call:

[ovirt-users] Re: [ANN] oVirt 4.4.4 is now generally available

2020-12-21 Thread Konstantin Shalygin
On 21.12.2020 16:22, Sandro Bonazzola wrote: The oVirt project is excited to announce the general availability of oVirt 4.4.4, as of December 21st, 2020. Sandro, are there any plans to fix the OpenStack provider regressions for the 4.4 release? Thanks, k

[ovirt-users] Re: [ANN] oVirt 4.4.4 is now generally available

2020-12-22 Thread Konstantin Shalygin
wner who can catch this oVirt 4.4 only bugs. On 22.12.2020 12:01, Sandro Bonazzola wrote: Il giorno lun 21 dic 2020 alle ore 18:33 Konstantin Shalygin mailto:k0...@k0ste.ru>> ha scritto: Sandro, after my mention my two bugs was closed as deprecated feature of "old Cinde

[ovirt-users] Re: [ANN] oVirt 4.4.4 is now generally available

2020-12-21 Thread Konstantin Shalygin
deprecate it just by waving a hand? 🤷‍♂️ Thanks, k Sent from my iPhone > On 21 Dec 2020, at 18:09, Sandro Bonazzola wrote: > > > >> Il giorno lun 21 dic 2020 alle ore 15:57 Konstantin Shalygin >> ha scritto: >> On 21.12.2020 16:22, Sandro Bonazzo

[ovirt-users] Re: [ANN] oVirt 4.4.4 is now generally available

2020-12-28 Thread Konstantin Shalygin
Currently the integration doesn't need NBD or krbd - just the qemu process. k Sent from my iPhone > On 28 Dec 2020, at 15:28, Benny Zlotnik wrote: > > On Tue, Dec 22, 2020 at 6:33 PM Konstantin Shalygin wrote: >> >> Sandro, FYI we are not against cinderlib integration, more th

[ovirt-users] Re: [ANN] oVirt 4.4.4 is now generally available

2021-01-20 Thread Konstantin Shalygin
I understood; moreover, the code that works with qemu already exists for the OpenStack integration. k Sent from my iPhone > On 14 Jan 2021, at 09:43, Gorka Eguileor wrote: > > If using QEMU to directly connect RBD volumes is the preferred option, > then that code would have to be added to oVirt

[ovirt-users] Re: [ANN] oVirt 4.4.4 is now generally available

2021-01-21 Thread Konstantin Shalygin
All connection data should come from cinderlib, as with the current Cinder integration. Gorka says the same. Thanks, k Sent from my iPhone > On 21 Jan 2021, at 16:54, Nir Soffer wrote: > > To make this work, engine needs to configure the ceph authentication > secrets on all hosts in the DC.

[ovirt-users] Re: Managed Block Storage and more

2021-01-22 Thread Konstantin Shalygin
Shantur, this is oVirt: you always need a master domain. Some 1 GB NFS export on the manager side is enough. k > On 22 Jan 2021, at 12:02, Shantur Rathore wrote: > > Just a bump. Any ideas anyone?

[ovirt-users] Re: Managed Block Storage and more

2021-01-18 Thread Konstantin Shalygin
Beware with Ceph and oVirt Managed Block Storage: the current integration is only possible with the kernel client, not with qemu-rbd. k Sent from my iPhone > On 18 Jan 2021, at 13:00, Shantur Rathore wrote: > > Thanks Strahil for your reply. > > Sorry just to confirm, > > 1. Are you saying Ceph on

[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Konstantin Shalygin
Yep, BZ is https://bugzilla.redhat.com/show_bug.cgi?id=1539837 https://bugzilla.redhat.com/show_bug.cgi?id=1904669 https://bugzilla.redhat.com/show_bug.cgi?id=1905113

[ovirt-users] Re: Managed Block Storage and more

2021-01-18 Thread Konstantin Shalygin
Faster than fuse-rbd, but not faster than qemu. The main issues are the kernel pagecache and client upgrades: for example, on a cluster with 700 OSDs and 1000 clients we need to update the client version for new features. With the current oVirt implementation we have to update the kernel and then reboot the host; with librbd we just need to update the package

[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Konstantin Shalygin
Shantur, I recommend looking at OpenStack or OpenNebula/Proxmox if you want to use Ceph storage. The current storage team support in oVirt can just break something and then not work on it anymore; take a look at what I'm talking about in [1], [2], [3] k [1]

[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Konstantin Shalygin
> On 19 Jan 2021, at 13:39, Shantur Rathore wrote: > > I have tested all options but oVirt seems to tick most required boxes. > > OpenStack : Too complex for use case > Proxmox : Love Ceph support but very basic clustering support > OpenNebula : Weird VM state machine. > > Not sure if you

[ovirt-users] Re: Migration from deprecated OpenStack provider to cinderlib

2021-05-07 Thread Konstantin Shalygin
Thanks Sandro, I'll wait for Eyal. k Sent from my iPhone > On 30 Apr 2021, at 10:03, Sandro Bonazzola wrote: > > > >> Il giorno ven 30 apr 2021 alle ore 08:48 Konstantin Shalygin >> ha scritto: >> Hi Sandro, >> >> The question is - will ovirt plan

[ovirt-users] Re: Updates failing

2021-07-08 Thread Konstantin Shalygin
You should put your host into maintenance mode before installing updates. Cheers, k Sent from my iPhone > On 6 Jul 2021, at 23:42, Gary Pedretty wrote: > > Getting errors trying to run dnf/yum update due to a vdsm issue. > > > yum update > Last metadata expiration check: 0:17:33 ago on Tue 06 Jul

[ovirt-users] Re: [ceph-users] osd nearfull is not detected

2021-04-27 Thread Konstantin Shalygin
Created a tracker for this issue [1] [1] https://tracker.ceph.com/issues/50533 k > On 21 Apr 2021, at 21:21, Dan van der Ster > wrote: > > Are you currently doing IO on the relevant pool? Maybe nearfull isn't > reported until

[ovirt-users] Re: oVirt 2021 Spring survey questions

2021-04-30 Thread Konstantin Shalygin
Hi Sandro, The question is: does oVirt plan to provide database migration scripts from the deprecated OpenStack provider to cinderlib? I mean, put in the survey the actual users and the quantity of images in the domain. Thanks, k Sent from my iPhone

[ovirt-users] Re: [ANN] oVirt 4.4.5 Fifth Release Candidate is now available for testing

2021-02-11 Thread Konstantin Shalygin
Are there any plans to fix [1] and [2] in 4.4? After no feedback (since Dec 2020) from the oVirt team I decided to drop the oVirt 4.4 engine and revert to 4.3. The current Cinder integration is broken in 4.4, but marked for deprecation only in 4.5 [3] Thanks, k [1]

[ovirt-users] Re: Blog post - Using Ceph only storage for oVirt datacenter

2021-07-14 Thread Konstantin Shalygin
Hi Sandro, - How is this image mounted on the oVirt host? - How can image features be changed? - How can the upmap option be added to the libvirt domain? - What does the libvirt domain look like? - How do snapshots work? - How do clones work? - How can images be migrated from one domain to another? Thanks, k Sent from my iPhone > On

[ovirt-users] Re: Blog post - Using Ceph only storage for oVirt datacenter

2021-07-14 Thread Konstantin Shalygin
Is it possible to use librbd instead of a kernel mount, like in OpenStack? Sent from my iPhone > On 14 Jul 2021, at 10:41, Sandro Bonazzola wrote: > > They are mounted as block storage

[ovirt-users] Re: Blog post - Using Ceph only storage for oVirt datacenter

2021-07-14 Thread Konstantin Shalygin
I mean something different. BLL.Storage currently removes the 'standard' Ceph integration for libvirt and uses kernel mounts instead. Again: removed, even though the legacy Cinder integration works in the qemu process without any mounts. Any plans to add the libvirt librbd variant back to oVirt? Thanks, k Sent from
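The librbd attachment being asked about looks roughly like this at the libvirt level (a sketch; the pool/image names, monitor host, and secret UUID are placeholders). The qemu process speaks the RBD protocol directly, so no kernel mount or /dev/rbd device is involved on the host:

```
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <auth username='cinder'>
    <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
  </auth>
  <source protocol='rbd' name='volumes/volume-1234'>
    <host name='mon1.example.com' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
```

This is also why the earlier posts in the thread note that with librbd only a package update is needed on the host, while the kernel-client path requires a kernel update and a reboot.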

[ovirt-users] Re: LACP across multiple switches

2021-07-27 Thread Konstantin Shalygin
Yes, why not? k Sent from my iPhone > On 27 Jul 2021, at 18:01, Jorge Visentini wrote: > >  > Hi all. > > Is it possible to configure oVirt for work with two NICs in bond/LACP across > two switches, according to the image below? > > > > > Thank you all. > You guys do a wonderful job.

[ovirt-users] Re: Direct Linux kernel/initrd boot

2021-07-27 Thread Konstantin Shalygin
Hi, > On 27 Jul 2021, at 21:24, Chris Adams wrote: > > From a pure user perspective (never looked at the oVirt code for this)... > > Right now, it looks like the way to reference the ISO domain is > iso://. Nothing specifies the domain name (I guess there can > be only one ISO domain). > >