[ovirt-users] Re: Migration from deprecated OpenStack provider to cinderlib

2021-05-07 Thread Konstantin Shalygin
Thanks Sandro, I'll wait for Eyal.


k

Sent from my iPhone

> On 30 Apr 2021, at 10:03, Sandro Bonazzola  wrote:
> 
> 
> 
>> On Fri, 30 Apr 2021 at 08:48, Konstantin Shalygin wrote:
>> Hi Sandro,
>> 
>> The question is: does oVirt plan to provide database migration scripts from the
>> deprecated OpenStack provider to cinderlib? I mean, the survey could ask actual
>> users about the number of images they have in such domains.
> 
> 
> Moving this question to its own thread.
> +Eyal Shenitzky can you please reply
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PDZ7IQEHHDR77E5AOQHIHP2HN7UGRNP2/


[ovirt-users] Re: oVirt 2021 Spring survey questions

2021-04-30 Thread Konstantin Shalygin
Hi Sandro,

The question is: does oVirt plan to provide database migration scripts from the
deprecated OpenStack provider to cinderlib? I mean, the survey could ask actual users
about the number of images they have in such domains.


Thanks,
k

Sent from my iPhone
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JCNNQPHTJDW6NAO22A5VNW7LAHHT6ZKJ/


[ovirt-users] Re: [ceph-users] osd nearfull is not detected

2021-04-27 Thread Konstantin Shalygin
Created a tracker for this issue [1].



[1] https://tracker.ceph.com/issues/50533 

k

> On 21 Apr 2021, at 21:21, Dan van der Ster wrote:
> 
> Are you currently doing IO on the relevant pool? Maybe nearfull isn't
> reported until some pgstats are reported.
> 
> Otherwise sorry I haven't seen this.


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BHL4NQ3XD5L73BTIAHVYJO6NE5TIP43Z/


[ovirt-users] Re: [ANN] oVirt 4.4.5 Fifth Release Candidate is now available for testing

2021-02-11 Thread Konstantin Shalygin
Are there any plans to fix [1] and [2] in 4.4? After no feedback (since Dec 2020)
from the oVirt team I decided to drop the oVirt 4.4 engine and revert to 4.3.
The current Cinder integration is broken in 4.4, but is marked for deprecation
only in 4.5 [3].


Thanks,
k

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1904669 

[2] https://bugzilla.redhat.com/show_bug.cgi?id=1905113 

[3] https://bugzilla.redhat.com/show_bug.cgi?id=1899453 



> On 11 Feb 2021, at 18:24, Lev Veyde  wrote:
> 
> oVirt 4.4.5 Fifth Release Candidate is now available for testing
> 
> The oVirt Project is pleased to announce the availability of oVirt 4.4.5 
> Fifth Release Candidate for testing, as of February 11th, 2021.
> 
> This update is the fifth in a series of stabilization updates to the 4.4 
> series.
> How to prevent hosts entering emergency mode after upgrade from oVirt 4.4.1
> Note: Upgrading from 4.4.2 GA or later should not require re-doing these 
> steps, if already performed while upgrading from 4.4.1 to 4.4.2 GA. These are 
> only required to be done once.
> 
> Due to Bug 1837864  - 
> Host enter emergency mode after upgrading to latest build 
> If you have your root file system on a multipath device on your hosts you 
> should be aware that after upgrading from 4.4.1 to 4.4.5 you may get your 
> host entering emergency mode.
> In order to prevent this be sure to upgrade oVirt Engine first, then on your 
> hosts:
> Remove the current lvm filter while still on 4.4.1, or in emergency mode (if 
> rebooted).
> Reboot.
> Upgrade to 4.4.5 (redeploy in case of already being on 4.4.5).
> Run vdsm-tool config-lvm-filter to confirm there is a new filter in place.
> Only if not using oVirt Node:
> - run "dracut --force --add multipath” to rebuild initramfs with the correct 
> filter configuration
> Reboot.
> Documentation
> If you want to try oVirt as quickly as possible, follow the instructions on
> the Download page.
> For complete installation, administration, and usage instructions, see the
> oVirt Documentation.
> For upgrading from a previous version, see the oVirt Upgrade Guide.
> For a general overview of oVirt, see About oVirt.
> Important notes before you try it
> Please note this is a pre-release build.
> The oVirt Project makes no guarantees as to its suitability or usefulness.
> This pre-release must not be used in production.
> Installation instructions
> 
> For installation instructions and additional information please refer to:
> https://ovirt.org/documentation/ 
> This release is available now on x86_64 architecture for:
> * Red Hat Enterprise Linux 8.3 or newer
> * CentOS Linux (or similar) 8.3 or newer
> 
> This release supports Hypervisor Hosts on x86_64 and ppc64le architectures 
> for:
> * Red Hat Enterprise Linux 8.3 or newer
> * CentOS Linux (or similar) 8.3 or newer
> * oVirt Node 4.4 based on CentOS Linux 8.3 (available for x86_64 only)
> 
> See the release notes [1] for installation instructions and a list of new 
> features and bugs fixed.
> 
> Notes:
> - oVirt Appliance is already available for CentOS Linux 8
> - oVirt Node NG is already available for CentOS Linux 8
> - We found a few issues while testing on CentOS Stream so we are still basing 
> oVirt 4.4.5 Node and Appliance on CentOS Linux.
> 
> Additional Resources:
> * Read more about the oVirt 4.4.5 release highlights: 
> http://www.ovirt.org/release/4.4.5/  
> * Get more oVirt project updates on Twitter: https://twitter.com/ovirt 
> 
> * Check out the latest project news on the oVirt blog: 
> http://www.ovirt.org/blog/ 
> 
> [1] http://www.ovirt.org/release/4.4.5/  
> [2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/ 
> 
> 
> -- 
> 
> LEV VEYDE
> SENIOR SOFTWARE ENGINEER, RHCE | RHCVA | MCITP
> Red Hat Israel
> 
>  
> l...@redhat.com  | lve...@redhat.com 
>   
> TRIED. TESTED. TRUSTED. 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/OE7TLG7ZF2J7LD7NSNPUAPG3SNFKG3D7/

___
Users 

[ovirt-users] Re: Managed Block Storage and more

2021-01-22 Thread Konstantin Shalygin
Shantur, this is oVirt: you always need a master storage domain. A small 1 GB NFS
export on the manager side is enough.
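For reference, a minimal sketch of such an export on the engine host (the path,
network and the 36:36 vdsm:kvm ownership are assumptions - adjust to your
environment):

  # /etc/exports on the engine machine
  /exports/ovirt-master  192.168.101.0/24(rw,sync,anonuid=36,anongid=36,all_squash)

  mkdir -p /exports/ovirt-master
  chown 36:36 /exports/ovirt-master
  exportfs -ra
  showmount -e localhost

Then add it in the engine as a regular NFS data domain so the data center has a
master domain to activate.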


k

> On 22 Jan 2021, at 12:02, Shantur Rathore  wrote:
> 
> Just a bump. Any ideas anyone?

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5BNHSC23IQJYFPQ6NOKIEXKCXGIPXJMC/


[ovirt-users] Re: [ANN] oVirt 4.4.4 is now generally available

2021-01-21 Thread Konstantin Shalygin
All connection data should come from cinderlib, just as with the current Cinder
integration. Gorka says the same.


Thanks,
k

Sent from my iPhone

> On 21 Jan 2021, at 16:54, Nir Soffer  wrote:
> 
> To make this work, engine needs to configure the ceph authentication
> secrets on all hosts in the DC. We have code to do this for old cinder storage
> doman, but it is not used for new cinderlib setup. I'm not sure how easy is to
> use the same mechanism for cinderlib.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FZNC5EDJAQUQDDKBAEEW22QZCGKF6UQF/


[ovirt-users] Re: [ANN] oVirt 4.4.4 is now generally available

2021-01-20 Thread Konstantin Shalygin
Understood; moreover, the code that works with QEMU already exists for the
OpenStack integration.


k

Sent from my iPhone

> On 14 Jan 2021, at 09:43, Gorka Eguileor  wrote:
> 
> If using QEMU to directly connect RBD volumes is the preferred option,
> then that code would have to be added to oVirt and can be done now
> without any cinderlib changes.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4GRZYEOZGTIR5FNAQZIIXCFOPJEGKWSR/


[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Konstantin Shalygin


> On 19 Jan 2021, at 13:39, Shantur Rathore  wrote:
> 
> I have tested all options but oVirt seems to tick most required boxes.
> 
> OpenStack : Too complex for use case
> Proxmox : Love Ceph support but very basic clustering support
> OpenNebula : Weird VM state machine.
> 
> Not sure if you know that rbd-nbd support is going to be implemented to 
> Cinderlib. I could understand why oVirt wants to support CinderLib and 
> deprecate Cinder support.

Yes, we loved oVirt for "it should just work like this" - before oVirt 4.4...
Now imagine: your current cluster ran with qemu-rbd and Cinder; you upgrade oVirt and
can't do anything - you can't migrate, your images live in another oVirt pool,
engine-setup can't migrate the current images to MBS - all of it in "feature
preview", with the older integration broken and then abandoned.


Thanks,
k
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XZGDUICDWAPGMVQM6V5K4IRZE46PJ3O6/


[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Konstantin Shalygin
Yep, the BZs are:

https://bugzilla.redhat.com/show_bug.cgi?id=1539837 

https://bugzilla.redhat.com/show_bug.cgi?id=1904669 

https://bugzilla.redhat.com/show_bug.cgi?id=1905113 


Thanks,
k

> On 19 Jan 2021, at 11:05, Gianluca Cecchi  wrote:
> 
> perhaps a copy paste error about the bugzilla entries? They are the same 
> number...

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QCYCKFFM2LSZSZZIQX4Q5GEOYDO2I5GU/


[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Konstantin Shalygin
Shantur, I recommend looking at OpenStack or at OpenNebula/Proxmox if you want to
use Ceph storage.
The current storage team support in oVirt can just break something and then not work
on it anymore; take a look at what I'm talking about in [1], [2], [3].


k

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1899453 

[2] https://bugzilla.redhat.com/show_bug.cgi?id=1899453 

[3] https://bugzilla.redhat.com/show_bug.cgi?id=1899453 




> On 19 Jan 2021, at 10:40, Benny Zlotnik  wrote:
> 
> Ceph support is available via Managed Block Storage (tech preview), it
> cannot be used instead of gluster for hyperconverged setups.
> 
> Moreover, it is not possible to use a pure Managed Block Storage setup
> at all, there has to be at least one regular storage domain in a
> datacenter

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NQG6XHDYZT7WGCHDIUCY55IS7F5G5OVC/


[ovirt-users] Re: Managed Block Storage and more

2021-01-18 Thread Konstantin Shalygin
Faster than fuse-rbd, not than QEMU with librbd.
The main issues are the kernel page cache and client upgrades: for example, on a
cluster with 700 OSDs and 1000 clients we need to update the client version to get
new features. With the current oVirt implementation we need to update the kernel and
then reboot the host. With librbd we just need to update the package and reactivate
the host.
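To illustrate the operational difference, a rough sketch (package names are the
CentOS ones; maintenance/activation is done from the engine):

  # krbd path: new Ceph client features need a new kernel
  yum update kernel
  reboot                        # after evacuating all VMs

  # librbd path: update the userland library only
  # (put the host into Maintenance so VMs migrate away)
  yum update librbd1 librados2
  # then Activate the host again - no reboot needed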


k

Sent from my iPhone

> On 18 Jan 2021, at 19:13, Shantur Rathore  wrote:
> 
> Thanks for pointing that out to me Konstantin.
> 
> I understand that it would use a kernel client instead of userland rbd lib.
> Isn't it better as I have seen kernel clients 20x faster than userland??
> 
> I am probably missing something important here, would you mind detailing that.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TL32D27O5GDQZHMUX57IV5FUYFPKWAKZ/


[ovirt-users] Re: Managed Block Storage and more

2021-01-18 Thread Konstantin Shalygin
Beware of Ceph with oVirt Managed Block Storage: the current integration is only
possible with the kernel client (krbd), not with qemu-rbd.


k

Sent from my iPhone

> On 18 Jan 2021, at 13:00, Shantur Rathore  wrote:
> 
> 
> Thanks Strahil for your reply.
> 
> Sorry just to confirm,
> 
> 1. Are you saying Ceph on oVirt Node NG isn't possible?
> 2. Would you know which devs would be best to ask about the recent Ceph 
> changes?
> 
> Thanks,
> Shantur
> 
>> On Sun, Jan 17, 2021 at 4:46 PM Strahil Nikolov via Users  
>> wrote:
>> В 15:51 + на 17.01.2021 (нд), Shantur Rathore написа:
>>> Hi Strahil,
>>> 
>>> Thanks for your reply, I have 16 nodes for now but more on the way.
>>> 
>>> The reason why Ceph appeals me over Gluster because of the following 
>>> reasons.
>>> 
>>> 1. I have more experience with Ceph than Gluster.
>> That is a good reason to pick CEPH.
>>> 2. I heard in Managed Block Storage presentation that it leverages storage 
>>> software to offload storage related tasks. 
>>> 3. Adding Gluster storage limits to 3 hosts at a time.
>> Only if you wish the nodes to be both Storage and Compute. Yet, you can add 
>> as many as you wish as a compute node (won't be part of Gluster) and later 
>> you can add them to the Gluster TSP (this requires 3 nodes at a time).
>>> 4. I read that there is a limit of maximum 12 hosts in Gluster setup. No 
>>> such limitation if I go via Ceph.
>> Actually, it's about Red Hat support for RHHI and not for Gluster + oVirt.
>> As both oVirt and Gluster, as used here, are upstream projects, support is
>> best effort from the community.
>>> In my initial testing I was able to enable Centos repositories in Node Ng 
>>> but if I remember correctly, there were some librbd versions present in 
>>> Node Ng which clashed with the version I was trying to install.
>>> Does Ceph hyperconverge still make sense?
>> Yes it is. You got the knowledge to run the CEPH part, yet consider talking 
>> with some of the devs on the list - as there were some changes recently in 
>> oVirt's support for CEPH.
>> 
>>> Regards
>>> Shantur
>>> 
 On Sun, Jan 17, 2021, 9:58 AM Strahil Nikolov via Users  
 wrote:
 Hi Shantur,
 
 the main question is how many nodes you have.
 Ceph integration is still in development/experimental, and it would be
 wise to consider Gluster also. It has great integration and it's quite
 easy to work with.
 
 
 There are users reporting using CEPH with their oVirt , but I can't tell 
 how good it is.
 I doubt that oVirt nodes come with CEPH components, so you most probably
 will need to use a full-blown distro. In general, using extra software on
 oVirt nodes is quite hard.
 
 With such setup, you will need much more nodes than a Gluster setup due to 
 CEPH's requirements.
 
 Best Regards,
 Strahil Nikolov
 
 
 
 
 
 
 В неделя, 17 януари 2021 г., 10:37:57 Гринуич+2, Shantur Rathore 
  написа: 
 
 
 
 
 
 Hi all,
 
 I am planning my new oVirt cluster on Apple hosts. These hosts can only 
 have one disk which I plan to partition and use for hyper converged setup. 
 As this is my first oVirt cluster I need help in understanding few bits.
 
 1. Is Hyper converged setup possible with Ceph using cinderlib?
 2. Can this hyper converged setup be on oVirt Node Next hosts or only 
 Centos?
 3. Can I install cinderlib on oVirt Node Next hosts?
 4. Are there any pit falls in such a setup?
 
 
 Thanks for your help
 
 Regards,
 Shantur
 
 ___
 Users mailing list -- users@ovirt.org
 To unsubscribe send an email to users-le...@ovirt.org
 Privacy Statement: https://www.ovirt.org/privacy-policy.html
 oVirt Code of Conduct: 
 https://www.ovirt.org/community/about/community-guidelines/
 List Archives: 
 https://lists.ovirt.org/archives/list/users@ovirt.org/message/TPQCJSJ3MQOEKWQBF5LF4B7HCVQXKWLX/
 ___
 Users mailing list -- users@ovirt.org
 To unsubscribe send an email to users-le...@ovirt.org
 Privacy Statement: https://www.ovirt.org/privacy-policy.html
 oVirt Code of Conduct: 
 https://www.ovirt.org/community/about/community-guidelines/
 List Archives: 
 https://lists.ovirt.org/archives/list/users@ovirt.org/message/RVKIBASSQW7C66OBZ6OHQALFVRAEPMU7/
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/4IBXGXZVXAIUDS2O675QAXZRTSULPD2S/
> ___
> Users mailing list 

[ovirt-users] Re: [ANN] oVirt 4.4.4 is now generally available

2020-12-28 Thread Konstantin Shalygin
Currently the integration doesn't need NBD or krbd - just the QEMU process.


k

Sent from my iPhone

> On 28 Dec 2020, at 15:28, Benny Zlotnik  wrote:
> 
> On Tue, Dec 22, 2020 at 6:33 PM Konstantin Shalygin  wrote:
>> 
>> Sandro, FYI we are not against the cinderlib integration; moreover, we are
>> upgrading 4.3 to 4.4 because of the move to cinderlib.
>> 
>> But (!) the current Managed Block Storage implementation supports only the krbd
>> (kernel RBD) driver - that is not an option either, because the kernel client
>> always lags behind librbd, and for every update/bugfix we would have to reboot the
>> whole host instead of simply migrating all VMs away and then migrating them back.
>> Also, with krbd the host uses the kernel page cache, and the image will not be
>> unmapped if a VM crashes (QEMU with librbd is a single userland process).
>> 
> 
> There was rbd-nbd support at some point in cinderlib[1] which
> addresses your concerns, but it was removed because of some issues
> 
> +Gorka, are there any plans to pick it up again?
> 
> [1] 
> https://github.com/Akrog/cinderlib/commit/a09a7e12fe685d747ed390a59cd42d0acd1399e4
> 
> 
> 
>> So to me the current situation looks like this:
>> 
>> 1. We update deprecated OpenStack code? Why, if it is slated for removal?..
>> Never mind, just update this code...
>> 
>> 2. Hmm... the auth tests don't work; to pass the tests, just disable anything
>> related to the OpenStack project_id... and... done...
>> 
>> 3. I don't care how the current Cinder + QEMU code works, just write a new one for
>> the Linux kernel; it's optimal to use userland apps, just add wrappers (no, it's
>> not);
>> 
>> 4. The current Cinder integration requires zero configuration on oVirt hosts.
>> That's lazy - why should the oVirt administrator do nothing? Just write a manual
>> on how to install packages - oVirt administrators love anything except "Reinstall"
>> from the engine (no, they don't);
>> 
>> 5. We broke the old code. The new feature is "Cinderlib is a Technology Preview
>> feature only. Technology Preview features are not supported with Red Hat
>> production service level agreements (SLAs), might not be functionally
>> complete, and Red Hat does not recommend to use them for production".
>> 
>> 6. Oh, we broke the old code. Let's deprecate it and close PRODUCTION issues
>> (we didn't see anything).
>> 
>> 
>> And again, we do not hate the new cinderlib integration. We just want the new
>> technology not to break all PRODUCTION clusters. Almost two years ago I wrote on
>> this issue https://bugzilla.redhat.com/show_bug.cgi?id=1539837#c6 about "before
>> deprecating, let's help to migrate". For now I see that oVirt will completely
>> disable QEMU RBD support and wants to use the kernel RBD module + Python
>> os-brick + userland mappers + shell wrappers.
>> 
>> 
>> Thanks, I hope I am writing this for a reason and that it will help build bridges
>> between the community and the developers. We have been with oVirt for almost
>> 10 years and now it is a crossroads towards a different virtualization manager.
>> 
>> k
>> 
>> 
>> So I see only regressions for now; I hope we'll find a code owner who can
>> catch these oVirt-4.4-only bugs.
>> 
> 
> I looked at the bugs and I see you've already identified the problem
> and have patches attached, if you can submit the patches and verify
> them perhaps we can merge the fixes
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/E7QTTECXLUD6LIEE36FBRJ3JSOQO27DP/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XMFRPECJQP325MBR3VSBUABWDU7Z2TIQ/


[ovirt-users] Re: [ANN] oVirt 4.4.4 is now generally available

2020-12-22 Thread Konstantin Shalygin
Sandro, FYI we are not against the cinderlib integration; moreover, we are
upgrading 4.3 to 4.4 because of the move to cinderlib.


But (!) the current Managed Block Storage implementation supports only the krbd
(kernel RBD) driver - that is not an option either, because the kernel client
always lags behind librbd, and for every update/bugfix we would have to *reboot
the whole host* instead of simply migrating all VMs away and then migrating them
back. Also, with krbd the host uses the kernel page cache, and the image will not
be unmapped if a VM crashes (QEMU with librbd is a single userland process).


So to me the current situation looks like this:

1. We update deprecated OpenStack code? Why, if it is slated for removal?..
Never mind, just update this code...


2. Hmm... the auth tests don't work; to pass the tests, just disable anything
related to the OpenStack project_id... and... done...


3. I don't care how the current Cinder + QEMU code works, just write a new one
for the Linux kernel; it's optimal to use userland apps, just add wrappers
(no, it's not);


4. The current Cinder integration requires zero configuration on oVirt hosts.
That's lazy - why should the oVirt administrator do nothing? Just write a manual
on how to install packages - oVirt administrators love anything except "Reinstall"
from the engine (no, they don't);


5. We broke the old code. The new feature is "Cinderlib is a Technology Preview
feature only. Technology Preview features are not supported with Red Hat
production service level agreements (SLAs), might not be functionally
complete, and Red Hat does not recommend to use them for production".


6. Oh, we broke the old code. Let's deprecate it and close PRODUCTION
issues (we didn't see anything).



And again, we do not hate the new cinderlib integration. We just want the new
technology not to break all PRODUCTION clusters. Almost two years ago I wrote on
this issue https://bugzilla.redhat.com/show_bug.cgi?id=1539837#c6 about "before
deprecating, let's help to migrate". For now I see that oVirt will completely
disable QEMU RBD support and wants to use the kernel RBD module + Python
os-brick + userland mappers + shell wrappers.



Thanks, I hope I am writing this for a reason and that it will help build
bridges between the community and the developers. We have been with oVirt for
almost 10 years and now it is a crossroads towards a different virtualization
manager.


k


So I see only regressions for now; I hope we'll find a code owner who can
catch these oVirt-4.4-only bugs.


On 22.12.2020 12:01, Sandro Bonazzola wrote:



On Mon, 21 Dec 2020 at 18:33, Konstantin Shalygin <k0...@k0ste.ru> wrote:


Sandro, after my mention, my two bugs were closed as being about the
deprecated "old Cinder integration" feature. But actually no oVirt 4.4
doc mentions any deprecations/cautions/warnings.


Indeed, the documentation is not aligned with +Eyal Shenitzky's comments on the bugs.
A proper deprecation bug should have been opened and the documentation
should have been properly updated to clearly mark the feature as
deprecated.
Also, the new cinderlib implementation is not properly documented in the
oVirt Install Guide; I'll try to get it updated today.


What do you think: as the manager of the project, is it okay to just break
working code due to loose tests and then deprecate it with a wave of the
hand? 🤷‍♂️


I'll let the storage team lead reply to this specific question. I can
only agree this has not been properly handled.


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ULTVTH5CLQPGFABXKXF2ZBXKKLGMC42T/


[ovirt-users] Re: [ANN] oVirt 4.4.4 is now generally available

2020-12-21 Thread Konstantin Shalygin
Sandro, after my mention, my two bugs were closed as being about the deprecated
"old Cinder integration" feature. But actually no oVirt 4.4 doc mentions any
deprecations/cautions/warnings. What do you think: as the manager of the project,
is it okay to just break working code due to loose tests and then deprecate it
with a wave of the hand? 🤷‍♂️

Thanks,
k


Sent from my iPhone

> On 21 Dec 2020, at 18:09, Sandro Bonazzola  wrote:
> 
> 
> 
> 
>> On Mon, 21 Dec 2020 at 15:57, Konstantin Shalygin wrote:
>> On 21.12.2020 16:22, Sandro Bonazzola wrote:
>>>  The oVirt project is excited to announce the general availability of oVirt 
>>> 4.4.4 , as of December 21st, 2020.
>> Sandro, are there any plans to fix the OpenStack provider regressions in the 4.4
>> release?
>> 
>> 
>> 
> 
> I see you opened two bugs about it:
> Bug 1905113 - OpenStack Block Storage Provider (Cinder) regression: oVirt 4.4 
> Disk resize broken
> Bug 1904669 - oVirt 4.3 -> 4.4 production upgrade: OpenStack Block Storage 
> Provider (Cinder) regression
> 
> Please consider most of the developers are going to be on vacation due to the 
> upcoming holidays.
> I think storage team is looking into this but I see above bugs have not been 
> targeted yet so a deeper investigation may be needed.
> 
>  
>> Thanks,
>> 
>> k
>> 
> 
> 
> -- 
> Sandro Bonazzola
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
> Red Hat EMEA
> sbona...@redhat.com   
>   
> Red Hat respects your work life balance. Therefore there is no need to answer 
> this email out of your office hours.
> 
> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/GYAVGLTXHRGK27LWCGDAVFQGAIZMY2FC/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JVWZRHP65MY7WLZSWXBQFTAZCMEXWG3P/


[ovirt-users] Re: [ANN] oVirt 4.4.4 is now generally available

2020-12-21 Thread Konstantin Shalygin

On 21.12.2020 16:22, Sandro Bonazzola wrote:


The oVirt project is excited to announce the general availability of 
oVirt 4.4.4 , as of December 21st, 2020.


Sandro, are there any plans to fix the OpenStack provider regressions in the 4.4
release?



Thanks,

k

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FURWGZFCHWDZTOCJHS6PKXPTOCNEK6SK/


[ovirt-users] [urgent] oVirt 4.3 -> 4.4 production upgrade: OpenStack Provider regression

2020-12-05 Thread Konstantin Shalygin

Created a ticket for this: https://bugzilla.redhat.com/show_bug.cgi?id=1904669

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SRISU7FBUKMC4ELMDU7ORTDO7JZXQKBA/


[ovirt-users] [urgent] oVirt 4.3 -> 4.4 production upgrade: OpenStack Provider regression

2020-12-04 Thread Konstantin Shalygin

Hello.

I upgraded our ovirt-engine from 4.3 to the latest 4.4.3.12. Everything seemed
flawless, except our storage domain (Cinder).


Currently our clusters can't start VMs, create disks, resize disks, etc.
Only migration works.


The root cause: oVirt is missing the project_id in the API call:

ovirt 4.3 call:

17910:2020-12-02 20:43:32.087 1949 INFO eventlet.wsgi.server 
[req-7267b4fd-9659-4380-9297-4582ece3fe23 - - - - -] 192.168.101.10 
"POST 
/v2/07f5bf3f6dc64b85988c3779654e175e/volumes/e3df2f84-2206-4165-9001-bcace8613315/action 
HTTP/1.1" status: 200  len: 777 time: 0.4810839


ovirt 4.4 call:

2020-12-05 14:09:24.155 2031 INFO eventlet.wsgi.server 
[req-71b47b34-8ed7-410a-812f-8f662f9f4037 - - - - -] 192.168.101.10 
"POST /v2/volumes/e3df2f84-2206-4165-9001-bcace8613315/action HTTP/1.1" 
status: 404  len: 333 time: 0.2505009



provider configuration in engine database:

engine=# select * from providers where name='cinder_ceph_backened';
-[ RECORD 1 ]---------+--------------------------------------------------------

id    | 9c5800e6-5a88-403a-9c57-a501714fe816
name  | cinder_ceph_backened
description   | OpenStack Cinder with Ceph Backend
url   | http://192.168.101.20:8776
provider_type | OPENSTACK_VOLUME
auth_required | t
auth_username | admin
auth_password | 
OUDZtXsI4eOT69UKYI2DNqTFmN3c08XNAbbi3PQHq2Np319yURcIjhOJ81lKUo+T+pa/e6d5XUbPZmwulCK21fU5UrY2uJBSg8GXaVH23os7BmZzx+7V0V82LLBFYWUAAACeXY0hu9UGgQiMd0L7wPS0hU23iSib/BWnCcxY6h4ooQ0/pfKNZ10so5tKin/mAgMHNmX2YtqiYaQgZTYpDcIf9JnfqsiJKUW3xekPzTJQCIUEDbX/1Jpp5sJCW5aFHDSiy1I9CU/etAcqrzf6JMN8Mfn6X4VZjXqrg4YQ+QD6TiTwOAS7u7oJwCYopdRHvGNspc2YbPykN62NgFWwmg==

_create_date  | 2017-04-05 19:30:57.045699+07
_update_date  | 2020-12-05 13:27:31.320646+07
custom_properties |
tenant_name   | admin
plugin_type   |
auth_url  | http://192.168.101.20:5000/v2.0
additional_properties |
read_only | f
is_unmanaged  | f
auto_sync | f
user_domain_name  |
project_name  |
project_domain_name   |


So are we missing some field, or is there a bug in the code where the project_id
(admin = 07f5bf3f6dc64b85988c3779654e175e) is not concatenated into the URI?
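To illustrate with hypothetical curl calls (values taken from the logs above, auth
token header omitted):

  # oVirt 4.3 - tenant/project ID is part of the Cinder v2 path -> 200
  curl -X POST http://192.168.101.20:8776/v2/07f5bf3f6dc64b85988c3779654e175e/volumes/e3df2f84-2206-4165-9001-bcace8613315/action

  # oVirt 4.4 - project ID missing, Cinder v2 answers 404
  curl -X POST http://192.168.101.20:8776/v2/volumes/e3df2f84-2206-4165-9001-bcace8613315/action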




Thanks,

k

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/COZT6KAJUKZV7YPXGMR2FCE3QO5PN5EF/


[ovirt-users] Re: [ANN] oVirt 4.3.7 First Release Candidate is now available for testing

2019-10-28 Thread Konstantin Shalygin
The oVirt Project is pleased to announce the availability of the oVirt 
4.3.7 First Release Candidate for testing, as of October 18th, 2019.


Sandro, thanks for the announcement. Does oVirt 4.3 still have "OpenStack Block
Storage" provider support?


We want to upgrade our oVirt 4.2.8 DCs, and we use Cinder for Ceph
storage. But I don't see any "migration from the Cinder provider to
cinderlib" docs.




Thanks,

k
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TM7J5JQLINAOJ27Y2BBQEYWM5EAZMCYV/


[ovirt-users] How-to create migration network between 3 hosts without switch?

2019-06-18 Thread Konstantin Shalygin

Hi oVirters,

I have a network topology: each host has two NIC ports, and each host is
connected directly to the two other hosts.


          host2
         /     \
    host1 ----- host3


oVirt can't set up multiple migration networks per cluster. Is it maybe
possible to avoid this, or to set up some virtual device in the shell (like
VXLAN) that oVirt can work with?

Why can't we use a switch at this time? This is a 100G "migration network
only" setup (huge VMs, 256 GB RAM+), and with a switch the price for this
setup increases from $3500 to $19000.



Thanks,

k

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NIXUHDYZDGCOPYGRCZVP65TG7ERXKS4J/


Re: [ovirt-users] Ceph Cinder QoS

2018-03-14 Thread Konstantin Shalygin

has someone experienced the same problem?
Is there someone who have a working cinder qos?



How exactly? Storage (QoS) profiles are not available for external providers - so
this feature is lacking.


For now the only way to do that is a VDSM hook.


https://bugzilla.redhat.com/show_bug.cgi?id=1550145




k

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.2 with cheph

2018-02-19 Thread Konstantin Shalygin

Hello,

does someone have experience with cephfs as a vm-storage domain? I think
about that but without any hints...

Thanks for pointing me...



This is a bad idea. Use RBD - that is the interface for VMs; CephFS is for
different things.




k

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.2 CEPH support

2018-01-09 Thread Konstantin Shalygin

if I can
just have Ceph...I would be a very happy sys admin!


What stops you from starting to use Ceph via librbd NOW? All you need is
OpenStack Cinder as a volume manager wrapper.


You can check the librbd version of your hosts via the oVirt manager (see
attached screenshot).




I read in RHV 4.2 Beta release note that CEPH will be supported using iSCSI.
I have tried to check community documentation regarding CEPH support but
there was no luck. Do we have such document?


This is just the usual iSCSI:

http://docs.ceph.com/docs/master/rbd/iscsi-overview/



k

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Empty cgroup files on centos 7.3 host

2017-12-18 Thread Konstantin Shalygin

On 12/18/2017 09:02 PM, Yaniv Kaul wrote:


We provide the required scripts to install OpenShift with the EFK 
stack, configure it and the hosts with all relevant details to connect 
the two.

Note that the metrics store also processes the engine and VDSM logs.


Good to know. But what if I still want to use netdata instead of EFK? Any info
on how to get block I/O metrics from libvirt and/or how to enable
cgroup metrics? Anyone?


Thanks.

k
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Empty cgroup files on centos 7.3 host

2017-12-18 Thread Konstantin Shalygin

On 12/18/2017 07:58 PM, Yaniv Kaul wrote:


Indeed. 4.2 provides a comprehensive solution, with integration via 
Collectd -> fluentd -> Elastic -> Kibana.

Y.


E.g. integrated into oVirt, or "the admin can send metrics to ELK"?

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Empty cgroup files on centos 7.3 host

2017-12-17 Thread Konstantin Shalygin

Specifically for IO statistics, VDSM reads the values from libvirt[1].
cgroup limiting is possible if you define it, but is unrelated.
Also note that 7.3 is a bit ancient, I'm not sure how supported it is with
latest 4.1 - which I'm sure will pull new dependencies from 7.4 (for
example, libvirt!).

We use oVirt 4.1.6 on 7.4, of course.

Where can I see the I/O stats? I have never seen them in the oVirt manager.

How can I enable cgroup blkio metric collection?
Perhaps this is an outdated way, metrics should be collected in a
different way, and that should be applied in the netdata project?
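For what it's worth, the per-VM block I/O counters that VDSM reads from libvirt can
be checked directly on a host (domain name and device are examples):

  virsh -r list                      # running domains on this host
  virsh -r domblkstat <vm-name> vda  # read/write requests and bytes for one disk
  virsh -r domstats <vm-name>        # all stats, including block and net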


Thanks.


k

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Empty cgroup files on centos 7.3 host

2017-12-17 Thread Konstantin Shalygin




I thought, I can get my needed values from there, but all files are empty.

Looking at this 
post:http://lists.ovirt.org/pipermail/users/2017-January/079011.html
this should work.

Is this normal on centos 7.3 with oVirt installed? How can I get those values, 
without monitoring all VMs directly?

oVirt Version we use:
4.1.1.8-1.el7.centos
Hi Florian. Did you find an answer to this? Today netdata 1.9.0 was released. A new
feature is disk I/O and network metrics per VM - and none of this works with oVirt
out of the box. I created an issue (netdata #3144) and the network metrics part was
resolved. But the disk metrics are just empty - as you said before (oVirt
release 4.1.7).



k

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] "Enable Discard" for Ceph/Cinder Disks?

2017-11-27 Thread Konstantin Shalygin

according to http://docs.ceph.com/docs/luminous/rbd/qemu-rbd/ the use of
Discard/TRIM for Ceph RBD disks is possible. Openstack seems to have
implemented it
(https://www.sebastien-han.fr/blog/2015/02/02/openstack-and-ceph-rbd-discard/).
In oVirt there is no option "Enable Discard" for Cinder Disks (when
choosing IDE or VirtIO-SCSI driver), even when i set
"report_discard_supported = true" in Cinder. Are there plans for
supporting this in the future? Can i use it right now with custom
properties (never tried this before)?

Hello Matthias.
ovirt-engine does not have this option. Earlier I created issue
https://bugzilla.redhat.com/show_bug.cgi?id=1440230
But you can use UNMAP with the vdsm-hook-diskunmap package.
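For context, what the hook effectively does is enable discard on the disk in the
libvirt domain XML, roughly like this (sketch only; the exact XML the hook
generates may differ):

  <disk type='network' device='disk'>
    <driver name='qemu' type='raw' cache='none' discard='unmap'/>
    ...
  </disk>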
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] using oVirt with newer librbd1

2017-11-18 Thread Konstantin Shalygin

we're also using cinder from openstack ocata release.

the point is
a) we didn't upgrade, but started from scratch with ceph 12
b) we didn't test all of the new features in ceph 12 (eg. EC pools for
RBD devices) in connection with cinder yet

We have been live on librbd1-12.2.1 for a week. Everything works okay.
I upgraded Ceph from 11.2.0 to 11.2.1 - not Luminous, because it seems
12.2.1 is stable only when the cluster was started from Luminous
(http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-November/022522.html).

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] using oVirt with newer librbd1

2017-10-25 Thread Konstantin Shalygin

On 10/25/2017 03:30 PM, Matthias Leopold wrote:

we're also using cinder from openstack ocata release.

the point is
a) we didn't upgrade, but started from scratch with ceph 12
b) we didn't test all of the new features in ceph 12 (eg. EC pools for 
RBD devices) in connection with cinder yet 


Thanks. We use EC pools with a replicated cache pool in front (cache tiering) -
the only way to use EC with RBD before Ceph 12.
We have been half a year on Ceph with oVirt in production. The best storage
experience; the only fault you can find is that it is impossible to move
images between pools - only manual migration with qemu-img/rados or
cp/rsync inside the VM.
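The manual migration mentioned above is roughly this (a sketch; pool and image
names are examples, and the VM should be stopped first):

  # copy an RBD image to another pool with qemu-img
  qemu-img convert -p -f raw rbd:old-pool/vm-disk-01 -O raw rbd:new-pool/vm-disk-01

  # or stream it with the rbd tool
  rbd export old-pool/vm-disk-01 - | rbd import - new-pool/vm-disk-01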


--
Best regards,
Konstantin Shalygin

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] using oVirt with newer librbd1

2017-10-24 Thread Konstantin Shalygin

On 10/24/2017 07:26 PM, Matthias Leopold wrote:

yes, we have a Ceph 12 Cluster and are using librbd1-12.2.1 on oVirt 
Hypervisor Hosts, which we're installed with CentOS 7 and Ceph 
upstream repos, not oVirt Node (for this exact purpose).

On oVirt Hypervisor hosts we use librbd1-0.94.5-1.el7.x86_64
Since 
/usr/lib64/libvirt/storage-backend/libvirt_storage_backend_rbd.so is 
using /lib64/librbd.so.1 our VMs with disks from Cinder storage domain 
are using Ceph 12 all the way. 
Our OpenStack cinder is openstack-cinder-10.0.0-1.el7.noarch with 
librbd1-10.2.3-0.el7.x86_64
What version of Cinder should I have to work with Ceph 12? Or should I just
upgrade python-rbd/librados/librbd1/etc.?
Are you also using a newer librbd1? 
Not for now, as you can see. I opened "ovirt-users" for my questions
about Ceph 12 and then saw your fresh message. I think you are the first who
has used Ceph 12 with oVirt.


--
Best regards,
Konstantin Shalygin

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] using oVirt with newer librbd1

2017-10-24 Thread Konstantin Shalygin

we want to use a Ceph cluster as the main storage for our oVirt 4.1.x
datacenter. We successfully tested using librbd1-12.2.1-0.el7 package
from Ceph repos instead of the standard librbd1-0.94.5-2.el7 from CentOS
7 in an oVirt virtualization node. Are there any caveats when doing so?
Will this work in oVirt 4.2?


Hello Matthias. Can I ask a separate question?
At this time we are at oVirt 4.1.3.5 and the Ceph cluster is at 11.2.0 (Kraken).
In a few weeks I plan to expand the cluster, and I would like to upgrade to
Ceph 12 (Luminous) for BlueStore support.

So my question is: have you tested oVirt with Ceph 12?


Thanks.

--
Best regards,
Konstantin Shalygin

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How to change network card configuration under bridge on host?

2017-10-13 Thread Konstantin Shalygin

Yet I suspect
if I change ifcfg-eno1 and ifcfg-eno2 by hand, they will just get
replaced at the next reboot by ovirt.
Just disable your integrated NIC in the BIOS and add a udev rule (for the new
NIC), so the new NIC replaces the old NIC 1:1.
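A minimal sketch of such a rule (the MAC address and interface name are examples
for your new NIC):

  # /etc/udev/rules.d/70-persistent-net.rules
  SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:01", NAME="eno1"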

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [kolla] Looking for Docker images for Cinder, Glance etc for oVirt

2017-07-09 Thread Konstantin Shalygin
If you just need Cinder (for example, to use Ceph with oVirt), and not a
Docker container, then try the RDO project.
A few months ago I started from these images, then switched to RDO and
set up a VM on the host with the oVirt manager. It still works flawlessly.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-03 Thread Konstantin Shalygin
Red Hat Virtualization Host (RHVH) is a minimal operating system based 
on Red Hat Enterprise Linux that is designed to provide a simple 
method for setting up a physical machine to act as a hypervisor in a 
Red Hat Virtualization environment. The minimal operating system 
contains only the packages required for the machine to act as a 
hypervisor, and features a Cockpit user interface for monitoring the 
host and performing administrative tasks.


As the administrator I myself know which packages are required for my hardware,
and I don't need Cockpit. So CentOS minimal is my choice.



On 07/04/2017 11:36 AM, Vinícius Ferrão wrote:

It’s the hypervisor appliance, just like RHVH.


--
Best regards,
Konstantin Shalygin

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-03 Thread Konstantin Shalygin

Not for the hosted engine - with a standalone ovirt-engine, of course.


On 07/04/2017 11:27 AM, Yaniv Kaul wrote:

How are you using Ceph for hosted engine?


--
Best regards,
Konstantin Shalygin

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-03 Thread Konstantin Shalygin

I don't know what oVirt Node is :)

And for "generic_linux" I have 95% automation (work in progress).


On 07/04/2017 11:20 AM, Vinícius Ferrão wrote:
Just abusing a little more, why you use CentOS instead of oVirt Node? 
What’s the reason behind this choice?


--
Best regards,
Konstantin Shalygin

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-03 Thread Konstantin Shalygin

Yes, I do the deployment in four steps:

1. Install CentOS via iDRAC.
2. Attach the VLAN to the 10G physdev via iproute (see the sketch after this
list). This is the one manual step; it may be replaced by DHCP management, but
for now I only have 2x10G fiber, without any DHCP.
3. Run the ovirt_deploy Ansible role.
4. Attach the oVirt networks after host activation.
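The manual VLAN attach in step 2 is roughly this (interface name, VLAN ID and
address are examples):

  ip link add link enp65s0f0 name enp65s0f0.101 type vlan id 101
  ip addr add 192.168.101.31/24 dev enp65s0f0.101
  ip link set enp65s0f0.101 up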

About iSCSI and NFS: I don't know anything about them. I use Ceph.


On 07/04/2017 10:50 AM, Vinícius Ferrão wrote:

Thanks, Konstantin.

Just to be clear enough: the first deployment would be made on classic 
eth interfaces and later after the deployment of Hosted Engine I can 
convert the "ovirtmgmt" network to a LACP Bond, right?


Another question: what about iSCSI Multipath on Self Hosted Engine? 
I've looked through the net and only found this issue: 
https://bugzilla.redhat.com/show_bug.cgi?id=1193961


Appears to be unsupported as today, but there's an workaround on the 
comments. It's safe to deploy this way? Should I use NFS instead?


--
Best regards,
Konstantin Shalygin

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-03 Thread Konstantin Shalygin

Hello,

I’m deploying oVirt for the first time and a question has emerged: what is the 
good practice to enable LACP on oVirt Node? Should I create 802.3ad bond during 
the oVirt Node installation in Anaconda, or it should be done in a posterior 
moment inside the Hosted Engine manager?

In my deployment we have 4x GbE interfaces. eth0 and eth1 will be a LACP bond 
for management and servers VLAN’s, while eth1 and eth2 are Multipath iSCSI 
disks (MPIO).

Thanks,
V.


Do all your network settings in the ovirt-engine webadmin.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] LACP Bonding issue

2017-04-20 Thread Konstantin Shalygin

You should configure your LAG with these options (custom mode in oVirt):

mode=4 miimon=100 xmit_hash_policy=2 lacp_rate=1

And tell your network admin to configure the switch:
"Give me LACP timeout short with channel-group mode active. Also set
port-channel load-balance src-dst-mac-ip (or src-dst-ip / src-dst-mac)".


You also need to understand that LACP balancing works per flow. You
can take two hosts and run "iperf -c xxx.xxx.xxx.xxx -i 0.1 -d",
and on one physical interface you should see 1 Gb RX, and on the other physical
interface 1 Gb TX.
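On a plain CentOS host the same options end up in the ifcfg file, roughly like this
(a sketch; when the bond is managed through the engine you paste the same string
into the bond's custom mode field instead):

  # /etc/sysconfig/network-scripts/ifcfg-bond0
  DEVICE=bond0
  BONDING_OPTS="mode=4 miimon=100 xmit_hash_policy=2 lacp_rate=1"
  BOOTPROTO=none
  ONBOOT=yes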



Hi,

I discovered an issue with my LACP configuration and i am having trouble
figuring it out.  I am running 2 Dell Powered 610's with 4 broadcomm nics.
I am trying to bond them together, however only one of the NICs goes active
no matter how much traffic I push across the links.

I have spoken to my network admin, and says that the switch ports are
configured and can only see one active link on the switch.

Thanks
Bryan


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] massive simultaneous vms migrations ?

2017-04-19 Thread Konstantin Shalygin

We had many migration issues with 1G and 18-25 VMs.

Migrations were slow, stuck, or failed. We switched to 10G and set the migration
limit to 5000 Mbps (actually this doesn't quite work, but if you don't set this
field, the limit is 1000 Mbps!) - 25 VMs now migrate in about 30 seconds in total.



On 04/19/2017 07:41 PM, Nelson Lameiras wrote:

1000 Mbps full duplex


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] massive simultaneous vms migrations ?

2017-04-19 Thread Konstantin Shalygin

I mean, what is your hardware? 1G? 40G?


On 04/19/2017 04:16 PM, Nelson Lameiras wrote:

I'm using ovirtmgmt network for migrations.

I'm guetting the vibe that using a dedicated network for migration is "good 
practice"...

cordialement, regards,


Nelson LAMEIRAS
Ingénieur Systèmes et Réseaux / Systems and Networks engineer
Tel: +33 5 32 09 09 70
nelson.lamei...@lyra-network.com

www.lyra-network.com | www.payzen.eu





Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE

- Original Message -----
From: "Konstantin Shalygin" <k0...@k0ste.ru>
To: users@ovirt.org, "Nelson Lameiras" <nelson.lamei...@lyra-network.com>
Sent: Wednesday, April 19, 2017 3:15:29 AM
Subject: Re: Re: [ovirt-users] massive simultaneous vms migrations ?

Hello.

What is your Migration Network?



We have some hosts that have 60 vms. So this will create a 60 vms migrating 
simultaneously.
Some vms are under so much heavy loads that migration fails often (our guess is that 
massive simultaneous migrations does not help migration convergence) - even with 
"suspend workload if needed" migraton policy.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] massive simultaneous vms migrations ?

2017-04-18 Thread Konstantin Shalygin

Hello.

What is your Migration Network?



We have some hosts that have 60 vms. So this will create a 60 vms migrating 
simultaneously.
Some vms are under so much heavy loads that migration fails often (our guess is that 
massive simultaneous migrations does not help migration convergence) - even with 
"suspend workload if needed" migraton policy.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt, Cinder, Ceph in 2017

2017-03-22 Thread Konstantin Shalygin

Hello.

I am trying to use the "Cinder and Glance integration" guide
- is it still current in 2017? I tried to use it and got this error on oVirt
4.1 (CentOS 7.3):


2017-03-22 11:34:19 INFO otopi.plugins.ovirt_engine_setup.dockerc.config 
config._misc_deploy:357 Creating rabbitmq
2017-03-22 11:34:19 INFO otopi.plugins.ovirt_engine_setup.dockerc.config 
config._misc_deploy:397 Starting rabbitmq
2017-03-22 11:34:19 DEBUG 
otopi.plugins.ovirt_engine_setup.dockerc.config config._misc_deploy:402 
Container rabbitmq: 
da8d020b19010f0a7f1f6ce19977791c20b0f2eabd562578be9e06fd4e116172
2017-03-22 11:34:19 DEBUG otopi.context context._executeMethod:142 
method exception

Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/otopi/context.py", line 132, 
in _executeMethod

method['method']()
  File 
"/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/dockerc/config.py", 
line 434, in _misc_deploy

raise ex
APIError: 400 Client Error: Bad Request ("{"message":"starting container 
with HostConfig was deprecated since v1.10 and removed in v1.12"}")
2017-03-22 11:34:19 ERROR otopi.context context._executeMethod:151 
Failed to execute stage 'Misc configuration': 400 Client Error: Bad 
Request ("{"message":"starting container with HostConfig was deprecated 
since v1.10 and removed in v1.12"}")
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119 
aborting 'Yum Transaction'
2017-03-22 11:34:19 INFO otopi.plugins.otopi.packagers.yumpackager 
yumpackager.info:80 Yum Performing yum transaction rollback

Loaded plugins: fastestmirror, versionlock
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119 
aborting 'DWH Engine database Transaction'
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119 
aborting 'Database Transaction'
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119 
aborting 'Version Lock Transaction'
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119 
aborting 'DWH database Transaction'
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119 
aborting 'File transaction for '/etc/ovirt-engine/firewalld/ovirt-http.xml''
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119 
aborting 'File transaction for 
'/etc/ovirt-engine/firewalld/ovirt-https.xml''
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119 
aborting 'File transaction for 
'/etc/ovirt-engine/firewalld/ovirt-vmconsole-proxy.xml''
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119 
aborting 'File transaction for 
'/etc/ovirt-engine/firewalld/ovirt-local-cinder.xml''
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119 
aborting 'File transaction for 
'/etc/ovirt-engine/firewalld/ovirt-local-glance.xml''
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119 
aborting 'File transaction for 
'/etc/ovirt-engine/firewalld/ovirt-imageio-proxy.xml''
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119 
aborting 'File transaction for 
'/etc/ovirt-engine/firewalld/ovirt-websocket-proxy.xml''
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119 
aborting 'File transaction for 
'/etc/ovirt-engine/firewalld/ovirt-fence-kdump-listener.xml''
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119 
aborting 'File transaction for 
'/etc/ovirt-engine/firewalld/ovirt-postgres.xml''
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119 
aborting 'File transaction for 
'/etc/ovirt-engine/firewalld/ovirt-postgres.xml''
2017-03-22 11:34:19 DEBUG otopi.transaction transaction.abort:119 
aborting 'File transaction for '/etc/ovirt-engine/iptables.example''
2017-03-22 11:34:19 DEBUG otopi.context context.dumpEnvironment:760 
ENVIRONMENT DUMP - BEGIN
2017-03-22 11:34:19 DEBUG otopi.context context.dumpEnvironment:770 ENV 
BASE/error=bool:'True'
2017-03-22 11:34:19 DEBUG otopi.context context.dumpEnvironment:770 ENV 
BASE/exceptionInfo=list:'[(, 
APIError(HTTPError('400 Client Error: Bad Request',),), object at 0x2923fc8>)]'
2017-03-22 11:34:19 DEBUG otopi.context context.dumpEnvironment:774 
ENVIRONMENT DUMP - END


Okay, back to 2016: build the RPMs (and deps):

python-docker-py-1.9.0-1
docker-1.10.3-59

I installed them and ran engine-setup. Now "Deploy Cinder container on this
host" & "Deploy Glance container on this host" succeed. In the oVirt
Engine web admin I see the external provider
"local-glance-image-repository", but I actually can't use them. Then I
figured out this is an unsupported method?


Maybe I should go another way for "oVirt with Ceph"? Like a VM with the
OpenStack stuff deployed with Ansible, or other
ready-to-go Docker containers, because I need it faster; therefore I want
a way that "makes it easier to accomplish without the need
for a manual, complex configuration".


Thanks.
___
Users mailing list

Re: [ovirt-users] I wrote an oVirt thing

2016-11-29 Thread Konstantin Shalygin
Use case: explaining what a virtual machine is to an accountant or a stockman is
beyond my powers. But they do understand what Remote Desktop is, and how to do
"Start menu -> Programs -> Remote Work -> password -> Enter".



In this exchange I like the fact that I learned about the deprecation of this
package and that I need to start writing against another library instead of
calling a subprocess.


About the security concerns: this is acceptable for our company. Whoever doesn't
need this patch can easily disable it. This is not a package [1], only a
playbook to build it.



[1] https://wiki.archlinux.org/index.php/Arch_User_Repository

On 11/29/2016 07:06 PM, Yaniv Kaul wrote:



On Tue, Nov 29, 2016 at 3:40 AM, Konstantin Shalygin <k0...@k0ste.ru> wrote:


Will ovirt-shell be deprecated and unsupported entirely, or only some functions
of ovirt-shell (or the whole ovirt-engine-cli package)?

We use ovirt-shell on client desktops that connect to SPICE
consoles for work (users provided by LDAP on ovirt-engine), much like
via RDP. For this I wrote a very quick-hack patch for ovirt-shell and a
GUI for entering the password (https://github.com/k0ste/ovirt-pygtk).
Very simple, but over the Internet people use SPICE without the complaints
about packet loss and disconnects they had with RDP.


Can you further explain the use case? I assume the user portal is not 
good enough for some reason?




BTW, the ovirt-shell is something we deprecated. It is working
on top of
the v3 api, which we plan to remove in 4.2.
So better not use it.



You can start maintaining it. For example, I maintain packages for Arch
Linux: ovirt-engine-cli
(https://aur.archlinux.org/packages/ovirt-engine-cli) and
ovirt-engine-sdk-python
(https://aur.archlinux.org/packages/ovirt-engine-sdk-python).


Hi,

It somehow looks like a fork of the CLI (due to the added patch[1]).
I'm not sure how happy I am about it, considering the patch is adding 
a feature with security issues (there is a reason we do not support 
password passed via the command line - it's somewhat less secure).
Since you are already checking for the CLI rc file[2], just add the 
password to it and launch with it (in a temp file in the temp 
directory with the right permissions, etc...)


BTW, note that the attempt to delete the password from memory[3] may 
or may not work. After all, it's a copy of what you got 
from entry.get_text() few lines before.
And Python GC is not really to be relied upon to delete things ASAP 
anyway. There are some lovely discussions on the Internet about it. 
For example[4].

Y.

[1] 
https://github.com/k0ste/ovirt-pygtk/blob/master/add_password_option.patch

[2] https://github.com/k0ste/ovirt-pygtk/blob/master/ovirt-pygtk.py#L81
[3] https://github.com/k0ste/ovirt-pygtk/blob/master/ovirt-pygtk.py#L71
[4] 
http://stackoverflow.com/questions/728164/securely-erasing-password-in-memory-python




  My workstation at work is running Ubuntu, and I do not
believe that ovirt-shell is packaged for it.


-- 
Best regards,

Konstantin Shalygin



___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>




--
Best regards,
Konstantin Shalygin

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] I wrote an oVirt thing

2016-11-28 Thread Konstantin Shalygin
Will ovirt-shell be deprecated and unsupported entirely, or only some functions of
ovirt-shell (or the whole ovirt-engine-cli package)?


We use ovirt-shell on client desktops that connect to SPICE consoles
for work (users provided by LDAP on ovirt-engine), much like via RDP. For
this I wrote a very quick-hack patch for ovirt-shell and a GUI for entering the
password (https://github.com/k0ste/ovirt-pygtk). Very simple, but over the
Internet people use SPICE without the complaints about packet loss and
disconnects they had with RDP.



BTW, the ovirt-shell is something we deprecated. It is working on top of
the v3 api, which we plan to remove in 4.2.
So better not use it.



You can start maintaining it. For example, I maintain packages for Arch Linux:
ovirt-engine-cli (https://aur.archlinux.org/packages/ovirt-engine-cli)
and ovirt-engine-sdk-python
(https://aur.archlinux.org/packages/ovirt-engine-sdk-python).



  My workstation at work is running Ubuntu, and I do not believe that 
ovirt-shell is packaged for it.


--
Best regards,
Konstantin Shalygin


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users