[ovirt-users] oVirt/Ceph iSCSI Issues

2022-11-28 Thread Matthew J Black
Hi All, I've got some issues with connecting my oVirt Cluster to my Ceph Cluster via iSCSI. There are two issues, and I don't know if one is causing the other, if they are related at all, or if they are two separate, unrelated issues. Let me explain. The Situation: - I have a
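
For anyone debugging a similar setup, a useful first step is to exercise the gateway with plain open-iscsi commands from one oVirt host, outside the engine, to isolate where things break. A minimal sketch, with a placeholder portal address and an illustrative IQN (both are assumptions, not values from this thread):

    # Ask the Ceph iSCSI gateway which targets it exposes
    iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260
    # Log in to one discovered target (IQN shown is only an example)
    iscsiadm -m node -T iqn.2003-01.com.redhat.iscsi-gw:ceph-igw -p 192.0.2.10:3260 --login
    # Show active sessions and attached LUNs
    iscsiadm -m session -P 3

If discovery or login fails here, the problem is on the gateway or network side rather than in oVirt.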

[ovirt-users] oVirt & (Ceph) iSCSI

2022-09-21 Thread Matthew J Black
Hi Everybody (Hi Dr. Nick), So, next question in my ongoing saga: *somewhere* in the documentation I read that when using oVirt with multiple iSCSI paths (in my case, multiple Ceph iSCSI Gateways) we need to set up DM Multipath. My question is: is this still relevant information when using
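
For what it's worth, the Ceph iSCSI gateway documentation does describe initiator-side device-mapper multipath for exactly this multi-gateway case. A sketch of the kind of /etc/multipath.conf stanza it suggests (paraphrased, not copied from this thread; verify the exact settings against the docs for your Ceph release):

    devices {
        device {
            vendor                 "LIO-ORG"
            product                "TCMU device"
            hardware_handler       "1 alua"
            path_grouping_policy   "failover"
            path_selector          "queue-length 0"
            path_checker           tur
            prio                   alua
            prio_args              exclusive_pref_bit
            failback               60
            fast_io_fail_tmo       25
            no_path_retry          queue
        }
    }

After editing, restart the daemon (systemctl restart multipathd) and check the paths with "multipath -ll".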

Re: [ovirt-users] Ovirt & Ceph

2016-12-18 Thread Alessandro De Salvo
Hi Rajat, 3 is the bare minimum, but yes, it works well, as I said before. But you still have to decide whether you want more resiliency for oVirt, and standard NFS is not helping much there. If you plan to run your Cinder or OpenStack all-in-one box as a VM in oVirt as well, you should consider

Re: [ovirt-users] Ovirt & Ceph

2016-12-18 Thread rajatjpatel
On Sun, Dec 18, 2016 at 9:31 PM, Alessandro De Salvo <alessandro.desa...@roma1.infn.it> wrote: Thank you Alessandro for all your support. If I add one more ovirt-hyp to my setup with the same h/w config, will it work for Ceph? Regards, Rajat

Re: [ovirt-users] Ovirt & Ceph

2016-12-18 Thread Alessandro De Salvo
Hi Rajat, OK, I see. Well, so just consider that Ceph will not work at its best in your setup unless you add at least one more physical machine. The same is true for oVirt if you are only using native NFS, as you lose real HA. Having said this, of course you choose what's best for your site or affordable,

Re: [ovirt-users] Ovirt & Ceph

2016-12-18 Thread rajatjpatel
Alessandro, right now I don't have Cinder running in my setup. In case Ceph doesn't work, I can get one VM running OpenStack all-in-one, have all these disks connected to my OpenStack, and use Cinder to present the storage to my oVirt. At the same time, I am not finding a case study for this.

Re: [ovirt-users] Ovirt & Ceph

2016-12-18 Thread Alessandro De Salvo
Hi, oh, so you have only 2 physical servers? I had understood there were 3! Well, in this case Ceph would not work very well: too few resources and too little redundancy. You could try a replica 2, but it's not safe. Having a replica 3 could be forced, but you would end up with a server holding 2 replicas,
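
As background to the replica-2 vs replica-3 trade-off discussed here: the replication factor is a per-pool Ceph setting, so it is explicit and adjustable. An illustrative snippet, where the pool name "rbd" is just an example:

    # Keep 3 copies, and keep serving I/O while down to 2
    ceph osd pool set rbd size 3
    ceph osd pool set rbd min_size 2
    # Check the current values
    ceph osd pool get rbd size
    ceph osd pool get rbd min_size

With only 2 hosts, a size-3 pool necessarily places two copies on one host, which is the situation Alessandro warns about.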

Re: [ovirt-users] Ovirt & Ceph

2016-12-18 Thread rajatjpatel
Hi Alessandro, right now I have 2 physical servers where I host oVirt. These are HP ProLiant DL380s; each server has 1x 500GB SAS disk, 4x 1TB SAS disks, and 1x 500GB SSD. So right now I use only one disk, the 500GB SAS, for my oVirt to run on both servers; the rest are not in use. At present I am

Re: [ovirt-users] Ovirt & Ceph

2016-12-18 Thread Alessandro De Salvo
Hi Yaniv, > On 18 Dec 2016, at 17:37, Yaniv Kaul wrote: >> On Sun, Dec 18, 2016 at 6:21 PM, Alessandro De Salvo wrote: >> Hi, having a 3-node ceph cluster is the bare minimum you can have to make it

Re: [ovirt-users] Ovirt & Ceph

2016-12-18 Thread Alessandro De Salvo
Hi Rajat, sorry, but I do not really have a clear picture of your actual setup; can you please explain a bit more? In particular: 1) what do you mean by using 4TB for oVirt? On which machines, and how do you make it available to oVirt? 2) how do you plan to use Ceph with oVirt? I guess we can

Re: [ovirt-users] Ovirt & Ceph

2016-12-18 Thread rajatjpatel
Yaniv, if I am not wrong you are referring to this: https://www.ovirt.org/develop/release-management/features/cinderglance-docker-integration/ The only issue here right now is that this is not officially supported by RH; after finishing this POC we will be going for the RH product. Regards, Rajat Patel

Re: [ovirt-users] Ovirt & Ceph

2016-12-18 Thread Yaniv Kaul
On Sun, Dec 18, 2016 at 6:21 PM, Alessandro De Salvo <alessandro.desa...@roma1.infn.it> wrote: > Hi, having a 3-node ceph cluster is the bare minimum you can have to make it working, unless you want to have just a replica-2 mode, which is not safe. How well does it perform? > It's not

Re: [ovirt-users] Ovirt & Ceph

2016-12-18 Thread rajatjpatel
In fact, after reading a lot of KB articles, I was thinking of running one all-in-one OpenStack and using Cinder as block storage. Regards, Rajat. On Sun, Dec 18, 2016 at 8:33 PM rajatjpatel wrote: > Great, thanks! Alessandro ++ Yaniv ++ > What I want is to use around 4 TB of SAS disk for my
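
For the all-in-one idea above, the piece that ties Cinder to Ceph is the RBD backend section of cinder.conf. A minimal sketch, assuming a pool named "volumes" and a CephX user named "cinder" (both placeholders, not details from this thread):

    [DEFAULT]
    enabled_backends = ceph

    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    rbd_secret_uuid = <uuid of the libvirt secret>

oVirt would then consume these volumes through its external (Cinder) provider support, which is the integration the feature page linked earlier in the thread describes.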

Re: [ovirt-users] Ovirt & Ceph

2016-12-18 Thread rajatjpatel
Great, thanks! Alessandro ++ Yaniv ++ What I want is to use around 4 TB of SAS disk for my oVirt (which is going to be RHV 4.0.5 once the POC is 100% successful; in fact all products will be RH). I have done so much searching on DuckDuckGo for all these solutions and used a lot of references from ovirt.org &

Re: [ovirt-users] Ovirt & Ceph

2016-12-18 Thread Alessandro De Salvo
Hi, sorry, forgot to mention you may have both Gluster and Ceph on the same machines, as long as you have enough disk space. Cheers, Alessandro > On 18 Dec 2016, at 17:07, Yaniv Kaul wrote: >> On Sun, Dec 18, 2016 at 3:29 PM, rajatjpatel

Re: [ovirt-users] Ovirt & Ceph

2016-12-18 Thread Alessandro De Salvo
Hi, having a 3-node ceph cluster is the bare minimum you can have to make it work, unless you want to have just a replica-2 mode, which is not safe. It's not true that Ceph is not easy to configure: you can very easily use ceph-deploy, have Puppet configure it, or even run it in
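
To give a flavour of the ceph-deploy route mentioned here, bootstrapping a 3-node cluster looked roughly like this in that era (hostnames node1..node3 and disk sdb are placeholders; the OSD subcommand syntax has changed across ceph-deploy versions, so treat this as a sketch):

    # Generate the initial config and monitor map for three nodes
    ceph-deploy new node1 node2 node3
    # Install ceph packages and bring up the initial monitors
    ceph-deploy install node1 node2 node3
    ceph-deploy mon create-initial
    # Create one OSD per data disk on each node
    ceph-deploy osd create node1:sdb node2:sdb node3:sdb
    # Push the admin keyring so ceph commands work on the nodes
    ceph-deploy admin node1 node2 node3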

Re: [ovirt-users] Ovirt & Ceph

2016-12-18 Thread Yaniv Kaul
On Sun, Dec 18, 2016 at 3:29 PM, rajatjpatel wrote: > Dear Team, > We are using oVirt 4.0 for a POC, and I want to check what we are doing with all the oVirt gurus. > We have 2 HP ProLiant DL380s with a 500GB SAS disk, 4x 1TB SAS disks, and a 500GB SSD. > What we have done: we have

[ovirt-users] Ovirt & Ceph

2016-12-18 Thread rajatjpatel
Dear Team, we are using oVirt 4.0 for a POC, and I want to check what we are doing with all the oVirt gurus. We have 2 HP ProLiant DL380s with a 500GB SAS disk, 4x 1TB SAS disks, and a 500GB SSD. What we have done: we have installed the oVirt hypervisor on this h/w, and we have a physical server where we are running our manager