[ovirt-users] oVirt/Ceph iSCSI Issues

2022-11-28 Thread Matthew J Black
Hi All,

I've got some issues with connecting my oVirt Cluster to my Ceph Cluster via 
iSCSI. There are two issues, and I don't know if one is causing the other, if 
they are related at all, or if they are two separate, unrelated issues. Let me 
explain.

The Situation
-
- I have a working three node Ceph Cluster (Ceph Quincy on Rocky Linux 8.6)
- The Ceph Cluster has four Storage Pools of between 4 and 8 TB each
- The Ceph Cluster has three iSCSI Gateways
- There is a single iSCSI Target on the Ceph Cluster
- The iSCSI Target has all three iSCSI Gateways attached
- The iSCSI Target has all four Storage Pools attached
- The four Storage Pools have been assigned LUNs 0-3
- I have set up (Discovery) CHAP Authorisation on the iSCSI Target
- I have a working three node self-hosted oVirt Cluster (oVirt v4.5.3 on Rocky 
Linux 8.6)
- The oVirt Cluster has (in addition to the hosted_storage Storage Domain) 
three GlusterFS Storage Domains
- I can ping all three Ceph Cluster Nodes to/from all three oVirt Hosts
- The iSCSI Target on the Ceph Cluster has all three oVirt Hosts' Initiators 
attached
- Each Initiator has all four Ceph Storage Pools attached
- I have set up CHAP Authorisation on the iSCSI Target's Initiators
- The Ceph Cluster Admin Portal reports that all three Initiators are 
"logged_in"
- I have previously connected Ceph iSCSI LUNs to the oVirt Cluster successfully 
(as an experiment), but had to remove and reinstate them for the "final" 
version(?).
- The oVirt Admin Portal (ie HostedEngine) reports that Initiators 1 & 2 (ie 
oVirt Hosts 1 & 2) are "logged_in" to all three iSCSI Gateways
- The oVirt Admin Portal reports that Initiator 3 (ie oVirt Host 3) is 
"logged_in" to iSCSI Gateways 1 & 2
- I can "force" Initiator 3 to become "logged_in" to iSCSI Gateway 3, but when 
I do this it is *not* persistent
- oVirt Hosts 1 & 2 can/have discovered all three iSCSI Gateways
- oVirt Hosts 1 & 2 can/have discovered all four LUNs/Targets on all three 
iSCSI Gateways
- oVirt Host 3 can only discover 2 of the iSCSI Gateways
- For Target/LUN 0 oVirt Host 3 can only "see" the LUN provided by iSCSI 
Gateway 1
- For Targets/LUNs 1-3 oVirt Host 3 can only "see" the LUNs provided by iSCSI 
Gateways 1 & 2
- oVirt Host 3 can *not* "see" any of the Targets/LUNs provided by iSCSI 
Gateway 3
- When I create a new oVirt Storage Domain for any of the four LUNs:
  - I am presented with a message saying "The following LUNs are already in 
use..."
  - I am asked to "Approve operation" via a checkbox, which I do
  - As I watch the oVirt Admin Portal I can see the new iSCSI Storage Domain 
appear in the Storage Domain list, and then after a few minutes it is removed
  - After those few minutes I am presented with this failure message: "Error 
while executing action New SAN Storage Domain: Network error during 
communication with the Host."
- I have looked in the engine.log and all I could find that was relevant (as 
far as I know) was this:
~~~
2022-11-28 19:59:20,506+11 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] 
(default task-1) [77b0c12d] Command 'CreateStorageDomainVDSCommand(HostName = 
ovirt_node_1.mynet.local, 
CreateStorageDomainVDSCommandParameters:{hostId='967301de-be9f-472a-8e66-03c24f01fa71',
 storageDomain='StorageDomainStatic:{name='data', 
id='2a14e4bd-c273-40a0-9791-6d683d145558'}', 
args='s0OGKR-80PH-KVPX-Fi1q-M3e4-Jsh7-gv337P'})' execution failed: 
VDSGenericException: VDSNetworkException: Message timeout which can be caused 
by communication issues

2022-11-28 19:59:20,507+11 ERROR 
[org.ovirt.engine.core.bll.storage.domain.AddSANStorageDomainCommand] (default 
task-1) [77b0c12d] Command 
'org.ovirt.engine.core.bll.storage.domain.AddSANStorageDomainCommand' failed: 
EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: 
VDSGenericException: VDSNetworkException: Message timeout which can be caused 
by communication issues (Failed with error VDS_NETWORK_ERROR and code 5022)
~~~

I cannot see/detect any "communication issue" - but then again I'm not 100% 
sure what I should be looking for.
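
In case it helps anyone reproduce this, the checks I have been running from 
each oVirt Host look something like the following (the portal IPs and the 
target IQN are placeholders, not my real values, and discovery assumes the 
CHAP discovery credentials are already set in /etc/iscsi/iscsid.conf):

~~~
# What targets does each host discover from each gateway portal?
iscsiadm -m discovery -t sendtargets -p 192.168.1.101:3260

# Which sessions are currently logged in, and via which portals?
iscsiadm -m session -P 1

# Manually log Host 3 in to the "missing" gateway
iscsiadm -m node -T iqn.2003-01.com.redhat.iscsi-gw:ceph-igw \
    -p 192.168.1.103:3260 --login

# Confirm multipath sees one path per gateway for each LUN
multipath -ll
~~~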

I have looked on-line for an answer, and apart from not being able to get past 
Red Hat's "wall" to see the solutions that they have, all I could find that was 
relevant was this: 
https://lists.ovirt.org/archives/list/de...@ovirt.org/thread/AVLORQNOLJHRWMHTM4WCDRVP7VSIZBGR/
 . If this *is* relevant then there is not enough context here for me to 
proceed (ie/eg *where* (which host/vm) should that command be run?).

I also found (for a previous version of oVirt) notes about manually modifying 
the Postgres DB to resolve a similar issue. While I am more than comfortable 
doing this (I've been an SQL DBA for well over 20 years), it seems like asking 
for trouble - at least until I hear back from the oVirt Devs that it is OK to 
do - and of course, I'll need the relevant commands / locations / 
authorisations to get into the DB.

Questions
-
- 

[ovirt-users] oVirt & (Ceph) iSCSI

2022-09-21 Thread Matthew J Black
Hi Everybody (Hi Dr. Nick),

So, next question in my on-going saga: *somewhere* in the documentation I read 
that when using oVirt with multiple iSCSI paths (in my case, multiple Ceph 
iSCSI Gateways) we need to set up DM Multipath.

My question is: Is this still relevant information when using oVirt v4.5.2?

Relevant link referred to by the oVirt Documentation:
- https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/dm_multipath/
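
For reference, the ceph-iscsi upstream documentation suggests a multipath 
device section along these lines - and, as far as I can tell, VDSM manages 
/etc/multipath.conf on oVirt hosts and will rewrite it unless it carries a 
"# VDSM PRIVATE" line near the top. Treat this as a sketch from the Ceph docs, 
not an oVirt-blessed config:

~~~
# /etc/multipath.conf (excerpt), per the ceph-iscsi documentation
devices {
    device {
        vendor                 "LIO-ORG"
        product                "TCMU device"
        hardware_handler       "1 alua"
        path_grouping_policy   "failover"
        path_selector          "queue-length 0"
        failback               60
        path_checker           tur
        prio                   alua
        prio_args              exclusive_pref_bit
        fast_io_fail_tmo       25
        no_path_retry          queue
    }
}
~~~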

Cheers

Dulux-Oz


Re: [ovirt-users] Ovirt & Ceph

2016-12-18 Thread Alessandro De Salvo
Hi Rajat,
3 is the bare minimum, but yes, it works well, as I said before. But you still 
have to decide whether you want more resiliency for ovirt, and standard NFS is 
not helping much there.
If you plan to run your cinder or openstack all-in-one box as a VM in ovirt as 
well, you should consider moving from standard NFS to something else, like 
gluster.
Cheers,

  Alessandro

> On 18 Dec 2016, at 18:56, rajatjpatel wrote:
> 
> [...]


Re: [ovirt-users] Ovirt & Ceph

2016-12-18 Thread rajatjpatel
On Sun, Dec 18, 2016 at 9:31 PM, Alessandro De Salvo <
alessandro.desa...@roma1.infn.it> wrote:

> Alessandro


Thank you, Alessandro, for all your support. If I add one more ovirt-hyp to
my setup with the same h/w config, will it work for ceph?

Regards
Rajat


Re: [ovirt-users] Ovirt & Ceph

2016-12-18 Thread Alessandro De Salvo
Hi Rajat,
OK, I see. Well, just consider that ceph will not work at its best in your 
setup unless you add at least one more physical machine. The same is true for 
ovirt if you are only using native NFS, as you lose real HA.
Having said this, of course you choose what's best and affordable for your 
site, but your setup looks quite fragile to me. Happy to help more if you need.
Regards,

   Alessandro

> On 18 Dec 2016, at 18:22, rajatjpatel wrote:
> 
> [...]

Re: [ovirt-users] Ovirt & Ceph

2016-12-18 Thread rajatjpatel
Alessandro,

Right now I don't have cinder running in my setup. If ceph doesn't work out, 
I can get one VM running an all-in-one openstack, connect all these disks to 
it, and use cinder to present the storage to my ovirt.

At the same time, I am not finding a case study for this.

Regards
Rajat

Hi


Regards,
Rajat Patel

http://studyhat.blogspot.com
FIRST THEY IGNORE YOU...
THEN THEY LAUGH AT YOU...
THEN THEY FIGHT YOU...
THEN YOU WIN...


On Sun, Dec 18, 2016 at 9:17 PM, Alessandro De Salvo <
alessandro.desa...@roma1.infn.it> wrote:

> [...]

Re: [ovirt-users] Ovirt & Ceph

2016-12-18 Thread Alessandro De Salvo
Hi,
oh, so you have only 2 physical servers? I understood they were 3! Well, in 
this case ceph would not work very well: too few resources and too little 
redundancy. You could try a replica 2, but it's not safe. Having a replica 3 
could be forced, but you would end up with one server holding 2 replicas, 
which is dangerous/useless.
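
To make the replica trade-off concrete, this is roughly how a pool's 
replication is inspected and forced (the pool name "rbd" is a placeholder; a 
sketch, not a recommendation for a 2-node setup):

~~~
# Show the current replica count and minimum for a pool
ceph osd pool get rbd size
ceph osd pool get rbd min_size

# Force replica 2 (possible, but as said above it is not safe)
ceph osd pool set rbd size 2
ceph osd pool set rbd min_size 1
~~~
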
Okay, so you use nfs as the storage domain, but in your setup HA is not 
guaranteed: if a physical machine goes down and it's the one where the storage 
domain resides, you are lost. Why not use gluster instead of nfs for the ovirt 
disks? You can still reserve a small gluster space for the non-ceph machines 
(for example a cinder VM) and use ceph for the rest. Where do you have your 
cinder running?
Cheers,

Alessandro

> On 18 Dec 2016, at 18:05, rajatjpatel wrote:
> 
> [...]

Re: [ovirt-users] Ovirt & Ceph

2016-12-18 Thread rajatjpatel
Hi Alessandro,

Right now I have 2 physical servers hosting ovirt; these are HP ProLiant 
DL380s, each with 1 x 500GB SAS disk, 4 x 1TB SAS disks, and 1 x 500GB SSD. 
At the moment I use only the one 500GB SAS disk per server for ovirt; the 
rest are not in use. At present I am using NFS, coming from a mapper, as the 
ovirt storage; going forward we would like to use all these disks 
hyper-converged for ovirt. I could see RH has a KB for doing this with 
gluster, but we are looking at Ceph because of its performance and scale.

Regards
Rajat

Hi


Regards,
Rajat Patel

http://studyhat.blogspot.com
FIRST THEY IGNORE YOU...
THEN THEY LAUGH AT YOU...
THEN THEY FIGHT YOU...
THEN YOU WIN...


On Sun, Dec 18, 2016 at 8:49 PM, Alessandro De Salvo <
alessandro.desa...@roma1.infn.it> wrote:

> [...]


Re: [ovirt-users] Ovirt & Ceph

2016-12-18 Thread Alessandro De Salvo
Hi Yaniv,

> On 18 Dec 2016, at 17:37, Yaniv Kaul wrote:
> 
> 
> 
>> On Sun, Dec 18, 2016 at 6:21 PM, Alessandro De Salvo 
>>  wrote:
>> Hi,
>> having a 3-node ceph cluster is the bare minimum you can have to make it 
>> working, unless you want to have just a replica-2 mode, which is not safe.
> 
> How well does it perform?

One of the ceph clusters we use has exactly this setup: 3 DELL R630s (ceph 
jewel) with 6 x 1TB NL-SAS disks, so 3 mons and 6 osds. We bound the cluster 
network to a dedicated 1Gbps interface. I can say it works pretty well: the 
performance reaches up to 100MB/s per rbd device, which is the expected 
maximum for that network connection. Resiliency is also pretty good; we can 
lose 2 osds (i.e. a full machine) without impacting performance.
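
If anyone wants to reproduce that kind of measurement, a quick sketch (the 
pool name "rbd" and the 10-second runs are placeholders, not our exact test):

~~~
# Raw RADOS write throughput from a client node (4MB objects by default)
rados bench -p rbd 10 write --no-cleanup

# Sequential read-back of the objects written above
rados bench -p rbd 10 seq

# Remove the benchmark objects afterwards
rados -p rbd cleanup
~~~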

>  
>> It's not true that ceph is not easy to configure, you might use very easily 
>> ceph-deploy, have puppet configuring it or even run it in containers. Using 
>> docker is in fact the easiest solution, it really requires 10 minutes to 
>> make a cluster up. I've tried it both with jewel (official containers) and 
>> kraken (custom containers), and it works pretty well.
> 
> This could be a great blog post in ovirt.org site - care to write something 
> describing the configuration and setup?

Oh sure, if it may be of general interest I'll be glad to. How can I do it? :-)
Cheers,

   Alessandro 



Re: [ovirt-users] Ovirt & Ceph

2016-12-18 Thread Alessandro De Salvo
Hi Rajat,
sorry but I do not really have a clear picture of your actual setup, can you 
please explain a bit more?
In particular:

1) what do you mean by using 4TB for ovirt? On which machines, and how do you 
make it available to ovirt?

2) how do you plan to use ceph with ovirt?

I guess we can give more help if you clarify those points.
Thanks,

   Alessandro 

> On 18 Dec 2016, at 17:33, rajatjpatel wrote:
> 
> [...]


Re: [ovirt-users] Ovirt & Ceph

2016-12-18 Thread rajatjpatel
Yaniv,

If I am not wrong, you are referring to this:
https://www.ovirt.org/develop/release-management/features/cinderglance-docker-integration/

The only issue right now is that this is not officially added by RH; after we 
finish this POC we will be going for the RH product.

Regards
Rajat

Hi


Regards,
Rajat Patel

http://studyhat.blogspot.com
FIRST THEY IGNORE YOU...
THEN THEY LAUGH AT YOU...
THEN THEY FIGHT YOU...
THEN YOU WIN...


On Sun, Dec 18, 2016 at 8:37 PM, Yaniv Kaul  wrote:

> [...]


Re: [ovirt-users] Ovirt & Ceph

2016-12-18 Thread Yaniv Kaul
On Sun, Dec 18, 2016 at 6:21 PM, Alessandro De Salvo <
alessandro.desa...@roma1.infn.it> wrote:

> Hi,
> having a 3-node ceph cluster is the bare minimum you can have to make it
> working, unless you want to have just a replica-2 mode, which is not safe.
>

How well does it perform?


> It's not true that ceph is not easy to configure, you might use very
> easily ceph-deploy, have puppet configuring it or even run it in
> containers. Using docker is in fact the easiest solution, it really
> requires 10 minutes to make a cluster up. I've tried it both with jewel
> (official containers) and kraken (custom containers), and it works pretty
> well.
>

This could be a great blog post on the ovirt.org site - care to write
something describing the configuration and setup?
Y.




Re: [ovirt-users] Ovirt & Ceph

2016-12-18 Thread rajatjpatel
In fact, after reading a lot of KBs I was thinking of running one all-in-one 
openstack and using cinder as block storage.

Regards
Rajat

On Sun, Dec 18, 2016 at 8:33 PM rajatjpatel  wrote:

> [...]
-- 

Sent from my Cell Phone - excuse the typos & auto incorrect


Re: [ovirt-users] Ovirt & Ceph

2016-12-18 Thread rajatjpatel
Great, thanks! Alessandro ++ Yaniv ++

What I want is to use around 4 TB of SAS disk for my Ovirt (which is going to 
be RHV 4.0.5 once the POC is 100% successful; in fact all products will be RH).

I have done a lot of duckduckgo-ing for all these solutions and used many 
references from ovirt.org & access.redhat.com for setting up the Ovirt engine 
and hyp.

We don't mind having more guests running and creating ceph block storage, 
which will be presented to ovirt as storage. Gluster is not in use right now 
because our DBs will be running on guests.

Regards
Rajat

On Sun, Dec 18, 2016 at 8:21 PM Alessandro De Salvo <
alessandro.desa...@roma1.infn.it> wrote:

> [...]

Sent from my Cell Phone - excuse the typos & auto incorrect


Re: [ovirt-users] Ovirt & Ceph

2016-12-18 Thread Alessandro De Salvo
Hi,
sorry, forgot to mention you may have both gluster and ceph on the same 
machines, as long as you have enough disk space.
Cheers,

   Alessandro 

> On 18 Dec 2016, at 17:07, Yaniv Kaul wrote:
> 
> [...]


Re: [ovirt-users] Ovirt & Ceph

2016-12-18 Thread Alessandro De Salvo
Hi,
having a 3-node ceph cluster is the bare minimum you can have to make it 
work, unless you want just a replica-2 mode, which is not safe.
It's not true that ceph is not easy to configure: you can very easily use 
ceph-deploy, have puppet configure it, or even run it in containers. Using 
docker is in fact the easiest solution; it really takes 10 minutes to bring a 
cluster up. I've tried it both with jewel (official containers) and kraken 
(custom containers), and it works pretty well.
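
As an illustration of the container approach (not Alessandro's exact commands; 
the IP, network CIDR and device are placeholders), bringing up a mon and an 
osd with the jewel-era ceph/daemon image looked roughly like this:

~~~
# Start a monitor; host networking keeps things simple
docker run -d --net=host --name ceph-mon \
  -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph \
  -e MON_IP=192.168.0.10 -e CEPH_PUBLIC_NETWORK=192.168.0.0/24 \
  ceph/daemon mon

# Start an OSD on the same host, handing it a whole disk
docker run -d --net=host --name ceph-osd --privileged \
  -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph \
  -v /dev:/dev -e OSD_DEVICE=/dev/sdb \
  ceph/daemon osd
~~~
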
The real problem is not creating and configuring a ceph cluster but using it 
from ovirt, as that requires cinder, i.e. a minimal setup of openstack. We 
have it and it's working pretty well, but it requires some work. For your 
reference, we have cinder running on an ovirt VM using gluster.
Cheers,

   Alessandro 

> On 18 Dec 2016, at 17:07, Yaniv Kaul wrote:
> 
> [...]


Re: [ovirt-users] Ovirt & Ceph

2016-12-18 Thread Yaniv Kaul
On Sun, Dec 18, 2016 at 3:29 PM, rajatjpatel  wrote:

> Dear Team,
>
> We are using Ovirt 4.0 for a POC, and I want to check what we are doing with
> all you Ovirt gurus.
>
> We have 2 HP ProLiant DL380s, each with 500GB SAS, 4 x 1TB SAS disks, and a
> 500GB SSD.
>
> What we have done is install the ovirt hyp on this h/w, and we have a
> physical server running our manager for ovirt. For the ovirt hyp we are
> using only one 500GB HDD; the rest we have kept for ceph, so we have 3 nodes
> running as guests on ovirt for ceph. My question to you all is whether what
> I am doing is right or wrong.
>

I think Ceph requires a lot more resources than above. It's also a bit more
challenging to configure. I would highly recommend a 3-node cluster with
Gluster.
Y.


>
> Regards
> Rajat


[ovirt-users] Ovirt & Ceph

2016-12-18 Thread rajatjpatel
Dear Team,

We are using Ovirt 4.0 for a POC, and I want to check what we are doing with
all you Ovirt gurus.

We have 2 HP ProLiant DL380s, each with 500GB SAS, 4 x 1TB SAS disks, and a
500GB SSD.

What we have done is install the ovirt hyp on this h/w, and we have a physical
server running our manager for ovirt. For the ovirt hyp we are using only one
500GB HDD; the rest we have kept for ceph, so we have 3 nodes running as
guests on ovirt for ceph. My question to you all is whether what I am doing is
right or wrong.

Regards
Rajat