Re: [ovirt-users] Backup solutions for Ovirt

2014-07-18 Thread saurabh

It would be great if any of you could provide any pointers on this issue.


Regards,


On Tuesday 15 July 2014 02:35 PM, saurabh wrote:

Hi All,

I am using oVirt for my QA environment with almost 15 virtual servers, and
it is working as expected. The only thing I am worried about is a
backup solution in case of a failure or system crash.


I tried looking into the LVM structure that is created by oVirt, but it
seems a bit complex.
Please let me know the proper solution (if there is any) for backing
up my environment in case of failure.



Best Regards,



--
Saurabh Kumar
System Admin
Red Hat Certified Engineer (RHCE)
Red Hat Certified System Administrator (RHCSA)
Red Hat Certified Virtualization Administrator (RHCVA)
Red Hat Certified Server Hardening Expert (RH-413)

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Ceph support

2014-07-18 Thread Johan Kooijman
Hi all,

Does somebody have *any* idea on when oVirt will start to support Ceph
through libvirtd? I could mount an RBD volume onto a server and then export
it as NFS, but that kills my I/O throughput quite severely (write IOPS go
down by 84%).
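
For reference, a rough way to measure that kind of drop is to time small
synchronous writes against each mount point. The sketch below is only
illustrative (the mount paths and write count are made up, and a proper fio
run would tell you more), but it shows the idea:

import os, time

def sync_write_iops(path, count=2000, size=4096):
    # Time `count` small writes, each followed by fsync(), so the figure
    # reflects durable write IOPS rather than page-cache speed.
    buf = b'\0' * size
    fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o600)
    start = time.time()
    for _ in range(count):
        os.write(fd, buf)
        os.fsync(fd)
    elapsed = time.time() - start
    os.close(fd)
    os.unlink(path)
    return count / elapsed

# Hypothetical mount points: the RBD-backed path vs. the NFS re-export.
for testfile in ('/mnt/rbd-direct/testfile', '/mnt/nfs-reexport/testfile'):
    print(testfile, round(sync_write_iops(testfile)), 'IOPS')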

Ceph is the way to go for storage needs if you ask me, but I'd rather not
move away from ovirt because there's no support.

-- 
Met vriendelijke groeten / With kind regards,
Johan Kooijman
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Can we debug some truths/myths/facts about hosted-engine and gluster?

2014-07-18 Thread Andrew Lau
Hi all,

As most of you have got hints from previous messages, hosted engine won't
work on gluster. A quote from BZ1097639:

Using hosted engine with Gluster backed storage is currently something we
really warn against.


I think this bug should be closed or re-targeted at documentation,
because there is nothing we can do here. Hosted engine assumes that
all writes are atomic and (immediately) available for all hosts in the
cluster. Gluster violates those assumptions.
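
To make that assumption concrete: the HA agents effectively share a
fixed-layout metadata file on the hosted-engine storage domain, where each
host writes its own block and polls everyone else's. The following is a
purely illustrative sketch (the path, slot size and state format are
invented), but it shows why every host must immediately see every other
host's last complete write:

import os

BLOCK_SIZE = 512                                   # illustrative per-host slot
METADATA = '/rhev/shared/hosted-engine.metadata'   # hypothetical path

def write_my_state(host_id, state):
    # Each host owns one fixed slot; its write has to become visible to the
    # other hosts as a single complete block, not a partial or stale one.
    data = state.encode().ljust(BLOCK_SIZE, b'\0')
    fd = os.open(METADATA, os.O_WRONLY)
    try:
        os.lseek(fd, host_id * BLOCK_SIZE, os.SEEK_SET)
        os.write(fd, data)
        os.fsync(fd)
    finally:
        os.close(fd)

def read_all_states(num_hosts):
    # Every host polls all slots; if the storage serves stale data here, an
    # agent can conclude the engine VM is down and start a second copy.
    fd = os.open(METADATA, os.O_RDONLY)
    try:
        states = []
        for i in range(num_hosts):
            os.lseek(fd, i * BLOCK_SIZE, os.SEEK_SET)
            states.append(os.read(fd, BLOCK_SIZE).rstrip(b'\0'))
        return states
    finally:
        os.close(fd)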

​

Until the documentation gets updated, I hope this serves as a useful
notice, at least to save people some of the headaches I hit, like
hosted-engine starting up multiple VMs because of the above issue.

Now my question: does this theory prevent a scenario of perhaps something
like a gluster replicated volume being mounted as a glusterfs filesystem
and then re-exported as a native kernel NFS share for the hosted-engine
to consume? It could then be possible to chuck ctdb in there to provide a
last-resort failover solution. I have tried it myself and suggested it to two
people who are running a similar setup; they are now using the native kernel
NFS server for hosted-engine and haven't reported as many issues. Curious,
could anyone validate my theory on this?
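
For what it's worth, a crude way to sanity-check whether a given mount
behaves the way hosted-engine expects would be a probe like the sketch
below: one host drops a timestamped token on the shared mount and the other
hosts report how long it takes to appear. This is hypothetical (the path is
made up, and it assumes roughly synchronised clocks), not anything
hosted-engine itself does:

import os, socket, sys, time

PROBE = '/rhev/shared/visibility-probe'   # hypothetical file on the shared mount

def publish():
    # Writer side: publish hostname + timestamp and flush it to the backend.
    token = '%s %f\n' % (socket.gethostname(), time.time())
    fd = os.open(PROBE, os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)
    os.write(fd, token.encode())
    os.fsync(fd)
    os.close(fd)

def watch(timeout=30):
    # Reader side (run on the other hosts): report how long the token takes
    # to become visible; long or unbounded lag breaks hosted-engine's model.
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with open(PROBE) as f:
                host, stamp = f.read().split()
            print('lag %.2fs (written by %s)' % (time.time() - float(stamp), host))
            return
        except (IOError, OSError, ValueError):
            time.sleep(0.2)
    print('token not visible within %ds' % timeout)

if __name__ == '__main__':
    publish() if 'write' in sys.argv[1:] else watch()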

Thanks,
Andrew
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Can we debug some truths/myths/facts about hosted-engine and gluster?

2014-07-18 Thread Vijay Bellur

[Adding gluster-devel]

On 07/18/2014 05:20 PM, Andrew Lau wrote:

Hi all,

As most of you have got hints from previous messages, hosted engine
won't work on gluster . A quote from BZ1097639

Using hosted engine with Gluster backed storage is currently something
we really warn against.


I think this bug should be closed or re-targeted at documentation, because 
there is nothing we can do here. Hosted engine assumes that all writes are 
atomic and (immediately) available for all hosts in the cluster. Gluster 
violates those assumptions.
​
I tried going through BZ1097639 but could not find much detail with 
respect to gluster there.


A few questions around the problem:

1. Can somebody please explain in detail the scenario that causes the 
problem?


2. Is hosted engine performing synchronous writes to ensure that writes 
are durable?
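
By durable I mean something along these lines (a minimal sketch, file name
arbitrary): the write only counts once it has been flushed past the page
cache, for example via fsync() or an O_SYNC open.

import os

# Buffered write: returns as soon as the data sits in the page cache; a crash
# (or a replica that never receives the flush) may never see it.
with open('engine-state.tmp', 'wb') as f:
    f.write(b'engine state')

# Durable write: O_SYNC (or an explicit fsync) makes write() return only
# after the data has been handed off to the storage backend.
fd = os.open('engine-state.tmp', os.O_WRONLY | os.O_SYNC)
os.write(fd, b'engine state')
os.close(fd)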


Also, if there is any documentation that details the hosted engine
architecture, that would help in enhancing our understanding of its
interactions with gluster.



​

Now my question, does this theory prevent a scenario of perhaps
something like a gluster replicated volume being mounted as a glusterfs
filesystem and then re-exported as the native kernel NFS share for the
hosted-engine to consume? It could then be possible to chuck ctdb in
there to provide a last resort failover solution. I have tried myself
and suggested it to two people who are running a similar setup. Now
using the native kernel NFS server for hosted-engine and they haven't
reported as many issues. Curious, could anyone validate my theory on this?



If we obtain more details on the use case and obtain gluster logs from 
the failed scenarios, we should be able to understand the problem 
better. That could be the first step in validating your theory or 
evolving further recommendations :).


Thanks,
Vijay
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Relationship bw storage domain uuid/images/children and VM's

2014-07-18 Thread Yair Zaslavsky


- Original Message -
 From: Steve Dainard sdain...@miovision.com
 To: users users@ovirt.org
 Sent: Thursday, July 17, 2014 7:51:31 PM
 Subject: [ovirt-users] Relationship bw storage domain uuid/images/children
 and VM's
 
 Hello,
 
 I'd like to get an understanding of the relationship between VM's using a
 storage domain, and the child directories listed under .../storage domain
 name/storage domain uuid/images/.
 
 Running through some backup scenarios I'm noticing a significant difference
 between the number of provisioned VM's using a storage domain (21) +
 templates (6) versus the number of child directories under images/ (107).

Can you please elaborate (if possible) on the number of images per VM that
you have in your setup?

 
 Running RHEV 3.4 trial.
 
 Thanks,
 Steve
 
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Can we debug some truths/myths/facts about hosted-engine and gluster?

2014-07-18 Thread Andrew Lau
​​

On Fri, Jul 18, 2014 at 10:06 PM, Vijay Bellur vbel...@redhat.com wrote:

 [Adding gluster-devel]


 On 07/18/2014 05:20 PM, Andrew Lau wrote:

 Hi all,

 As most of you have got hints from previous messages, hosted engine
 won't work on gluster . A quote from BZ1097639

 Using hosted engine with Gluster backed storage is currently something
 we really warn against.


 I think this bug should be closed or re-targeted at documentation,
 because there is nothing we can do here. Hosted engine assumes that all
 writes are atomic and (immediately) available for all hosts in the cluster.
 Gluster violates those assumptions.
 ​

 I tried going through BZ1097639 but could not find much detail with
 respect to gluster there.

 A few questions around the problem:

 1. Can somebody please explain in detail the scenario that causes the
 problem?

 2. Is hosted engine performing synchronous writes to ensure that writes
 are durable?

 Also, if there is any documentation that details the hosted engine
 architecture that would help in enhancing our understanding of its
 interactions with gluster.


  ​

 Now my question, does this theory prevent a scenario of perhaps
 something like a gluster replicated volume being mounted as a glusterfs
 filesystem and then re-exported as the native kernel NFS share for the
 hosted-engine to consume? It could then be possible to chuck ctdb in
 there to provide a last resort failover solution. I have tried myself
 and suggested it to two people who are running a similar setup. Now
 using the native kernel NFS server for hosted-engine and they haven't
 reported as many issues. Curious, could anyone validate my theory on this?


 If we obtain more details on the use case and obtain gluster logs from the
 failed scenarios, we should be able to understand the problem better. That
 could be the first step in validating your theory or evolving further
 recommendations :).


I'm not sure how useful this is, but Jiri Moskovcak tracked this down in
an off-list message.

Message Quote:

==

​We were able to track it down to this (thanks Andrew for providing the
testing setup):

-b686-4363-bb7e-dba99e5789b6/ha_agent service_type=hosted-engine'
Traceback (most recent call last):
  File 
/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/broker/listener.py,
line 165, in handle
response = success  + self._dispatch(data)
  File 
/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/broker/listener.py,
line 261, in _dispatch
.get_all_stats_for_service_type(**options)
  File 
/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py,
line 41, in get_all_stats_for_service_type
d = self.get_raw_stats_for_service_type(storage_dir, service_type)
  File 
/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py,
line 74, in get_raw_stats_for_service_type
f = os.open(path, direct_flag | os.O_RDONLY)
OSError: [Errno 116] Stale file handle: '/rhev/data-center/mnt/localho
st:_mnt_hosted-engine/c898fd2a-b686-4363-bb7e-dba99e5789b6/ha_agent/hosted-
engine.metadata'

It's definitely connected to the storage, which leads us to gluster. I'm
not very familiar with gluster, so I need to check this with our gluster
gurus.

​==​
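
For context, ESTALE means the handle the broker holds no longer maps to an
object the backend will serve, which typically happens when the underlying
file is replaced or the FUSE/NFS layer loses track of it. A purely
illustrative mitigation pattern (not the actual ovirt-hosted-engine-ha fix)
would be to re-open and retry, roughly like this:

import errno, os, time

def read_shared_file(path, size=4096, retries=3):
    # Illustrative only: when open() or read() fails with ESTALE, back off
    # briefly and retry with a fresh handle instead of failing the whole
    # monitoring cycle.
    for attempt in range(retries):
        try:
            fd = os.open(path, os.O_RDONLY)
            try:
                return os.read(fd, size)
            finally:
                os.close(fd)
        except OSError as e:
            if e.errno != errno.ESTALE or attempt == retries - 1:
                raise
            time.sleep(1)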



 Thanks,
 Vijay

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Relationship bw storage domain uuid/images/children and VM's

2014-07-18 Thread Steve Dainard
Hi Yair,

I have 26 disks listed in the UI for this particular storage domain, all of
them attached to VMs.

So should I assume the other images are stale, or do they perform some
function other than disk images? Here's a list of the directories; some of
them are using next to no storage space:

12G ./01cebaac-1c2f-4aeb-9057-4a451248a5b1
1.1M ./2a1db231-0450-4eee-9441-fe645dcbf065
11G ./08560ad8-9382-45a6-b493-a1f1ee755cad
41G ./112e7b3b-7a59-4adf-9316-ad2edf4a26b1
23G ./143edfb0-5bc0-4540-8f90-118bc67d19a6
27G ./1639745e-a694-4c04-a08c-c253da240b07
1.1M ./1f3c4f75-8d46-469c-8bb7-0f88f049989f
4.0K ./23ceb32a-c60b-4093-ba2d-e174251ff777
2.0G ./250d3ea6-885c-421e-81a1-1235ec166f9e
1.9G ./3719764e-1de6-4969-beb1-042dbd57207b
4.0K ./3d02d08d-bf74-4700-930a-0b403ebb0baa
26G ./3d24c288-a58e-405f-8b48-b6f6548d3f78
1.6G ./3e3e78ac-2bbd-4a0e-95cb-dbcd56689b5f
4.0K ./4b866fda-983f-490c-ae4d-e3b0f289a660
4.0K ./4df4be7a-4822-48f7-adf1-81e94a51d3a3
26G ./54840f6f-721c-4762-8622-2f3091c1ff32
4.0K ./58c9c4d1-b33d-4bcc-a970-22d6c9c45e02
1.1M ./5915377a-051d-40af-aeb9-2415edfd61e2
4.0K ./6c26fad9-0f93-4c33-89f5-59b3facac2c2
151G ./6cc879e9-de08-4736-9c1b-7c6509cb7076
2.3G ./746f4935-0b42-4d3c-a2f0-053545c9ced3
11G ./7b7765e3-9165-409c-83e3-c83b34fa26d4
1.1M ./7e8b55bb-43c2-4029-8d57-ca6b30c25377
12G ./816229e4-ed97-456c-b5a1-c8f48c465666
4.0K ./8afd84b4-e4d2-465b-9185-e62daef7a20d
11G ./8e1070f2-46f5-4910-a2d2-a9b188400079
11G ./a337e372-fab1-492a-80fd-809dbdc0d58f
1.1M ./a5df03ed-6136-4420-ae9a-a38aff1ed773
41G ./a99a69ad-4b96-480a-a44f-f2b68920b18e
11G ./aa588a8e-d7d4-42af-ae72-589ddb90d950
41G ./aeadcdc3-0f19-4e5d-b801-c863786b9cba
1.1M ./ba2e9642-0464-42be-8151-1b4886361354
161G ./c23b91d1-16b0-4341-ac66-b2c096f51912
4.0K ./d74dcf2a-6a4f-4b8b-9ec9-3ddf7ec76ab2
41G ./e206ed1a-aba9-4979-9269-4cc4b0dcaeef
11G ./e663ce3f-294c-4fa7-abd1-2789bb8a49e3
27G ./ee6a657a-f866-438c-9c7f-807f8dfad015
26G ./eeae5c97-22b9-4f4d-9443-1d9bdee89e08
827M ./efc18ddc-8efe-460e-bd5b-efeed870f03e
11G ./f0a0e060-08bd-4aeb-a098-a4b4bb906dae
4.0K ./f1882adf-2ee5-46db-a0af-79bf315ea01c
41G ./f2608aa0-f546-41df-ada5-a187f8504d8c
22G ./f9219e9f-9ba7-4fd8-bda6-76c0e1b22409
11G ./fa922294-bc3a-4168-911a-81ab928ad2e4
4.0K ./ffe5d180-1422-4d54-aa14-f2a18abc12d9
71G ./6e65b112-92f9-4093-aa3d-b904a6e0aba6
1.8G ./54e01fa1-ec68-4a7c-a436-4337d03f4eca
1.3G ./2259197a-4b41-4119-b764-3391907b8aa7
1.3G ./2680c930-1da9-4dfa-b1f2-a274ecc76634
1.1M ./fecd55ac-1c22-40d9-a65c-54afd6c0a47b
1.1M ./94c2c3b1-4e9c-4acd-9c1c-2aeee0a410fc
1.3G ./665cc540-d265-41e7-ba74-70ba4173b45f
1.1M ./1dfa8b6a-9f01-4dfa-81ec-65943226120d
1.3G ./b1de4bd9-efd5-4cfe-bb79-f3355b0bf331
3.2G ./6edab781-7f9b-4c4d-beb2-1bf197f325d1
1.3G ./3a31fa00-93ea-4f97-ace7-d3ab66f2bf7b
1.1M ./c7bf555e-3a22-4ac0-9376-35109c87a340
1.1M ./62369fed-ac6a-43dc-bf7b-51fe2e372ada
2.2G ./103a534c-81fe-4ea7-8f63-aad884585b27
1.1M ./a3f45630-5e92-459f-abbe-69eb854c54c4
1.9G ./29bc03d0-cb10-4b11-9ead-a7a45aabcd0b
1.1M ./d8499c89-9285-423e-bbce-13f6d4d49e95
2.1G ./622cc7c7-9f56-4c27-a80b-8d2acf7c9028
1.1M ./422542d8-a76b-4d53-9c12-88a4f6a7c395
615M ./c4f8db4e-75d3-4996-a6a5-25058945f9cb
1.1M ./cf6827b1-96bc-4804-9095-7f91f378aaf3
913M ./33644031-95a2-4796-8f19-897358378bd7
1.1M ./2574b42f-491d-4ab8-9bfb-90d12a72a9dd
802M ./1f08a684-c889-48ec-a27e-2b538be662dd
1.4G ./c05bf95d-2d82-49b9-9f63-79175bf48143
1.1M ./13badb26-e767-46fe-87aa-91659219f9fc
313M ./2433efb2-642e-4a8e-9b35-a7266bbad021
1.1M ./ee51a0fb-ab44-45fb-b9d9-ce3a70c541fa
1.4G ./205128fe-d632-4800-a65e-10345615e6a3
1.1M ./54318bb5-150d-4f01-9f66-ac347405082d
2.3G ./1db75bb9-9fb5-4d7a-b0a5-2062444c6add
1.1M ./7b302f06-7171-4b56-adc1-2eef88297a4a
2.3G ./2907aeb3-c911-40b9-9644-52960fc442c8
1.1M ./50eef668-1a08-4575-8bd6-c0dabde1bc1d
2.4G ./8b42c3e4-dad9-4d0f-a34d-cf8b4e1541c3
1.1M ./50c7c31c-84ca-4c59-a612-5cec080bf711
1.1M ./637dd0e3-003a-40f6-a067-da7e06106caa
848M ./7905b528-268e-4651-995f-d120cdc69be9
1.1M ./3f72993d-faca-4a7f-bb56-f73ebec0322a
973M ./44347211-a8b9-4b8c-92e9-001658c4467b
1.1M ./b2583917-e2de-42ab-8981-f91b9bb38583
832M ./941d2b20-1599-477f-a5c0-5d0dedc1671b
28G ./e69cbc6a-eb2d-4c9e-acc0-e41e07495913
314M ./d0784982-c2b2-428e-b9ac-15de505ae5e6
1.1M ./2a3232ad-1591-4965-b706-0558bd3f9039
316M ./c7400bdf-ab09-48f3-82e6-961426441033
1.1M ./53836e2a-b2d3-4481-87d0-66d67b4c5bac
316M ./929a04f9-b13f-496d-ae1c-4ebe4b211276
1.1M ./0d87888e-1a37-4c2c-a7bd-12da512b440c
318M ./16fec781-b0d9-40a1-b7b3-ccb87d7d3e6e
1.1M ./d3151afc-a504-460e-babe-943e20866991
318M ./2b728cc2-98a3-4446-94da-bc8e103f8b4a
1.1M ./172429ea-7baf-4396-ad10-e1f001bc4692
318M ./ed02c25f-705f-4bd5-8819-2033d3837c7b
1.1M ./e864299e-712a-430d-bd73-4a2d9db1ad04
318M ./e576ab0d-f854-4283-86a2-c573bf840c46
1.1M ./2e18fbb5-ee09-4a23-9372-77ebd5df4ea4
318M ./55ee8290-a44a-4f86-a8de-bfd2b226604b
1.1M ./7a9591d3-ab97-4a14-ab62-6000acb45159
322M ./3ce91c48-8572-45a0-bc6c-93a19ef3aeb7
931G .

And the api returns 29 disks (3 are from another SD):
disk 
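
One way to cross-check this (a sketch only: it assumes the directory names
under images/ are the disks' image-group IDs and that the engine exposes the
plain /api/disks REST collection; the URL, credentials and mount path below
are placeholders) would be to diff the two lists:

import os
import xml.etree.ElementTree as ET
import requests

ENGINE = 'https://engine.example.com/api'          # placeholder engine URL
AUTH = ('admin@internal', 'password')              # placeholder credentials
IMAGES_DIR = '/rhev/data-center/mnt/<server>/<sd-uuid>/images'  # placeholder

# Disk IDs the engine knows about.
resp = requests.get(ENGINE + '/disks', auth=AUTH, verify=False)
resp.raise_for_status()
disk_ids = set(d.get('id') for d in ET.fromstring(resp.content).findall('disk'))

# Image-group directories actually present on the storage domain.
dir_ids = set(os.listdir(IMAGES_DIR))

print('directories with no matching disk:', sorted(dir_ids - disk_ids))
print('disks with no directory here (other SDs?):', sorted(disk_ids - dir_ids))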

Re: [ovirt-users] Ceph support

2014-07-18 Thread Itamar Heim

On 07/18/2014 12:36 PM, Johan Kooijman wrote:

Hi all,

Does somebody have *any* idea on when oVirt will start to support Ceph
through libvirtd? I could mount an RBD volume onto a server and then
export it as NFS, but that kills my I/O throughput quite severely (write
IOPS go down by 84%).

Ceph is the way to go for storage needs if you ask me, but I'd rather
not move away from ovirt because there's no support.


I expect we'll see Ceph in 3.6.
TBD if direct or via Cinder integration.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Server Engine and Host Server at the same Hardware

2014-07-18 Thread William Juliano Gutierres
Hi,

I'm testing oVirt, and reading the documentation I discovered that
oVirt needs two separate servers: the Engine and at least one host server.
So, I have two main questions:


   - No 1: I have just one piece of hardware good enough to run this solution. So,
   before I go out looking for another new piece of hardware, I would like to know
   whether it is possible to run the oVirt Engine and the oVirt Host Server on the
   same hardware (install the engine, and when adding a host, point to the same
   Engine address). Is it possible?
   - No 2: I read about the self-hosted engine, but it was not clear how
   I start it (assuming that I have one piece of hardware, which one do I install first?).
   And if it is possible that way, I would like to know from you whether
   performance stays normal when doing it this way.

Thank you!

Best regards,

 

William Gutierres william@gmail.com
Before printing this e-mail, think about your responsibility and commitment
to the environment.

--

This message, including any attachments, is confidential and may contain
privileged information. If you have received it in error, please notify the
author by replying to this e-mail and delete it from your system. Any
unauthorized use or dissemination of this message, in whole or in part, is
strictly prohibited. The ideas contained in this message or its attachments
do not necessarily reflect the opinion of William Gutierres.
--
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Server Engine and Host Server at the same Hardware

2014-07-18 Thread Maurice James
That is the way that I run my setup and it works fine. There is a little
overhead with the engine, though. If you have enough RAM and CPU power to
spare, it shouldn't be a problem.




- Original Message -

From: William Juliano Gutierres william@gmail.com 
To: users@ovirt.org 
Sent: Friday, July 18, 2014 12:49:08 PM 
Subject: [ovirt-users] Server Engine and Host Server at the same Hardware 

Hi, 

I'm testing oVirt, and reading the documentation I discovered that oVirt
needs two separate servers: the Engine and at least one host server. So, I
have two main questions:



* No 1: I have just one piece of hardware good enough to run this solution. So,
before I go out looking for another new piece of hardware, I would like to know
whether it is possible to run the oVirt Engine and the oVirt Host Server on the
same hardware (install the engine, and when adding a host, point to the same Engine
address). Is it possible?
* No 2: I read about the self-hosted engine, but it was not clear how I
start it (assuming that I have one piece of hardware, which one do I install first?).
And if it is possible that way, I would like to know from you whether performance
stays normal when doing it this way.
Thank you! 


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Online backup options

2014-07-18 Thread Steve Dainard
Anyone?


On Fri, Jun 20, 2014 at 9:35 AM, Steve Dainard sdain...@miovision.com
wrote:

 Hello Ovirt team,

 Reading this bulletin: https://access.redhat.com/site/solutions/117763
 I see a reference to 'private Red Hat Bug # 523354' covering online
 backups of VMs.

 Can someone comment on this feature and a rough timeline? Is this a native
 backup solution that will be included with oVirt/RHEV?

 Is this the oVirt feature where the work is being done?
 http://www.ovirt.org/Features/Backup-Restore_API_Integration It seems
 like this may be a different feature, specifically for third-party backup
 options.

 Thanks,
 Steve

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt + opennode or oVirt+proxmox

2014-07-18 Thread Robson Kobayashi - TRE/MS
Is anyone using proxmox or opennode with oVirt? Is that possible?

!- 
Robson Massaki Kobayashi 
!- 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt + opennode or oVirt+proxmox

2014-07-18 Thread Dan Yasny
With? why/how?


On Fri, Jul 18, 2014 at 7:12 PM, Robson Kobayashi - TRE/MS 
robson.kobaya...@tre-ms.jus.br wrote:

 Is anyone using proxmox or opennode with oVirt? Is that possible?

 !-
 Robson Massaki Kobayashi
 !-




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Gluster-devel] Can we debug some truths/myths/facts about hosted-engine and gluster?

2014-07-18 Thread Andrew Lau
On Sat, Jul 19, 2014 at 12:03 AM, Pranith Kumar Karampuri 
pkara...@redhat.com wrote:


 On 07/18/2014 05:43 PM, Andrew Lau wrote:

  ​ ​

  On Fri, Jul 18, 2014 at 10:06 PM, Vijay Bellur vbel...@redhat.com
 wrote:

 [Adding gluster-devel]


 On 07/18/2014 05:20 PM, Andrew Lau wrote:

 Hi all,

 As most of you have got hints from previous messages, hosted engine
 won't work on gluster . A quote from BZ1097639

 Using hosted engine with Gluster backed storage is currently something
 we really warn against.


 I think this bug should be closed or re-targeted at documentation,
 because there is nothing we can do here. Hosted engine assumes that all
 writes are atomic and (immediately) available for all hosts in the cluster.
 Gluster violates those assumptions.
 ​

  I tried going through BZ1097639 but could not find much detail with
 respect to gluster there.

 A few questions around the problem:

 1. Can somebody please explain in detail the scenario that causes the
 problem?

 2. Is hosted engine performing synchronous writes to ensure that writes
 are durable?

 Also, if there is any documentation that details the hosted engine
 architecture that would help in enhancing our understanding of its
 interactions with gluster.


 ​

 Now my question, does this theory prevent a scenario of perhaps
 something like a gluster replicated volume being mounted as a glusterfs
 filesystem and then re-exported as the native kernel NFS share for the
 hosted-engine to consume? It could then be possible to chuck ctdb in
 there to provide a last resort failover solution. I have tried myself
 and suggested it to two people who are running a similar setup. Now
 using the native kernel NFS server for hosted-engine and they haven't
 reported as many issues. Curious, could anyone validate my theory on
 this?


  If we obtain more details on the use case and obtain gluster logs from
 the failed scenarios, we should be able to understand the problem better.
 That could be the first step in validating your theory or evolving further
 recommendations :).


  ​I'm not sure how useful this is, but ​Jiri Moskovcak tracked this down
 in an off list message.

  ​Message Quote:​

  ​==​

   ​We were able to track it down to this (thanks Andrew for providing the
 testing setup):

 -b686-4363-bb7e-dba99e5789b6/ha_agent service_type=hosted-engine'
 Traceback (most recent call last):
   File 
 /usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/broker/listener.py,
 line 165, in handle
 response = success  + self._dispatch(data)
   File 
 /usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/broker/listener.py,
 line 261, in _dispatch
 .get_all_stats_for_service_type(**options)
   File 
 /usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py,
 line 41, in get_all_stats_for_service_type
 d = self.get_raw_stats_for_service_type(storage_dir, service_type)
   File 
 /usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py,
 line 74, in get_raw_stats_for_service_type
 f = os.open(path, direct_flag | os.O_RDONLY)
 OSError: [Errno 116] Stale file handle: '/rhev/data-center/mnt/localho
 st:_mnt_hosted-engine/c898fd2a-b686-4363-bb7e-dba99e5789b6/ha_agent/hosted
 -engine.metadata'

 Andrew/Jiri,
 Would it be possible to post gluster logs of both the mount and
 the bricks on the BZ? I can take a look at them. If I gather nothing, then
 I will probably ask for your help in re-creating the issue.

 Pranith


Unfortunately, I don't have the logs for that setup any more. I'll try to
replicate it when I get a chance. If I understand the comment from the BZ, I
don't think it's a gluster bug per se, more just how gluster does its
replication.





 It's definitely connected to the storage which leads us to the gluster,
 I'm not very familiar with the gluster so I need to check this with our
 gluster gurus.​

  ​==​



 Thanks,
 Vijay




 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-devel



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users