[ovirt-users] oVirt weekly sync cancelled this week (June 4th)

2014-06-01 Thread Doron Fediuck
Hi all,
due to various holidays this week, the weekly oVirt sync is
cancelled and we'll meet next week (June 11th).

Doron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] New Stats for oVirt Community

2014-06-01 Thread Brian Proffitt
The folks at Bitergia have been working on improving the community metrics 
dashboard located at ovirt.org/stats. I was just notified that, per our 
request, metrics for a big asset in our community--the #ovirt IRC channel--have 
been added to the dashboard. Swing by the page and see who's active on IRC and in 
the rest of the oVirt community!

Peace,
Brian

-- 
Brian Proffitt

oVirt Community Manager
Project Atomic Community Lead
Open Source and Standards, Red Hat - http://community.redhat.com
Phone: +1 574 383 9BKP
IRC: bkp @ OFTC


Re: [ovirt-users] oVirt Infiniband Support - Ethernet vs Infiniband

2014-06-01 Thread Markus Stockhausen
 From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of
 "René Koch" [rk...@linuxland.at]
 Sent: Friday, 30 May 2014 21:08
 To: ml ml
 Cc: users@ovirt.org
 Subject: Re: [ovirt-users] oVirt Infiniband Support - Ethernet vs Infiniband
 
 On 05/30/2014 04:55 PM, ml ml wrote:
  Hello List,
 
  I am very new to oVirt. I am setting up a 2 node cluster with GlusterFS
  sync replication.
 
  I was about to buy a 2-port Intel GBit NIC for each host, which is about
  150 EUR each.
 
  However, for a little extra (180 EUR) you get a 10GBit Infiniband card -
  the MCX311A-XCAT.
 
  I would like to have quick opinions on this.
 
  Should I pay a little more and then get 10GBit instead of 2GBit bonded?
 
 Infiniband is good for GlusterFS / storage connection but never never
 never never ever use it as a vm network - believe me it's a nightmare.
 
 So you would direct connect your 2 nodes as otherwise you would have to
 buy an Infiniband switch as well (btw I'm not sure if direct connect is
 possible, but I guess it is). Keep in mind that Infiniband cables aren't
 cheap, too.
 Why don't you go for 10Gbit/s Ethernet?

Welcome to oVirt.

We have been on Infiniband IPoIB for about 3 years now (we started with VMware).
So we jumped on the 10GBit path long before 10GBit Ethernet had reasonable prices. 
We are fine with it, but you should consider the following aspects.

- Go for ConnectX-2 cards or higher. They allow for IPoIB TCP offloading. Do not
believe the other information you will find on the internet. A helpful statement 
here:
http://permalink.gmane.org/gmane.linux.drivers.rdma/18032
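As a quick sanity check (a generic sketch, not from the thread; the interface name ib0 is an assumption, adjust to your setup), you can inspect which offloads the IPoIB interface actually advertises:

```shell
# Show offload capabilities of an IPoIB interface:
ethtool -k ib0 | grep -E 'segmentation-offload|checksum'
# IPoIB mode (datagram vs connected) and MTU also affect throughput:
cat /sys/class/net/ib0/mode
ip link show ib0 | grep -o 'mtu [0-9]*'
```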

- With IPoIB you should always be happy to achieve more than 1GBit throughput. 
I hear a lot of people complaining that they do not get wire speed. The stack is 
quite complex; from our experience we can reach 5-7GBit speeds easily on a DDR 
backbone (16 Gbps payload). Better CPUs will increase the throughput. A jump 
from Xeon 5400 to 5600 gave a nice boost.

- Everything around NFS over RDMA is currently a mess. 

- Cable prices are a real pain (at least in Europe). 

- Older CX4 switches are very cheap and rock solid. We have a Cisco 7000D.
Though limited to a 2044-byte MTU, they are fine.

- Infiniband requires a subnet manager on the network. Cards will only get a
link if that process is available. Either you need a switch with a built-in SM,
or in the case of a direct connection it must run on one of the hosts.
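For a back-to-back setup, running the subnet manager on one host looks roughly like this (a sketch with EL6-style commands and package names; assumptions, not copied from the thread):

```shell
# On one of the two directly-connected hosts:
yum install -y opensm          # OpenFabrics subnet manager
chkconfig opensm on            # start it at boot
service opensm start
# Once the SM is running, the port state should move from INIT to ACTIVE:
ibstat | grep -i state
```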

- Always remember that GBit bonding will not improve the throughput of a single
connection. So you can easily migrate a single VM or its disks over Infiniband 
at much more than 1GBit, while an Ethernet bond will be limited to 1GBit.

Maybe you should have a look at the 10GBit Mellanox Ethernet adapters. They are 
floating around for cheap prices (~$60), have an SFP+ port, and are built from 
the same silicon as the Infiniband cards. For direct connections they should be 
fine, but switching hardware is expensive.

Markus

This e-mail may contain confidential and/or privileged information. If you
are not the intended recipient (or have received this e-mail in error)
please notify the sender immediately and destroy this e-mail. Any
unauthorized copying, disclosure or distribution of the material in this
e-mail is strictly forbidden.

e-mails sent over the internet may have been written under a wrong name or
been manipulated. That is why this message sent as an e-mail is not a
legally binding declaration of intention.

Collogia
Unternehmensberatung AG
Ubierring 11
D-50678 Köln

executive board:
Kadir Akin
Dr. Michael Höhnerbach

President of the supervisory board:
Hans Kristian Langva

Registry office: district court Cologne
Register number: HRB 52 497




[ovirt-users] Recommended setup for a FC based storage domain

2014-06-01 Thread combuster
Hi,

I have a 4 node cluster setup and my storage options right now are an FC based 
storage, one partition per node on a local drive (~200GB each) and an NFS based 
NAS device. I want to set up the export and ISO domains on the NAS and there are 
no issues or questions regarding those two. I wasn't aware of any other options 
at the time for utilizing local storage (since this is a shared datacenter), so 
I exported a directory from each partition via NFS and it works. But I am a 
little in the dark with the following:

1. Are there any advantages to switching from NFS based local storage to a 
Gluster based domain with bricks for each partition? I guess it can only be 
performance wise, but maybe I'm wrong. If there are advantages, are there any 
tips regarding xfs mount options etc.?

2. I've created a volume on the FC based storage and exported it to all of the 
nodes in the cluster on the storage itself. I've configured multipathing 
correctly and added an alias for the WWID of the LUN so I can distinguish this 
one and any other future volumes more easily. At first I created a partition 
on it, but since oVirt only saw the whole LUN as a raw device I erased it before 
adding it as the FC master storage domain. I've imported a few VMs and pointed 
them to the FC storage domain. This setup works, but:

- All of the nodes see a device with the alias for the WWID of the volume, but 
only the node which is currently the SPM for the cluster can see the logical 
volumes inside. Also, when I set up high availability for VMs residing on 
the FC storage and select to start on any node in the cluster, they always 
start on the SPM. Can multiple nodes run different VMs on the same FC storage 
at the same time? (The logical answer would be that they can, but I wanted to 
be sure first.) I am not familiar with the logic oVirt uses to lock a VM's 
logical volume to prevent corruption.

- Fdisk shows that the logical volumes on the LUN of the FC volume are misaligned 
(the partitions don't end on a cylinder boundary), so I wonder if this is because 
I imported VMs with disks that were created on local storage before, and whether 
any _new_ VMs with disks on the FC storage would be properly aligned.
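For reference, the WWID alias mentioned in point 2 is typically configured in /etc/multipath.conf with a multipaths section, roughly like this (the WWID and alias below are placeholders, not values from this setup):

```
# /etc/multipath.conf - excerpt; WWID and alias are placeholders
multipaths {
    multipath {
        wwid   36005076801810523c000000000000001
        alias  fc_master_domain
    }
}
```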

This is a new setup with oVirt 3.4 (I did an export of all the VMs on 3.3 and, 
after a fresh installation of 3.4, imported them back again). I have room 
to experiment a little with 2 of the 4 nodes because currently they are free 
of any running VMs, but I have limited room for anything else that would 
cause unplanned downtime for the four virtual machines running on the other two 
nodes in the cluster (currently highly available, with their drives on the 
FC storage domain). All in all I have 12 VMs running, and I'm asking the 
list for advice and guidance before I make any changes.

Just trying to find as much info regarding all of this as possible before 
acting.

Thank you in advance,

Ivan


Re: [ovirt-users] Updated infra procedures

2014-06-01 Thread Eyal Edri


- Original Message -
 From: David Caro dcaro...@redhat.com
 To: users@ovirt.org, de...@ovirt.org, infra in...@ovirt.org
 Sent: Thursday, May 29, 2014 8:50:58 PM
 Subject: [ovirt-users] Updated infra procedures
 
 Hi everyone!
 
 From now on, in order to give the best response possible to any issue, we want
 to start using trac to manage every request we get. So please open a ticket in
 the trac [1] whenever you need something done. Of course, if you ping us on IRC
 we will still respond, and if it's a critical issue or a very trivial one we
 might solve it on the fly, but for any normal infra issue please open a trac
 ticket.
 
 The wiki page has been updated with that information too (thanks eyal) [2]

np, +1 for the initiative. I think the infra team can provide much better
support if all issues are reported via the trac.

thanks,

Eyal.

 
 [1] https://fedorahosted.org/ovirt/newticket
 [2] http://www.ovirt.org/Category:Infrastructure#How_we_work
 
 Thanks!
 
 --
 David Caro
 
 Red Hat S.L.
 Continuous Integration Engineer - EMEA ENG Virtualization R&D
 
 Email: dc...@redhat.com
 Web: www.redhat.com
 RHT Global #: 82-62605
 
 


Re: [ovirt-users] Recommended setup for a FC based storage domain

2014-06-01 Thread combuster
I need to scratch Gluster off, because the setup is based on CentOS 6.5, so 
essential prerequisites like qemu 1.3 and libvirt 1.0.1 are not met.

Any info regarding FC storage domain would be appreciated though.

Thanks

Ivan

On Sunday, 1. June 2014. 11.44.33 combus...@archlinux.us wrote:
 [...]



Re: [ovirt-users] yaml cloud-init custom scripts

2014-06-01 Thread Omer Frenkel


- Original Message -
 From: Nathanaël Blanchet blanc...@abes.fr
 To: users users@ovirt.org
 Sent: Friday, May 30, 2014 6:49:34 PM
 Subject: [ovirt-users] yaml  cloud-init custom scripts
 
 Hello,
 
 I've been unsuccessfully trying to make a custom yaml script work with the
 cloud-init option in 3.4.1.
 All the other native UI parameters are correctly configured (hostname...)
 
 My goal is to change the keyboard layout:
 write_files:
 - content: |
     # My new /etc/sysconfig/keyboard file
     KEYTABLE=fr
     MODEL=pc105
     LAYOUT=fr
     KEYBOARDTYPE=pc
   path: /etc/sysconfig/keyboard
   permissions: '0644'
 
 I can't see what is wrong, and there is nothing in the cloud-init logs on the
 guest (/var/log/cloud-init-output.log).

isn't it in /var/log/cloud-init.log ?

 Is it possible to see the whole yaml file generated by the engine somewhere?
 

The cloud-init data is attached to the VM as a CD-ROM.
You can mount it locally and look inside; the drive label is config-2 (look for 
it with blkid).
For me on Fedora 19 it always appears as /dev/sr1.
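A concrete sketch of this (the /dev/sr1 device path and the openstack/latest layout are assumptions based on the usual config-drive format; verify with blkid on your own guest):

```shell
# Find the cloud-init config drive by its volume label:
blkid -t LABEL=config-2
# Mount it read-only and inspect the files the engine generated:
mkdir -p /mnt/config-2
mount -o ro /dev/sr1 /mnt/config-2
ls /mnt/config-2/openstack/latest/
# user_data there should contain the generated YAML
umount /mnt/config-2
```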

 --
 Nathanaël Blanchet
 
 Supervision réseau
 Pôle exploitation et maintenance
 Département des systèmes d'information
 227 avenue Professeur-Jean-Louis-Viala
 34193 MONTPELLIER CEDEX 5
 Tél. 33 (0)4 67 54 84 55
 Fax  33 (0)4 67 54 84 14 blanc...@abes.fr
 


Re: [ovirt-users] VM external-VM name was created by Non interactive use

2014-06-01 Thread Gilad Chaplik
- Original Message -
 From: Nathanaël Blanchet blanc...@abes.fr
 To: users@ovirt.org
 Sent: Friday, May 30, 2014 4:13:00 PM
 Subject: Re: [ovirt-users] VM external-VM name was created by Non interactive 
 use
 
 I've just tried to remove the external-azerty vm after deleting azerty, but
 external-azerty still always comes back! I tried after restarting the engine,
 but it seems I can't remove it anymore!

AFAIK, the external vm comes from the host - i.e. you have a running vm on 
the host that wasn't initialized by your particular engine.
Try to destroy it using the vdsClient interface on the host.

 
 Le 30/05/2014 15:06, Nathanaël Blanchet a écrit :
 
 
 Hello,
 
 I imported a vm as a template from the Glance public repo included in oVirt
 3.4. When creating a new VM (azerty) based on this template, I used Run Once
 with cloud-init. The first boot behaves normally and my vm azerty comes up.
 When trying to reboot, it always fails and I can't use it anymore. In
 addition, a new empty external-azerty vm is created with the external
 prefix by a mysterious non interactive user. When I delete it, it is
 automatically recreated unless I remove azerty.
 Can anybody explain?
 
 2014-05-30 14:30:56,249 INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
 (DefaultQuartzScheduler_Worker-63) [34c4344f] Correlation ID: 34c4344f, Job
 ID: ab2120a6-82e4-4312-a288-fa3e127cc9bc, Call Stack: null, Custom Event ID:
 -1, Message: VM external-azerty was created by Non interactive user.
 2014-05-30 14:30:56,254 INFO
 [org.ovirt.engine.core.bll.AddVmFromScratcCommand]
 (DefaultQuartzScheduler_Worker-63) [34c4344f] Lock freed to object
 EngineLock [exclusiveLocks= key: external-azerty value: VM_NAME
 , sharedLocks= ]
 --
 Nathanaël Blanchet
 
 Supervision réseau
 Pôle exploitation et maintenance
 Département des systèmes d'information
 227 avenue Professeur-Jean-Louis-Viala
 34193 MONTPELLIER CEDEX 5
 Tél. 33 (0)4 67 54 84 55
 Fax  33 (0)4 67 54 84 14 blanc...@abes.fr
 
 
 
 


Re: [ovirt-users] yaml cloud-init custom scripts

2014-06-01 Thread Shahar Havivi
On 01.06.14 11:20, Omer Frenkel wrote:
 
 
 [...]
First, look at the mounted content as Omer suggested and check whether the YAML
output is as you expected.
You can also compare with the custom-script handling added in this commit:
http://gerrit.ovirt.org/gitweb?p=ovirt-engine.git;a=commit;h=d46ebd8712f369b924e199b82197e59568a686ff
See if adding this content works for your setup:
write_files:
-   content: |
        # some file content
    path: /root/myfile

This sample was tested and worked under RHEL 6 and Fedora 19.
Please note that you need to indent with a tab/4 spaces as shown above.

Which OS and which version of cloud-init are you using?
 


Re: [ovirt-users] Create two Node Cluster

2014-06-01 Thread ml ml
Hello Rene,

yes, the package is installed:

[root@ovirt-node01 ~]# rpm -qa | grep vdsm
vdsm-gluster-4.14.6-0.el6.noarch
vdsm-python-zombiereaper-4.14.6-0.el6.noarch
vdsm-xmlrpc-4.14.6-0.el6.noarch
vdsm-4.14.6-0.el6.x86_64
vdsm-cli-4.14.6-0.el6.noarch
vdsm-python-4.14.6-0.el6.x86_64


[root@ovirt-node02 ~]# rpm -qa | grep vdsm
vdsm-python-zombiereaper-4.14.6-0.el6.noarch
vdsm-xmlrpc-4.14.6-0.el6.noarch
vdsm-4.14.6-0.el6.x86_64
vdsm-cli-4.14.6-0.el6.noarch
vdsm-gluster-4.14.6-0.el6.noarch
vdsm-python-4.14.6-0.el6.x86_64

How else can we debug this? What info do you need to help?

Thanks a lot,
Mario



On Sun, Jun 1, 2014 at 7:17 PM, René Koch rk...@linuxland.at wrote:

 Hi,


 On 31.05.2014 19:29, ml ml wrote:

 Hello List,

 i just installed 3.4.

 1.) I created a DataCenter:
 --
 Type: Shared
 Compatibility Versio: 3.4
 Quota Mode: disabled


 2.) i created the Cluster
 --
 Compatibility Version: 3.4
 Enable Virt Service: yes
 Enable Gluster Service: yes

 3.) i added two hosts
 -
 - they are up and running


 Do you have vdsm-gluster installed on your hosts?



  How can I now add my volumes and bricks?
  I don't have the expected Volumes tab to add bricks.

  If I go to Storage I can only add Data / GlusterFS with a path to
  mount glusterfs. I guess this is for already existing gluster shares?

 Does it not work like this on page 15,16,17?


 It should work like this - I did such a setup last week, but with dedicated
 gluster and virtualization clusters. The Volumes tab appeared after enabling
 the Gluster service in my gluster cluster.


 http://www.ovirt.org/images/5/59/Ovirt-fosdem2013-gluster.pdf

 Thanks,
 Mario





Re: [ovirt-users] Create two Node Cluster

2014-06-01 Thread ml ml
Hello Rene,

I got help on the IRC channel from jvandewege. It turned out that glusterd
was only running on one node (dunno why just on one) and that glusterd was
not enabled as a service on both nodes (CentOS 6.5).
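For anyone hitting the same symptom, the fix boils down to commands along these lines on each CentOS 6.5 node (a sketch, not copied from the IRC session):

```shell
# On each gluster node (EL6 SysV-style):
service glusterd status     # check whether the daemon is running
service glusterd start      # start it if it is not
chkconfig glusterd on       # make sure it comes back after a reboot
# Then verify the nodes see each other:
gluster peer status
```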

Should this be added to the wiki or fixed?

Thanks,
Mario


On Sun, Jun 1, 2014 at 9:11 PM, ml ml mliebher...@googlemail.com wrote:

 [...]




[ovirt-users] glusterfs resume vm paused state

2014-06-01 Thread Andrew Lau
Hi,

Has anyone had any luck with resuming a VM from a paused state on top
of a gluster share? Even when the VMs are marked as HA, if the gluster
storage goes down for a few seconds the VMs go into a paused state and
can never be resumed. They require a hard reset.

I recall not having this issue when using NFS.
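One thing worth trying before a hard reset (not confirmed in this thread, and assuming you have libvirt access on the host; vdsm protects libvirt with SASL, so credentials may be needed) is un-pausing the qemu process directly:

```shell
# On the host running the paused VM; "MyVM" is a placeholder name.
virsh list --all        # the stuck VM shows as "paused"
virsh resume MyVM       # attempt to un-pause it
```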

Thanks,
Andrew