Re: [ovirt-users] Network Security / Separation

2014-04-25 Thread Ovirt User
Are you sure that you want to give L3 to a Neutron server instead of a real
router?

On 24 Apr 2014, at 09:08, squadra squa...@gmail.com wrote:

 Hi Folks,
 
 I am currently looking for a way to isolate each VM's network traffic
 so that no VM can sniff another's traffic. Currently I am playing
 around with the Neutron integration, which raises more question
 marks than it answers for now (even the documentation seems to be
 incomplete / outdated).
 
 Is there any other solution, one that does not require creating a new
 VLAN for each VM, to make sure that no one can sniff others' traffic?
 
 Cheers,
 
 Juergen
 
 -- 
 Sent from the Delta quadrant using Borg technology!


Re: [ovirt-users] ovirt 3.4 hosted-engine static IP

2014-04-25 Thread Sven Kieske
I'd encourage an answer to this topic.
I didn't dive into hosted engine
yet myself, but when I do, I'll start
with a static IP too.

So it seems a little strange to
support just DHCP.

I think there must be a way for static
networking too.

-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen


Re: [ovirt-users] ovirt 3.4 hosted-engine static IP

2014-04-25 Thread Doron Fediuck


- Original Message -
 From: Klas Mattsson k...@hekla.nu
 To: users@ovirt.org
 Sent: Thursday, April 24, 2014 5:44:51 PM
 Subject: [ovirt-users] ovirt 3.4 hosted-engine static IP
 
 Hello, new user here!
 
 I've noticed that the virtual interface ovirtmgmt gets an IP address by
 DHCP.
 Is there any good way to make it static instead?
 
 I've seen a workaround which is basically:
 run hosted-engine --deploy
 Let it configure the management interface.
 Cancel
 Change the IP address to static in:
 /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
 Run hosted-engine --deploy again.
 
 Would this be the current appropriate solution?
 
 Also, is it possible to change this later on, after the installation is
 finished?
 
 I would appreciate any assistance.
 
 With my best regards
 Klas Mattsson

Hi Klas,
the installer uses an existing interface for the bridge.
The idea is that an administrator would first configure the host networking
and only then download and install the software.
We have many cases where admins use VLANs and bonds, and static IP is probably
the easiest case ;)

So the right way is to first configure your eth/em interface with your
networking preferences (static IP in this case), and only then yum install
hosted engine.

Yes, it is possible to change it during the process and afterwards (you need
to shut down the relevant services), but the best way is to configure the
network first.
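
(For reference, a minimal static configuration of the underlying interface on
an EL6-style host might look like the following; the device name and addresses
are examples only:

    # /etc/sysconfig/network-scripts/ifcfg-eth0
    DEVICE=eth0
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=192.168.1.10
    NETMASK=255.255.255.0
    GATEWAY=192.168.1.1

followed by 'service network restart' before starting the deployment.)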

Doron


Re: [ovirt-users] ovirt 3.4 hosted-engine static IP

2014-04-25 Thread Doron Fediuck


- Original Message -
 From: Sven Kieske s.kie...@mittwald.de
 To: users@ovirt.org
 Sent: Friday, April 25, 2014 9:57:03 AM
 Subject: Re: [ovirt-users] ovirt 3.4 hosted-engine static IP
 
 I'd encourage an answer to this topic.
 I didn't dive into hosted engine
 yet myself, but when I do, I'll start
 with a static IP too.
 
 So it seems a little strange to
 support just DHCP.
 
 I think there must be a way for static
 networking too.
 

I agree Sven, and you probably missed my response ;)

The simple answer is that there are endless networking topologies and 
configurations,
so most admins would first configure the relevant networking and only then 
install
additional software (ovirt in this case).

See my other reply, and let's discuss it there if needed.

Doron


Re: [ovirt-users] Problem deploying host

2014-04-25 Thread Alon Bar-Lev


- Original Message -
 From: Juan Jose jj197...@gmail.com
 To: users@ovirt.org
 Sent: Friday, April 25, 2014 12:10:36 PM
 Subject: [ovirt-users] Problem deploying host
 
 Hello everybody,
 
 I'm trying to add a new host to my oVirt infrastructure, and during host
 installation I receive this error:
 
 Host host3 installation failed. Command returned failure code 1 during SSH
 session 'r...@ovirt-host3.siee.local'
 
 My host is a CentOS release 6.5 (Final) and the installation log file
 /var/log/ovirt-engine/host-deploy/ovirt-20140425105003-ovirt-host3.siee.local-b98f5fc.log
 says:
 
 2014-04-25 10:52:41 INFO otopi.plugins.ovirt_host_deploy.vdsm.hardware
 hardware._validate_virtualization:71 Hardware supports virtualization
 2014-04-25 10:52:41 DEBUG otopi.context context._executeMethod:123 Stage
 validation METHOD
 otopi.plugins.ovirt_host_deploy.vdsm.packages.Plugin._validation
 Loading mirror speeds from cached hostfile
 * base: ftp.cica.es
 * epel: ftp.cica.es
 * extras: ftp.cica.es
 * updates: ftp.cica.es
 2014-04-25 10:52:41 DEBUG otopi.context context._executeMethod:137 method
 exception
 Traceback (most recent call last):
 File /tmp/ovirt-FbkNK3vQrX/pythonlib/otopi/context.py, line 127, in
 _executeMethod
 method['method']()
 File
 /tmp/ovirt-FbkNK3vQrX/otopi-plugins/ovirt-host-deploy/vdsm/packages.py,
 line 75, in _validation
 'Cannot locate vdsm package, '
 RuntimeError: Cannot locate vdsm package, possible cause is incorrect
 channels
 2014-04-25 10:52:41 ERROR otopi.context context._executeMethod:146 Failed to
 execute stage 'Setup validation': Cannot locate vdsm package, possible cause
 is incorrect channels
 
 I'm attaching the installation log file. Could anybody give me some
 indication about this problem?
 
 Thank you very much in advance,

Please set up the oVirt yum repo, or use the ovirt-release package[1] to do so 
for you.

[1] http://resources.ovirt.org/pub/yum-repo/
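
In practice that boils down to something like this (a sketch; the release RPM
name assumes oVirt 3.4 - pick the one matching your target version from [1]):

    yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release34.rpm
    yum info vdsm   # should now resolve from the oVirt repo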

 
 Juanjo.
 


Re: [ovirt-users] Problem deploying host

2014-04-25 Thread Sven Kieske
Make sure you have the ovirt.org repo
installed on the host.

It looks like the installer can't find
vdsm, which is the case if the
ovirt.org repo is not installed.
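
A quick way to verify (a sketch; the repo id may differ depending on the
release package used):

    yum repolist enabled | grep -i ovirt   # the oVirt repo should be listed
    yum list vdsm                          # vdsm should show as available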

HTH

-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen


Re: [ovirt-users] qemu-kvm-rhev for el6

2014-04-25 Thread Michal Skrivanek

On Apr 24, 2014, at 06:09 , Paul Jansen vla...@yahoo.com.au wrote:

 hello,
 
 does anyone know if there is an existing bugzilla to track the release 
 of qemu-kvm-rhev rpms under EL (like CentOS)?
 I've looked at Bugzilla and the Google doc oVirt Planning & 
 Tracking with no luck
 
 best regards
 a
 
 I think this is the best fit that I have found so far: 
 https://bugzilla.redhat.com/show_bug.cgi?id=1009100

Yes. It got side-tracked by implementing the detection of whether it's there 
or not; that is now implemented.
We still want to track building it ourselves… I'm not sure how far we 
got… Sandro?

 


[ovirt-users] Storage domain - defining fields

2014-04-25 Thread Gabi C
Hello!

My setup: one engine on Fedora 19 with oVirt 3.4, managing 3 nodes on Fedora
19 + oVirt 3.4, all updated; those nodes also act as Gluster servers for 2
storage domains, each domain replicated across 3 bricks (so 3 bricks each).

I have a question regarding the fields one has to complete when
defining a new storage domain, and their significance.

First, the field Use Host: you have to fill it in with one of the nodes. Of
course you can use any of the hosts in the cluster, but what is its
significance/importance? If you reboot that node, should you expect
any issue with the storage?

The same question for the field Path (Gluster case): the path to the gluster
volume to mount, formatted as HOST:VOLUME, where HOST can be the same
as the above-mentioned Use Host field or any of the gluster cluster
members. What is its significance/importance?

Again, if you reboot that node, should you expect any issue with the
storage?

My experience with rebooting these nodes, after moving the VMs and putting
the node in Maintenance, was good, until yesterday, when after such a reboot
I got some corrupted VM images and subsequently had to restore them.

The error message follows:

Exit message: cannot read header
'/rhev/data-center/mnt/glusterSD/10.125.1.194:gluster__data3/5fe658b1-43ff-4bfd-97d6-28217cf415df/images/321ad473-3741-4825-8ac0-6c416aa8f490/1a21454d-9931-4ed8-ba38-7126528664a1':
Input/output error.


Did I miss some step(s), besides moving the VMs and putting the node in
maintenance, before rebooting a host node that also serves as a gluster node?
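
Regarding the Path field: the host essentially hands it to mount.glusterfs, so
the node named there matters at (re)mount time. One common mitigation (a
sketch; hostnames are examples, and the option requires a glusterfs client
version that supports it) is to add backup volfile servers to the storage
domain's mount options, roughly equivalent to:

    mount -t glusterfs -o backup-volfile-servers=node2:node3 \
        node1:/data3 /rhev/data-center/mnt/glusterSD/node1:_data3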

Any hint, link would be appreciated!

Thanks!


[ovirt-users] Hosted engine datastore crashed on adding a new datastore

2014-04-25 Thread Klas Mattsson

Hello all,

For some reason, when I try to add a second datastore (separate IP and 
empty share with 36:36 ownership), the share that holds the 
hosted-engine crashes.


This of course crashes the hosted engine.

When the share is up again, it's not sufficient to create a directory 
under /rhev/data-center/mnt/ with the same name as before and mount the 
share there.

The hosted-engine still refuses to boot up.

The load (pure I/O) on the NFS server goes up to about 11 as well.

Any ideas on where I should check?


Re: [ovirt-users] Hosted engine datastore crashed on adding a new datastore

2014-04-25 Thread Jiri Moskovcak

On 04/25/2014 11:43 AM, Klas Mattsson wrote:

Hello all,

For some reason, when I try to add a second datastore (separate IP and
empty share with 36:36 ownership), the share that holds the
hosted-engine crashes.

This of course crashes the hosted engine.

When the share is up again, it's not sufficient to create a directory
under /rhev/data-center/mnt/ with the same name as before and mount the
share there.
The hosted-engine still refuses to boot up.

The load (pure I/O) on the NFS server goes up to about 11 as well.

Any ideas on where I should check?


Hi Klas,
can you please attach the engine logs located in /var/log/ovirt-engine/? 
And what exactly crashes - just the engine, or the VM running the 
engine? If the whole VM with the engine crashes, then please also attach 
the logs from the host: /var/log/vdsm and /var/log/libvirt.


Thank you,
Jirka


Re: [ovirt-users] hosted engine health check issues

2014-04-25 Thread Martin Sivak
Hi Kevin,

can you please tell us which version of hosted-engine you are running?

rpm -q ovirt-hosted-engine-ha

Also, do I understand it correctly that the engine VM is running, but you see 
a bad status when you execute the hosted-engine --vm-status command?

If that is so, can you give us current logs from 
/var/log/ovirt-hosted-engine-ha?

--
Martin Sivák
msi...@redhat.com
Red Hat Czech
RHEV-M SLA / Brno, CZ

- Original Message -
 OK, I mounted the domain for the hosted engine manually and the agent came up.
 
 But vm-status:
 
 --== Host 2 status ==--
 
 Status up-to-date  : False
 Hostname   : 192.168.99.103
 Host ID: 2
 Engine status  : unknown stale-data
 Score  : 0
 Local maintenance  : False
 Host timestamp : 1398333438
 
 And in my engine, host02 HA is not active.
 
 
 2014-04-24 12:48 GMT+02:00 Kevin Tibi kevint...@hotmail.com:
 
  Hi,
 
  I tried rebooting my hosts and now [supervdsmServer] is defunct.
 
  /var/log/vdsm/supervdsm.log
 
 
  MainProcess|Thread-120::DEBUG::2014-04-24
  12:22:19,955::supervdsmServer::103::SuperVdsm.ServerCallback::(wrapper)
  return validateAccess with None
  MainProcess|Thread-120::DEBUG::2014-04-24
  12:22:20,010::supervdsmServer::96::SuperVdsm.ServerCallback::(wrapper) call
  validateAccess with ('qemu', ('qemu', 'kvm'),
  '/rhev/data-center/mnt/host01.ovirt.lan:_home_export', 5) {}
  MainProcess|Thread-120::DEBUG::2014-04-24
  12:22:20,014::supervdsmServer::103::SuperVdsm.ServerCallback::(wrapper)
  return validateAccess with None
  MainProcess|Thread-120::DEBUG::2014-04-24
  12:22:20,059::supervdsmServer::96::SuperVdsm.ServerCallback::(wrapper) call
  validateAccess with ('qemu', ('qemu', 'kvm'),
  '/rhev/data-center/mnt/host01.ovirt.lan:_home_iso', 5) {}
  MainProcess|Thread-120::DEBUG::2014-04-24
  12:22:20,063::supervdsmServer::103::SuperVdsm.ServerCallback::(wrapper)
  return validateAccess with None
 
  and one host doesn't mount the NFS share used for the hosted engine.
 
  MainThread::CRITICAL::2014-04-24
  12:36:16,603::agent::103::ovirt_hosted_engine_ha.agent.agent.Agent::(run)
  Could not start ha-agent
  Traceback (most recent call last):
File
  /usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/agent/agent.py,
  line 97, in run
  self._run_agent()
File
  /usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/agent/agent.py,
  line 154, in _run_agent
  hosted_engine.HostedEngine(self.shutdown_requested).start_monitoring()
File
  /usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py,
  line 299, in start_monitoring
  self._initialize_vdsm()
File
  /usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py,
  line 418, in _initialize_vdsm
  self._sd_path = env_path.get_domain_path(self._config)
File
  /usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/env/path.py, line
  40, in get_domain_path
  .format(sd_uuid, parent))
  Exception: path to storage domain aea040f8-ab9d-435b-9ecf-ddd4272e592f not
  found in /rhev/data-center/mnt
 
 
 
  2014-04-23 17:40 GMT+02:00 Kevin Tibi kevint...@hotmail.com:
 
  top
  1729 vdsm  20   0    0    0    0 Z 373.8  0.0 252:08.51
  ovirt-ha-broker <defunct>
 
 
  [root@host01 ~]# ps axwu | grep 1729
  vdsm  1729  0.7  0.0  0 0 ?  Zl   Apr02 240:24
  [ovirt-ha-broker] <defunct>
 
  [root@host01 ~]# ll
  /rhev/data-center/mnt/host01.ovirt.lan\:_home_NFS01/aea040f8-ab9d-435b-9ecf-ddd4272e592f/ha_agent/
  total 2028
  -rw-rw----. 1 vdsm kvm 1048576 23 avril 17:35 hosted-engine.lockspace
  -rw-rw----. 1 vdsm kvm 1028096 23 avril 17:35 hosted-engine.metadata
 
  cat /var/log/vdsm/vdsm.log
 
  Thread-120518::DEBUG::2014-04-23
  17:38:02,299::task::1185::TaskManager.Task::(prepare)
  Task=`f13e71f1-ac7c-49ab-8079-8f099ebf72b6`::finished:
  {'aea040f8-ab9d-435b-9ecf-ddd4272e592f': {'code': 0, 'version': 3,
  'acquired': True, 'delay': '0.000410963', 'lastCheck': '3.4', 'valid':
  True}, '5ae613a4-44e4-42cb-89fc-7b5d34c1f30f': {'code': 0, 'version': 3,
  'acquired': True, 'delay': '0.000412357', 'lastCheck': '6.8', 'valid':
  True}, 'cc51143e-8ad7-4b0b-a4d2-9024dffc1188': {'code': 0, 'version': 0,
  'acquired': True, 'delay': '0.000455292', 'lastCheck': '1.2', 'valid':
  True}, 'ff98d346-4515-4349-8437-fb2f5e9eaadf': {'code': 0, 'version': 0,
  'acquired': True, 'delay': '0.00817113', 'lastCheck': '1.7', 'valid':
  True}}
  Thread-120518::DEBUG::2014-04-23
  17:38:02,300::task::595::TaskManager.Task::(_updateState)
  Task=`f13e71f1-ac7c-49ab-8079-8f099ebf72b6`::moving from state preparing
  -> state finished
  Thread-120518::DEBUG::2014-04-23
  17:38:02,300::resourceManager::940::ResourceManager.Owner::(releaseAll)
  Owner.releaseAll requests {} resources {}
  Thread-120518::DEBUG::2014-04-23
  17:38:02,300::resourceManager::977::ResourceManager.Owner::(cancelAll)
  Owner.cancelAll 

Re: [ovirt-users] Hosted engine datastore crashed on adding a new datastore

2014-04-25 Thread Klas Mattsson

Hello,

The logs: http://hekla.nu/ovirt/ (seemed a bit rude with 1.5 MB in a mail).

Secondly, a couple of developments.

First of all, the entire NFS store crashed; no other machine could 
access it.

The error might very well be on that server (not oVirt related).
This meant that the hosted-engine went down, the entire VM.
After the NFS store came up again, it was possible to start it from the 
host without any issues, and the storage was added.

I could also add another datastore without a hitch after that.

The same thing happened after uploading a Windows 2012 ISO: the upload 
went fine (and that datastore was fine), but the other datastore crashed 
afterwards.

I later uploaded a virtio ISO and that went fine.

Thank you for any assistance.

With my best regards
Klas Mattsson

On 2014-04-25 12:48, Jiri Moskovcak wrote:

On 04/25/2014 11:43 AM, Klas Mattsson wrote:

Hello all,

For some reason, when I try to add a second datastore (separate IP and
empty share with 36:36 ownership), the share that holds the
hosted-engine crashes.

This of course crashes the hosted engine.

When the share is up again, it's not sufficient to create a directory
under /rhev/data-center/mnt/ with the same name as before and mount the
share there.
The hosted-engine still refuses to boot up.

The load (pure I/O) on the NFS server goes up to about 11 as well.

Any ideas on where I should check?


Hi Klas,
can you please attach the engine logs located in 
/var/log/ovirt-engine/? And what exactly crashes - just the engine, or 
the VM running the engine? If the whole VM with the engine crashes, then 
please also attach the logs from the host: /var/log/vdsm and /var/log/libvirt.


Thank you,
Jirka




[ovirt-users] oVirt 3.4 - Hosted Engine: Cluster Reboot procedure

2014-04-25 Thread Daniel Helgenberger
Hello ovirt-users,

after playing around with my ovirt 3.4 hosted engine two node HA cluster
I have devised a procedure on how to restart the whole cluster after a
power loss / normal shutdown. This assumes all HA-Nodes have been taken
offline. This also applies partly to rebooted HA nodes.

Please feel free to ask questions and/or comment on improvements. Most
of the things should be obsoleted by future updates anyway.

Note 1:
The problem IMHO seems to be the not-yet-connected NFS storage domain,
resulting in the HA-Agent crash / hang. The ha-broker service should be
up and running all the time. Please check this.

Note 2: 
My setup consists of two nodes; 'all nodes' means the task has to be
performed on every node HA node in the cluster.

Note 3:
By 'Login' I mean SSH or local access.


Part A: SHUTDOWN THE CLUSTER
Prerequisite: oVirt HE cluster running, should be taken offline for
maintenance:
 1. In oVirt, shut down all VMs except the HostedEngine.
 2. Login to one cluster node and run 'hosted-engine
--set-maintenance --mode=global' to put the cluster into global
maintenance
 3. Login to ovirt engine VM and shut it down with 'shutdown -h now'
 4. Login to one cluster node and run 'hosted-engine --vm-status' to
check if the engine is really down. 
 5. Shutdown all HA nodes subsequently.


Part B: STARTING THE CLUSTER
Prerequisite: oVirt HE cluster down, NFS storage server running and
exporting the vdsm share. A consolidated command sketch follows this list.
 1. Start all nodes and wait for them to boot up.
 2. Login to one cluster node. Check the status of the following
services: vdsm, ovirt-ha-agent, ovirt-ha-broker. All should be
running except ovirt-ha-agent, which is down and in a 'locked'
state.
 3. Check 'hosted-engine --vm-status', this should result in a
python stack trace (crash).
 4. On all cluster nodes, connect the storage pool: 'hosted-engine
--connect-storage'. Now, 'hosted-engine --vm-status' runs and
reports 'up to date: False' and 'unknown-stale-data' for all
nodes.
 5. On all cluster nodes, start the 'ovirt-ha-agent' service:
'service ovirt-ha-agent start'
 6. Wait a few minutes for the ha-broker and the agent to collect
the cluster state.
 7. Login to one cluster node. Check 'hosted-engine --vm-status'
until you have cluster nodes 'status-up-to-date: True' and
'score: 2400'
 8. If the cluster was shut down by yourself and is in global
maintenance, remove the maintenance mode with 'hosted-engine
--set-maintenance --mode=none'. Now, the system should do a FSM
reinitialize and start the HostedEngine by itself.¹ If it was
not in maintenance (eg. power fail) the engine should be started
as soon as one host gets a score of 2400.
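
As a convenience, steps 2-5 of Part B roughly condense to the following
commands, run on every HA node (a sketch; 'vdsmd' is assumed as the init
service name behind vdsm on EL6-style hosts):

    service vdsmd status             # step 2: should be running
    service ovirt-ha-broker status   # should be running
    service ovirt-ha-agent status    # expected down ('locked') at this point
    hosted-engine --vm-status        # step 3: expect a python traceback
    hosted-engine --connect-storage  # step 4: connect the storage pool
    hosted-engine --vm-status        # should now report stale data
    service ovirt-ha-agent start     # step 5: start the agent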


Part C: STARTING A SINGLE NODE
Prerequisite: oVirt HE cluster up, HostedEngine running. One ha node was
taken offline by local maintenance in oVirt and rebooted.
 1. Follow steps 1-5 of Part B
 2. In oVirt, navigate to Cluster, Hosts and activate the node
previously in maintenance.

---
1 I observed the following things:
  * If you use the command 'hosted-engine --vm-shutdown' instead of
logging in to the ovirt HE and doing a local shutdown, the Default
Data Center is set to Non Responsive and then Contending after
the reboot. I highly suspect an unclean shutdown caused by running the
command. Further, it waits about two minutes before the shutdown.
  * If you use the command 'hosted-engine --vm-start' on a cluster
in global maintenance, wait for successful start ({'health':
'good', 'vm': 'up', 'detail': 'up'}) and remove the maintenance
status, the engine gets restarted once. By removing the
maintenance first and letting ha-agent do the work, the engine
is not restarted.


Cheers,
Daniel
-- 

Daniel Helgenberger 
m box bewegtbild GmbH 

P: +49/30/2408781-22
F: +49/30/2408781-10

ACKERSTR. 19 
D-10115 BERLIN 


www.m-box.de  www.monkeymen.tv 

Geschäftsführer: Martin Retschitzegger / Michaela Göllner
Handelsregister: Amtsgericht Charlottenburg / HRB 112767 




[ovirt-users] Using Virtio-SCSI passthrough on SCSI devices

2014-04-25 Thread Daniel Helgenberger
Hello,

does anyone have an idea on how to accomplish this? In my particular
setup, I need an FC tape drive passed through to the VM.
Note, passing through FC LUNs works flawlessly. 

If I understood virtio-scsi correctly, this should be possible on
libvirt's side.


Thanks!


-- 

Daniel Helgenberger 
m box bewegtbild GmbH 

P: +49/30/2408781-22
F: +49/30/2408781-10

ACKERSTR. 19 
D-10115 BERLIN 


www.m-box.de  www.monkeymen.tv 

Geschäftsführer: Martin Retschitzegger / Michaela Göllner
Handelsregister: Amtsgericht Charlottenburg / HRB 112767 




Re: [ovirt-users] Using Virtio-SCSI passthrough on SCSI devices

2014-04-25 Thread Itamar Heim

On 04/25/2014 04:40 PM, Daniel Helgenberger wrote:

Hello,

does anyone have an idea on how to accomplish this? In my particular
setup, I need an FC tape drive passed through to the VM.
Note, passing through FC LUNs works flawlessly.

If I understood virtio-scsi correctly, this should be possible on
libvirt's side.


Thanks!







what exactly is the issue?


Re: [ovirt-users] Using Virtio-SCSI passthrough on SCSI devices

2014-04-25 Thread Sven Kieske


On 25.04.2014 15:40, Daniel Helgenberger wrote:
 Hello,
 
 does anyone have an idea on how to accomplish this? In my particular
 setup, I need an FC tape drive passed through to the VM.
 Note, passing through FC LUNs works flawlessly. 
 
 If I understood virtio-scsi correctly, this should be possible on
 libvirt's side.

I don't know how much of this is implemented, so a dev
must step in, but it is at least designed; maybe it already works.

take a look yourself at:

http://www.ovirt.org/Features/Virtio-SCSI#Adding_a_DirectLUN_Disk_.28lun_passthrough.29

HTH

-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen


Re: [ovirt-users] qemu-kvm-rhev for el6

2014-04-25 Thread Itamar Heim

On 04/25/2014 12:26 PM, Michal Skrivanek wrote:


On Apr 24, 2014, at 06:09 , Paul Jansen vla...@yahoo.com.au wrote:


hello,

does anyone know if there is an existing bugzilla to track the release
of qemu-kvm-rhev rpms under EL (like CentOS)?
I've looked at Bugzilla and the Google doc oVirt Planning &
Tracking with no luck

best regards
a


I think this is the best fit that I have found so far: 
https://bugzilla.redhat.com/show_bug.cgi?id=1009100


Yes. It got side-tracked by implementing the detection of whether it's there 
or not; that is now implemented.
We still want to track building it ourselves… I'm not sure how far we 
got… Sandro?



It's built nightly from CentOS sources at:
http://jenkins.ovirt.org/view/Packaging/job/qemu-kvm-rhev_create_rpms_el6/lastSuccessfulBuild/artifact/rpms/
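
A host-side check of which variant is currently installed (a trivial sketch):

    rpm -q qemu-kvm qemu-kvm-rhev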



Re: [ovirt-users] Using Virtio-SCSI passthrough on SCSI devices

2014-04-25 Thread Daniel Helgenberger
Hello Itamar,

sorry - forgot to mention that.

I cannot 'see' the device in oVirt as an external LUN. This might be because
dm-multipath is ignoring this device for some reason unknown to me; I
only see the dm'ed LUNs.

I did not tamper with multipath.conf so far because it seems to be
managed by oVirt...

cheers,



On Fr, 2014-04-25 at 16:42 +0300, Itamar Heim wrote:
 On 04/25/2014 04:40 PM, Daniel Helgenberger wrote:
  Hello,
 
  does anyone have an idea on how to accomplish this? In my particular
  setup, I need a FC tape drive passed though to the vm.
  Note, passing throuh FC - LUNs works flawlessly.
 
  If I understood Virtio -SCSI correctly, this should be possible from
  libvirt's part.
 
 
  Thanks!
 
 
 
 
 
 
 what exactly is the issue?

-- 

Daniel Helgenberger 
m box bewegtbild GmbH 

P: +49/30/2408781-22
F: +49/30/2408781-10

ACKERSTR. 19 
D-10115 BERLIN 


www.m-box.de  www.monkeymen.tv 

Geschäftsführer: Martin Retschitzegger / Michaela Göllner
Handelsregister: Amtsgericht Charlottenburg / HRB 112767 




Re: [ovirt-users] Using Virtio-SCSI passthrough on SCSI devices

2014-04-25 Thread Jiri Belka
On Fri, 25 Apr 2014 13:40:09 +
Daniel Helgenberger daniel.helgenber...@m-box.de wrote:

 Hello,
 
 does anyone have an idea on how to accomplish this? In my particular
 setup, I need an FC tape drive passed through to the VM.
 Note, passing through FC LUNs works flawlessly. 
 
 If I understood virtio-scsi correctly, this should be possible on
 libvirt's side.

I may be wrong, but my understanding is that dm-mpio works at the block
layer and thus does not support multipathing for tape/CD devices.

But I could be wrong; I got this info from an OpenBSD paper comparing
SCSI multipath implementations.

j.


Re: [ovirt-users] Network Security / Separation

2014-04-25 Thread Darrell Budic
Check out the VDSM hooks; the isolatedprivatevlan hook will probably accomplish 
what you want.

  -Darrell
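
A rough outline of wiring the hook up (a sketch - the package name comes from
the vdsm hooks tree, and the custom property wiring is an assumption; check
the hook's documentation for the exact property format):

    # on every host:
    yum install vdsm-hook-isolatedprivatevlan
    # on the engine, expose the matching VM custom property:
    engine-config -s "UserDefinedVMProperties=isolatedprivatevlan=^.*$"
    service ovirt-engine restart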

On Apr 24, 2014, at 2:08 AM, squadra squa...@gmail.com wrote:

 Hi Folks,
 
 I am currently looking for a way to isolate each VM's network traffic
 so that no VM can sniff another's traffic. Currently I am playing
 around with the Neutron integration, which raises more question
 marks than it answers for now (even the documentation seems to be
 incomplete / outdated).
 
 Is there any other solution, one that does not require creating a new
 VLAN for each VM, to make sure that no one can sniff others' traffic?
 
 Cheers,
 
 Juergen
 
 -- 
 Sent from the Delta quadrant using Borg technology!


Re: [ovirt-users] Using Virtio-SCSI passthrough on SCSI devices

2014-04-25 Thread Daniel Helgenberger

On Fr, 2014-04-25 at 17:19 +0200, Jiri Belka wrote:
 On Fri, 25 Apr 2014 13:40:09 +
 Daniel Helgenberger daniel.helgenber...@m-box.de wrote:
 
  Hello,
  
  does anyone have an idea on how to accomplish this? In my particular
  setup, I need an FC tape drive passed through to the VM.
  Note, passing through FC LUNs works flawlessly. 
  
  If I understood virtio-scsi correctly, this should be possible on
  libvirt's side.
 
 I may be wrong, but my understanding is that dm-mpio works at the block
 layer and thus does not support multipathing for tape/CD devices.
 
 But I could be wrong; I got this info from an OpenBSD paper comparing
 SCSI multipath implementations.
 
 j.
No, I think so too - it does not support tape drives as such (but I could
set up a tape library as a LUN to use it with LTFS, for instance).

The point is a different one: virtio-scsi should be able to pass through
any SCSI device, like tape drives and enclosures. I know this works in
Proxmox: http://pve.proxmox.com/wiki/Tape_Drives

Does it work in oVirt, too? In the GUI I only see block LUNs from DM.
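
Independent of oVirt's LUN list, the host side can be inspected like this (a
sketch; lsscsi / sg3_utils must be installed):

    lsscsi -g        # tape drives appear with type 'tape' and a /dev/sg* node
    multipath -ll    # tapes are expected to be absent here: dm-multipath
                     # only maps block (disk) devices

If the drive shows up as a /dev/sg* device on the host but not in the GUI,
that matches the behaviour above - the GUI only lists dm-multipath block LUNs.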

-- 

Daniel Helgenberger 
m box bewegtbild GmbH 

P: +49/30/2408781-22
F: +49/30/2408781-10

ACKERSTR. 19 
D-10115 BERLIN 


www.m-box.de  www.monkeymen.tv 

Geschäftsführer: Martin Retschitzegger / Michaela Göllner
Handelsregister: Amtsgericht Charlottenburg / HRB 112767 




Re: [ovirt-users] Using Virtio-SCSI passthrough on SCSI devices

2014-04-25 Thread Jeremiah Jahn
For what it's worth, I'm also in need of this feature: some way to
make a SCSI tape available to a VM.

On Fri, Apr 25, 2014 at 12:01 PM, Daniel Helgenberger
daniel.helgenber...@m-box.de wrote:

 [snip - quoted thread trimmed; see the messages above]



[ovirt-users] Fw: can SPM run on the ovirt-engine host?

2014-04-25 Thread Moti Asayag


- Original Message -
 From: Tamer Lima tamer.amer...@gmail.com
 To: Moti Asayag masa...@redhat.com
 Cc: users@ovirt.org
 Sent: Thursday, April 24, 2014 7:51:39 PM
 Subject: Re: [ovirt-users] can SPM run on the ovirt-engine host?
 
 Hi,
 this is an excerpt of engine.log from serv-0202 (the engine server);
 the SPM was defined on serv-0203
 
 
 log from serv-0202 (engine server):
   2014-04-24 13:18:11,746 INFO
  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
 (DefaultQuartzScheduler_Worker-82) [1bb7dfd0] Correlation ID: null, Call
 Stack: null, Custom Event ID: -1, Message: Used Network resources of host
 srv-0202 [96%] exceeded defined threshold [95%].
   2014-04-24 13:18:22,578 INFO
  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
 (DefaultQuartzScheduler_Worker-60) Correlation ID: null, Call Stack: null,
 Custom Event ID: -1, Message: Used Network resources of host srv-0203
 [98%] exceeded defined threshold [95%].
 
 
 below is the log from before the VM creation procedure. The log starts at
 the moment I press to create a new virtual machine:
 
 (The VM creation procedure takes more than 1 hour. I ran tcpdump
 on srv-0203 (the SPM); even when creating with thin provisioning, I
 collected 500 GB of traffic between serv-0202 and serv-0203. When the VM
 is finally created there is no real disk allocation from oVirt, only my
 tcpdump log file. I do not know why this traffic exists.)
 

Allon, could you advise?

 
 log from serv-0202 (engine server):
 
 2014-04-24 13:11:36,241 INFO
  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
 (DefaultQuartzScheduler_Worker-20) [1a138258] Correlation ID: 1a138258,
 Call Stack: null, Custom Event ID: -1, Message: Invalid status on Data
 Center Default. Setting status to Non Responsive.
 2014-04-24 13:11:36,255 INFO
  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
 (DefaultQuartzScheduler_Worker-20) [1a138258] hostFromVds::selectedVds -
 srv-0202, spmStatus Free, storage pool Default
 2014-04-24 13:11:36,258 INFO
  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
 (DefaultQuartzScheduler_Worker-20) [1a138258] starting spm on vds srv-0202,
 storage pool Default, prevId -1, LVER -1
 2014-04-24 13:11:36,259 INFO
  [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
 (DefaultQuartzScheduler_Worker-20) [1a138258] START,
 SpmStartVDSCommand(HostName = srv-0202, HostId =
 fbdf0655-6560-4e12-a95a-875592f62cb5, storagePoolId =
 5849b030-626e-47cb-ad90-3ce782d831b3, prevId=-1, prevLVER=-1,
 storagePoolFormatType=V3, recoveryMode=Manual, SCSIFencing=false), log id:
 778a334c
 2014-04-24 13:11:36,310 INFO
  [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
 (DefaultQuartzScheduler_Worker-20) [1a138258] spmStart polling started:
 taskId = 198c7765-38cb-42e7-9349-93ca43be7066
 2014-04-24 13:11:37,315 ERROR
 [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetTaskStatusVDSCommand]
 (DefaultQuartzScheduler_Worker-20) [1a138258] Failed in HSMGetTaskStatusVDS
 method
 2014-04-24 13:11:37,316 INFO
  [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
 (DefaultQuartzScheduler_Worker-20) [1a138258] spmStart polling ended:
 taskId = 198c7765-38cb-42e7-9349-93ca43be7066 task status = finished
 2014-04-24 13:11:37,316 ERROR
 [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
 (DefaultQuartzScheduler_Worker-20) [1a138258] Start SPM Task failed -
 result: cleanSuccess, message: VDSGenericException: VDSErrorException:
 Failed to HSMGetTaskStatusVDS, error = Storage domain does not exist, code
 = 358
 2014-04-24 13:11:37,363 INFO
  [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
 (DefaultQuartzScheduler_Worker-20) [1a138258] spmStart polling ended, spm
 status: Free
 2014-04-24 13:11:37,364 INFO
  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
 (DefaultQuartzScheduler_Worker-20) [1a138258] START,
 HSMClearTaskVDSCommand(HostName = srv-0202, HostId =
 fbdf0655-6560-4e12-a95a-875592f62cb5,
 taskId=198c7765-38cb-42e7-9349-93ca43be7066), log id: 6e6ad022
 2014-04-24 13:11:37,409 INFO
  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
 (DefaultQuartzScheduler_Worker-20) [1a138258] FINISH,
 HSMClearTaskVDSCommand, log id: 6e6ad022
 2014-04-24 13:11:37,409 INFO
  [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
 (DefaultQuartzScheduler_Worker-20) [1a138258] FINISH, SpmStartVDSCommand,
 return:
 org.ovirt.engine.core.common.businessentities.SpmStatusResult@dfe925d, log
 id: 778a334c
 2014-04-24 13:11:37,411 INFO
  [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand]
 (DefaultQuartzScheduler_Worker-20) [443b1ed8] Running command:
 SetStoragePoolStatusCommand internal: true. Entities affected :  ID:
 5849b030-626e-47cb-ad90-3ce782d831b3 Type: StoragePool
 2014-04-24 13:11:37,416 INFO
  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
 

Re: [ovirt-users] oVirt 3.4 - Hosted Engine: Cluster Reboot procedure

2014-04-25 Thread 适兕
Hi:
   Thanks, great work.

   Why not add this to a wiki page?




2014-04-25 21:08 GMT+08:00 Daniel Helgenberger daniel.helgenber...@m-box.de:

 [snip - full procedure quoted; see Daniel's original message above]




-- 
Independence of thought, freedom of spirit.
--Chen Yinke


Re: [ovirt-users] Ovirt snapshot failing on one VM

2014-04-25 Thread Steve Dainard
Restarting vdsm and the hosts didn't do anything helpful.

I was able to clone from the latest snapshot, then live-snapshot the new
cloned VM.

After upgrading the engine to 3.4 and upgrading my hosts, I can now
live-snapshot this VM.


*Steve *


On Thu, Apr 24, 2014 at 1:48 AM, Itamar Heim ih...@redhat.com wrote:

 On 04/23/2014 09:57 PM, R P Herrold wrote:

 On Wed, 23 Apr 2014, Steve Dainard wrote:

 I have other VMs with the same number of snapshots without this problem.
 No conclusion jumping going on. More interested in what the best practice
 is for VMs that accumulate snapshots over time.


 For some real-world context, we seem to accumulate snapshots
 using our local approach, and are not that focused on, or
 attentive about, removing them.  The 'high-water mark' is 39, on
 a machine that has been around since it was provisioned:
 2010-01-05

 [root@xxx backups]# ./count-snapshots.sh | sort -n | tail -3
 38 vm_64099
 38 vm_98036
 39 vm_06359

 Accumulating large numbers of snapshots seems to be more a
 function of 'pets' than of ephemeral 'cattle'.

 I wrote the first paragraph without looking up the 'owners' of
 the images. As I dereference the VM id's, all of the top ten
 in that list turn out to be mailservers, radius servers, name
 servers, and such, where the business unit owners chose not
 (or neglect) to 'winnow' their herd.  There are no ephemeral
 use units in the top ten

 -- Russ herrold


 please note there is a recommended limit of no more than 500
 snapshots per block storage domain, due to LVM performance issues with
 a high number of LVs. Each disk/snapshot is an LV.
 NFS doesn't have this limitation.
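
A quick way to check how close a block domain is to that limit (a sketch; run
on a connected host, substituting the storage domain's UUID, which is also the
VG name):

    lvs --noheadings <storage-domain-uuid> | wc -l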
