- Original Message -
| From: Douglas Schilling Landgraf dougsl...@redhat.com
| To: users@ovirt.org
| Sent: Wednesday, July 10, 2013 2:32:44 AM
| Subject: [Users] FISL14 Conference Report - oVirt and Cloud EcoSystem
|
| http://dougsland.livejournal.com/122744.html
|
| See you there next
Hi,
we are developing an application where it would be great if multiple hosts
could have access to the same disk. I think that we can use features
like a shared disk or direct LUN to attach the same storage to multiple
VMs. However, to provide concurrent access to the resource, there should
be a
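That missing coordination layer is the crux of the question. A minimal single-host sketch (not from the thread) using flock(1) on an ordinary file shows the behaviour a cluster filesystem such as GFS2 or OCFS2 must provide across hosts; the lock file path is a throwaway placeholder:

```shell
# Stand-in for the shared disk: a throwaway temp file.
LOCKFILE=$(mktemp)

# First "writer": open the file on fd 9 and take an exclusive lock.
exec 9>"$LOCKFILE"
flock -x 9

# Second "writer": a non-blocking exclusive lock attempt is refused
# while the first lock is held.
if flock -xn "$LOCKFILE" -c 'true'; then
    result="acquired"
else
    result="refused"
fi
echo "$result"
```

A cluster filesystem performs the same arbitration, but with a distributed lock manager spanning all hosts instead of a single-host flock.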
Why not use gluster with xfs on the storage bricks?
http://www.gluster.org/
On Wed, Jul 10, 2013 at 7:15 AM, Piotr Szubiakowski
piotr.szubiakow...@nask.pl wrote:
Hi,
we are developing an application where it would be great if multiple hosts could
have access to the same disk. I think that we can
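For reference, the GlusterFS route suggested above could look roughly like this; the host names and brick paths are placeholders, and the bricks are assumed to be XFS-formatted as proposed:

```shell
# Run once, from host1: join the peers and build a replicated volume.
gluster peer probe host2
gluster volume create shared replica 2 host1:/bricks/b1 host2:/bricks/b1
gluster volume start shared

# On any client, mount the volume via the native glusterfs client.
mount -t glusterfs host1:/shared /mnt/shared
```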
Hi Piotr!
I've used OCFS2 outside of oVirt, so I can't tell you specifically about the VM
environment, but I suggest you use OCFS2 in place of GFS2. It is simpler
to implement, so there are fewer components to configure, and it takes care of
fencing for you.
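A rough sketch of that OCFS2 setup, assuming a hypothetical shared LUN at /dev/mapper/shared_lun that every host sees over Fibre Channel:

```shell
# Format once, from a single node; -N is the maximum number of nodes
# that may mount the filesystem at the same time.
mkfs.ocfs2 -L shared-data -N 4 /dev/mapper/shared_lun

# On every node: configure and start the o2cb cluster stack, then mount.
service o2cb configure
mount -t ocfs2 /dev/mapper/shared_lun /mnt/shared
```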
On 07/10/2013 08:15 AM, Piotr Szubiakowski wrote:
Hi,
Hi,
gluster is good in a scenario where we have many hosts with their own storage and
we aggregate these pieces into one shared storage. In this situation
data is transferred via Ethernet or InfiniBand. In our scenario we have
centralized storage accessed via Fibre Channel. In this situation it
Hi Eduardo,
yes, the fencing method used in OCFS2 is probably better for virtualized
environments. Thanks for the advice!
Many thanks,
Piotr
On 10.07.2013 13:32, Eduardo Ramos wrote:
Hi Piotr!
I've used OCFS2 outside of oVirt, so I can't tell you specifically about
the VM environment, but I suggest you
On 07/10/2013 07:53 AM, Piotr Szubiakowski wrote:
Hi,
gluster is good in a scenario where we have many hosts with their own storage and we
aggregate these pieces into one shared storage. In this situation data is
transferred via Ethernet or InfiniBand. In our scenario we have centralized
storage
Minutes:
http://ovirt.org/meetings/ovirt/2013/ovirt.2013-07-10-14.00.html
Minutes (text):
http://ovirt.org/meetings/ovirt/2013/ovirt.2013-07-10-14.00.txt
Log:
http://ovirt.org/meetings/ovirt/2013/ovirt.2013-07-10-14.00.log.html
#ovirt: oVirt Weekly Meeting
Hi Karli,
'Restore' certificates basically means taking the backup of
/etc/pki/ovirt-engine/certs and /keys and restoring them into 3.2 after
installation.
--dont-drop-database will do exactly that - leave the DB intact; that can be to
your benefit in some cases.
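The certificate backup/restore step described above might be sketched like this; the paths come from the message, while the archive name is a placeholder:

```shell
# Before reinstalling: save the engine certificates and keys.
tar czf ovirt-pki-backup.tar.gz \
    /etc/pki/ovirt-engine/certs /etc/pki/ovirt-engine/keys

# After installing 3.2: unpack them back into place.
tar xzf ovirt-pki-backup.tar.gz -C /
```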
I'll be happy to hear on your
Hi,
I started to learn oVirt by following the oVirt Quick Start Guide.
I used one server as the ovirt.engine and another server as the ovirt.host.
Both servers run Fedora 18.
After executing the command engine-setup on the ovirt.engine, I can get into
the admin portal to start the
On 07/10/2013 05:33 PM, Piotr Szubiakowski wrote:
The way that oVirt manages storage domains accessed via FC is very smart.
There is a separate logical volume for each virtual disk. But I think that
a logical volume can be touched by only one host at a time. Is
it possible that two hosts
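From the host side this can be sketched as follows (the VG and LV names are hypothetical); plain LVM will let a second host activate the same LV, but nothing in LVM itself arbitrates concurrent writes:

```shell
# Every host on the FC fabric sees the storage-domain volume group:
vgs
# ...and one logical volume per virtual disk inside it:
lvs my_storage_domain_vg
# Activating the same LV on a second host is possible, but concurrent
# writes must be arbitrated by a cluster filesystem or lock manager
# layered on top:
lvchange -ay my_storage_domain_vg/disk_image_lv
```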
On 07/10/2013 07:19 PM, Zhang, Hongyi wrote:
Hi,
I started to learn oVirt by following the oVirt Quick Start Guide.
I used one server as the ovirt.engine and another server as the
ovirt.host. Both servers run Fedora 18.
After executing the command "engine-setup" on the ovirt.engine, I
Hi Itamar,
No trace was logged in the file /var/log/vdsm/vdsm.log during the creation
of the storage on the ovirt.engine side. But vdsm is running:
# systemctl status vdsmd.service
vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service;
New VDSM package is available for testing.
Changes
=======
- service: make iscsid a systemd dependency (BZ#981906)
- vdsm.spec: update python-pthreading
Where
=====
- oVirt testing update REPO (thanks mburns!)
- For koji users:
f19 -
On 07/10/2013 04:26 AM, Doron Fediuck wrote: