Re: [ovirt-users] Multiple Data Storage Domains

2016-11-19 Thread Gary Pedretty
Solved:

Changing the second storage domain to a GlusterFS distributed-replicate volume with
sharding turned on works great.   Thanks for the solution.
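
For anyone hitting the same problem, here is a minimal sketch of what that fix
looks like at the gluster CLI level. Hostnames and brick paths are taken from
this thread; the brick layout and shard size are assumptions to adapt locally:

# Sketch only: a replica 3 volume built from the bricks named in this thread.
# Add further triplets of bricks to turn it into a distributed-replicate layout.
gluster volume create data2 replica 3 \
  fai-kvm-1-vmn.ravnalaska.net:/kvm2/gluster/data2/brick \
  fai-kvm-2-vmn.ravnalaska.net:/kvm2/gluster/data2/brick \
  fai-kvm-3-vmn.ravnalaska.net:/kvm2/gluster/data2/brick

# Store large VM images as fixed-size shards instead of striping them.
gluster volume set data2 features.shard on
gluster volume set data2 features.shard-block-size 512MB

gluster volume start data2

Sharding keeps every shard a whole replicated file, so self-heal only has to
copy the shards that actually changed, which is why it is the recommended
replacement for striping with VM images.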


Gary



Gary Pedretty  g...@ravnalaska.net 

Systems Manager  www.flyravn.com 

Ravn Alaska   /\907-450-7251
5245 Airport Industrial Road /  \/\ 907-450-7238 fax
Fairbanks, Alaska  99709/\  /\ \ Second greatest commandment
Serving All of Alaska  /  \/  /\  \ \/\   “Love your neighbor as
Really loving the record green up date! Summmer!!   yourself” Matt 22:39

> On Nov 7, 2016, at 2:10 AM, Sahina Bose  wrote:
> 
> 
> 
> On Mon, Nov 7, 2016 at 3:27 PM, Gary Pedretty wrote:
> [root@fai-kvm-1-gfs admin]# gluster volume status data2
> Status of volume: data2
> Gluster process                                               TCP Port  RDMA Port  Online  Pid
> -----------------------------------------------------------------------------------------------
> Brick fai-kvm-1-vmn.ravnalaska.net:/kvm2/gluster/data2/brick  49156     0          Y       3484
> Brick fai-kvm-2-vmn.ravnalaska.net:/kvm2/gluster/data2/brick  49156     0          Y       34791
> Brick fai-kvm-3-vmn.ravnalaska.net:/kvm2/gluster/data2/brick  49156     0          Y       177340
> Brick fai-kvm-4-vmn.ravnalaska.net:/kvm2/gluster/data2/brick  49152     0          Y       146038
> NFS Server on localhost                                       2049      0          Y       40844
> Self-heal Daemon on localhost                                 N/A       N/A        Y       40865
> NFS Server on fai-kvm-2-gfs.ravnalaska.net                    2049      0          Y       99905
> Self-heal Daemon on fai-kvm-2-gfs.ravnalaska.net              N/A       N/A        Y       99915
> NFS Server on fai-kvm-4-gfs.ravnalaska.net                    2049      0          Y       176305
> Self-heal Daemon on fai-kvm-4-gfs.ravnalaska.net              N/A       N/A        Y       176326
> NFS Server on fai-kvm-3-gfs.ravnalaska.net                    2049      0          Y       226271
> Self-heal Daemon on fai-kvm-3-gfs.ravnalaska.net              N/A       N/A        Y       226287
> 
> Task Status of Volume data2
> -----------------------------------------------------------------------------------------------
> There are no active volume tasks
> 
> 
> [root@fai-kvm-1-gfs admin]# gluster volume info data2
> 
> Volume Name: data2
> Type: Striped-Replicate
> Volume ID: 20f85c9a-541b-4df4-9dba-44c5179bbfb0
> Status: Started
> Number of Bricks: 1 x 2 x 2 = 4
> Transport-type: tcp
> Bricks:
> Brick1: fai-kvm-1-vmn.ravnalaska.net:/kvm2/gluster/data2/brick
> Brick2: fai-kvm-2-vmn.ravnalaska.net:/kvm2/gluster/data2/brick
> Brick3: fai-kvm-3-vmn.ravnalaska.net:/kvm2/gluster/data2/brick
> Brick4: fai-kvm-4-vmn.ravnalaska.net:/kvm2/gluster/data2/brick
> Options Reconfigured:
> performance.readdir-ahead: on
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: enable
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> storage.owner-uid: 36
> storage.owner-gid: 36
> 
> 
> See attached file for the mount log.
> 
> 
> Striped-Replicate is no longer supported in GlusterFS upstream. Instead, you 
> should be using a Distributed-Replicate volume with sharding enabled. Also, when 
> using a gluster volume as a storage domain, it is recommended to use replica 3.
> 
> From the mount logs, there is no indication as to why the volume is unmounted 
> frequently. Could you try again with a replica 3 volume that has sharding 
> enabled?
>  
> 
> Gary
> 
> 
> 
> Gary Pedretty  g...@ravnalaska.net 
> 
> Systems Manager  www.flyravn.com 
> 
> Ravn Alaska   /\

Re: [ovirt-users] Multiple Data Storage Domains

2016-11-07 Thread Sahina Bose
On Mon, Nov 7, 2016 at 3:27 PM, Gary Pedretty  wrote:

> [root@fai-kvm-1-gfs admin]# gluster volume status data2
> Status of volume: data2
> Gluster process TCP Port  RDMA Port  Online
>  Pid
> 
> --
> Brick fai-kvm-1-vmn.ravnalaska.net:/kvm2/gl
> uster/data2/brick   49156 0  Y
> 3484
> Brick fai-kvm-2-vmn.ravnalaska.net:/kvm2/gl
> uster/data2/brick   49156 0  Y
> 34791
> Brick fai-kvm-3-vmn.ravnalaska.net:/kvm2/gl
> uster/data2/brick   49156 0  Y
> 177340
> Brick fai-kvm-4-vmn.ravnalaska.net:/kvm2/gl
> uster/data2/brick   49152 0  Y
> 146038
> NFS Server on localhost 2049  0  Y
> 40844
> Self-heal Daemon on localhost   N/A   N/AY
> 40865
> NFS Server on fai-kvm-2-gfs.ravnalaska.net  2049  0  Y
> 99905
> Self-heal Daemon on fai-kvm-2-gfs.ravnalask
> a.net   N/A   N/AY
> 99915
> NFS Server on fai-kvm-4-gfs.ravnalaska.net  2049  0  Y
> 176305
> Self-heal Daemon on fai-kvm-4-gfs.ravnalask
> a.net   N/A   N/AY
> 176326
> NFS Server on fai-kvm-3-gfs.ravnalaska.net  2049  0  Y
> 226271
> Self-heal Daemon on fai-kvm-3-gfs.ravnalask
> a.net   N/A   N/AY
> 226287
>
> Task Status of Volume data2
> 
> --
> There are no active volume tasks
>
>
> [root@fai-kvm-1-gfs admin]# gluster volume info data2
>
> Volume Name: data2
> Type: Striped-Replicate
> Volume ID: 20f85c9a-541b-4df4-9dba-44c5179bbfb0
> Status: Started
> Number of Bricks: 1 x 2 x 2 = 4
> Transport-type: tcp
> Bricks:
> Brick1: fai-kvm-1-vmn.ravnalaska.net:/kvm2/gluster/data2/brick
> Brick2: fai-kvm-2-vmn.ravnalaska.net:/kvm2/gluster/data2/brick
> Brick3: fai-kvm-3-vmn.ravnalaska.net:/kvm2/gluster/data2/brick
> Brick4: fai-kvm-4-vmn.ravnalaska.net:/kvm2/gluster/data2/brick
> Options Reconfigured:
> performance.readdir-ahead: on
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: enable
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> storage.owner-uid: 36
> storage.owner-gid: 36
>
>
> See attached file for the mount log.
>


Striped-Replicate is no longer supported in GlusterFS upstream. Instead,
you should be using a Distributed-Replicate volume with sharding enabled. Also,
when using a gluster volume as a storage domain, it is recommended to use
replica 3.

From the mount logs, there is no indication as to why the volume is
unmounted frequently. Could you try again with a replica 3 volume that has
sharding enabled?
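
(A rough sketch of the suggested reconfiguration, assuming a replica 3 volume
named data2 already exists; the "virt" option group is shipped under
/var/lib/glusterd/groups/ on oVirt/RHV hosts, but check your build before
relying on it:)

gluster volume set data2 group virt        # apply the virt profile, if the group file is present
gluster volume set data2 features.shard on # shard large VM images
gluster volume set data2 storage.owner-uid 36   # vdsm
gluster volume set data2 storage.owner-gid 36   # kvm
gluster volume info data2                  # Type should read Replicate or Distributed-Replicate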


>
> Gary
>
>
> 
> Gary Pedretty  g...@ravnalaska.net
> 
> Systems Manager  www.flyravn.com
> Ravn Alaska   /\907-450-7251
> 5245 Airport Industrial Road /  \/\ 907-450-7238 fax
> Fairbanks, Alaska  99709/\  /\ \ Second greatest commandment
> Serving All of Alaska  /  \/  /\  \ \/\   “Love your neighbor as
> Really loving the record green up date! Summmer!!   yourself” Matt 22:39
> 
>
> On Nov 6, 2016, at 9:50 PM, Sahina Bose  wrote:
>
> However, your volume configuration seems suspect: "stripe 2 replica 2". Can
> you provide gluster volume info of your second storage domain gluster
> volume? The mount logs of the volume (under 
> /var/log/glusterfs/rhev-datacenter...log)
> from the host where the volume is being mounted will also help.
>
>
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Multiple Data Storage Domains

2016-11-07 Thread Gary Pedretty
[root@fai-kvm-1-gfs admin]# gluster volume status data2
Status of volume: data2
Gluster process                                               TCP Port  RDMA Port  Online  Pid
-----------------------------------------------------------------------------------------------
Brick fai-kvm-1-vmn.ravnalaska.net:/kvm2/gluster/data2/brick  49156     0          Y       3484
Brick fai-kvm-2-vmn.ravnalaska.net:/kvm2/gluster/data2/brick  49156     0          Y       34791
Brick fai-kvm-3-vmn.ravnalaska.net:/kvm2/gluster/data2/brick  49156     0          Y       177340
Brick fai-kvm-4-vmn.ravnalaska.net:/kvm2/gluster/data2/brick  49152     0          Y       146038
NFS Server on localhost                                       2049      0          Y       40844
Self-heal Daemon on localhost                                 N/A       N/A        Y       40865
NFS Server on fai-kvm-2-gfs.ravnalaska.net                    2049      0          Y       99905
Self-heal Daemon on fai-kvm-2-gfs.ravnalaska.net              N/A       N/A        Y       99915
NFS Server on fai-kvm-4-gfs.ravnalaska.net                    2049      0          Y       176305
Self-heal Daemon on fai-kvm-4-gfs.ravnalaska.net              N/A       N/A        Y       176326
NFS Server on fai-kvm-3-gfs.ravnalaska.net                    2049      0          Y       226271
Self-heal Daemon on fai-kvm-3-gfs.ravnalaska.net              N/A       N/A        Y       226287

Task Status of Volume data2
-----------------------------------------------------------------------------------------------
There are no active volume tasks

[root@fai-kvm-1-gfs admin]# gluster volume info data2

Volume Name: data2
Type: Striped-Replicate
Volume ID: 20f85c9a-541b-4df4-9dba-44c5179bbfb0
Status: Started
Number of Bricks: 1 x 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: fai-kvm-1-vmn.ravnalaska.net:/kvm2/gluster/data2/brick
Brick2: fai-kvm-2-vmn.ravnalaska.net:/kvm2/gluster/data2/brick
Brick3: fai-kvm-3-vmn.ravnalaska.net:/kvm2/gluster/data2/brick
Brick4: fai-kvm-4-vmn.ravnalaska.net:/kvm2/gluster/data2/brick
Options Reconfigured:
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36

See attached file for the mount log.

Gary
more rhev-data-center-mnt-glusterSD-glustermount2:data2.log-20161106
[2016-11-04 22:54:32.118587] I [MSGID: 100030] [glusterfsd.c:2338:main] 
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.16 
(args: /usr/sbin/glusterfs --volfile-server=glustermou
nt2 --volfile-server=fai-kvm-1-vmn.ravnalaska.net 
--volfile-server=fai-kvm-2-vmn.ravnalaska.net 
--volfile-server=fai-kvm-3-vmn.ravnalaska.net 
--volfile-server=fai-kvm-4-vmn.ravnalaska.net --volfile-id
=data2 /rhev/data-center/mnt/glusterSD/glustermount2:data2)
[2016-11-04 22:54:32.128807] I [MSGID: 101190] 
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with 
index 1
[2016-11-04 22:54:32.138959] I [MSGID: 101190] 
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with 
index 2
[2016-11-04 22:54:32.139878] I [MSGID: 114020] [client.c:2113:notify] 
0-data2-client-0: parent translators are ready, attempting connect on transport
[2016-11-04 22:54:32.142057] I [MSGID: 114020] [client.c:2113:notify] 
0-data2-client-1: parent translators are ready, attempting connect on transport
[2016-11-04 22:54:32.142331] I [rpc-clnt.c:1960:rpc_clnt_reconfig] 
0-data2-client-0: changing port to 49156 (from 0)
[2016-11-04 22:54:32.143961] I [MSGID: 114020] [client.c:2113:notify] 
0-data2-client-2: parent translators are ready, attempting connect on transport
[2016-11-04 22:54:32.146123] I [MSGID: 114057] 
[client-handshake.c:1437:select_server_supported_programs] 0-data2-client-0: 
Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-11-04 22:54:32.146308] I [rpc-clnt.c:1960:rpc_clnt_reconfig] 
0-data2-client-1: changing port to 49156 (from 0)
[2016-11-04 22:54:32.147618] I [MSGID: 114020] [client.c:2113:notify] 
0-data2-client-3: parent translators are ready, attempting connect on transport
[2016-11-04 22:54:32.149874] I [MSGID: 114046] 
[client-handshake.c:1213:client_setvolume_cbk] 0-data2-client-0: Connected to 
data2-client-0, attached to remote volume '/kvm2/gluster/data2/brick'.
[2016-11-04 22:54:32.149891] I [MSGID: 114047] 
[client-handshake.c:1224:client_setvolume_cbk] 0-data2-client-0: Server and 
Client lk-version numbers are not same, reopening the fds
[2016-11-04 22:54:32.149941] I [MSGID: 108005] [afr-common.c:4299:afr_notify] 
0-data2-replicate-0: Subvolume 'data2-client-0' came back up; going online.
[2016-11-04 22:54:32.15] I [MSGID: 114035] 
[client-handshake.c:193:client_set_lk_version_cbk] 0-data2-client-0: Server lk 
version = 1
[2016-11-04 22:54:32.150080] I [MSGID: 114057] 

Re: [ovirt-users] Multiple Data Storage Domains

2016-11-06 Thread Sahina Bose
On Mon, Nov 7, 2016 at 11:20 AM, Gary Pedretty  wrote:

> As a storage domain, this gluster volume will not work whether it is
> preallocated or thin provision.   It will work as a straight gluster volume
> mounted directly to any VM on the ovirt Cluster, or any physical machine,
> just not as a data storage domain in the Data Center.
>
> Are there restrictions to having more than one data storage domain that
> has its gluster volumes on the same hosts that are also part of the Data
> Center and Cluster?
>

There are no such restrictions.

However, your volume configuration seems suspect: "stripe 2 replica 2". Can
you provide gluster volume info of your second storage domain gluster
volume? The mount logs of the volume (under
/var/log/glusterfs/rhev-datacenter...log) from the host where the
volume is being mounted will also help.
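
(For completeness, the commands that produce the requested output; the
mount-log filename is derived from the mount point, and the one shown here is
the name that appears elsewhere in this thread:)

gluster volume info data2
gluster volume status data2
# On the host that mounts the storage domain:
less /var/log/glusterfs/rhev-data-center-mnt-glusterSD-glustermount2:data2.log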


>
>
> Gary
>
>
> 
> Gary Pedretty  g...@ravnalaska.net
> 
> Systems Manager  www.flyravn.com
> Ravn Alaska   /\907-450-7251
> 5245 Airport Industrial Road /  \/\ 907-450-7238 fax
> Fairbanks, Alaska  99709/\  /\ \ Second greatest commandment
> Serving All of Alaska  /  \/  /\  \ \/\   “Love your neighbor as
> Really loving the record green up date! Summmer!!   yourself” Matt 22:39
> 
>
> On Nov 6, 2016, at 6:28 AM, Maor Lipchuk  wrote:
>
> Hi Gary,
>
> Do you have other disks on this storage domain?
> Have you tried to use other VMs with disks on this storage domain?
> Is this disk preallocated? If not, can you try to create a preallocated
> disk and retry?
>
> Regards,
> Maor
>
>
>
> On Sat, Nov 5, 2016 at 2:28 AM, Gary Pedretty  wrote:
>
>> I am having an issue in a Hosted Engine GlusterFS setup.   I have 4 hosts
> in a cluster, with the Engine being hosted on the Cluster.  This follows
>> the pattern shown in the docs for a glusterized setup, except that I have 4
>> hosts.   I have engine, data, iso and export storage domains all as
>> glusterfs on a replica 3 glusterfs on the first 3 hosts.  These gluster
>> volumes are running on an SSD Hardware Raid 6, which is identical on all
>> the hosts.  All the hosts have a second Raid 6 Array with Physical Hard
>> Drives and I have created a second data storage domain as a glusterfs
>> across all 4 hosts as a stripe 2 replica 2 and have added it to the Data
>> Center.  However if I use this second Storage Domain as the boot disk for a
>> VM, or as second disk for a VM that is already running, the VM will become
>> non-responsive as soon as the VM starts using this disk.   Happens during
>> the OS install if the VM is using this storage domain for its boot disk, or
>> if I try copying anything large to it when it is a second disk for a VM
>> that has its boot drive on the Master Data Storage Domain.
>>
>> If I mount the gluster volume that is this second storage domain on one
>> of the hosts directly or any other machine on my local network, the gluster
>> volume works fine.  It is only when it is used as a storage domain (second
>> data domain) on VMs in the cluster.
>>
>> Once the vm becomes non-responsive it cannot be stopped, removed or
>> destroyed without restarting the host machine that the VM is currently
>> running on.   The 4 hosts are connected via 10gig ethernet, so should not
>> be a network issue.
>>
>>
>> Any ideas?
>>
>> Gary
>>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Multiple Data Storage Domains

2016-11-06 Thread Gary Pedretty
As a storage domain, this gluster volume will not work whether it is 
preallocated or thin provision.   It will work as a straight gluster volume 
mounted directly to any VM on the ovirt Cluster, or any physical machine, just 
not as a data storage domain in the Data Center.

Are there restrictions to having more than one data storage domain that has its 
gluster volumes on the same hosts that are also part of the Data Center and 
Cluster?


Gary



Gary Pedretty  g...@ravnalaska.net 

Systems Manager  www.flyravn.com 

Ravn Alaska   /\907-450-7251
5245 Airport Industrial Road /  \/\ 907-450-7238 fax
Fairbanks, Alaska  99709/\  /\ \ Second greatest commandment
Serving All of Alaska  /  \/  /\  \ \/\   “Love your neighbor as
Really loving the record green up date! Summmer!!   yourself” Matt 22:39


> On Nov 6, 2016, at 6:28 AM, Maor Lipchuk  wrote:
> 
> Hi Gary,
> 
> Do you have other disks on this storage domain?
> Have you tried to use other VMs with disks on this storage domain?
> Is this disk preallocated? If not, can you try to create a preallocated
> disk and retry?
> 
> Regards,
> Maor
> 
> 
> 
> On Sat, Nov 5, 2016 at 2:28 AM, Gary Pedretty wrote:
> I am having an issue in a Hosted Engine GlusterFS setup.   I have 4 hosts in 
> a cluster, with the Engine being hosted on the Cluster.  This follows the 
> pattern shown in the docs for a glusterized setup, except that I have 4 
> hosts.   I have engine, data, iso and export storage domains all as glusterfs 
> on a replica 3 glusterfs on the first 3 hosts.  These gluster volumes are 
> running on an SSD Hardware Raid 6, which is identical on all the hosts.  All 
> the hosts have a second Raid 6 Array with Physical Hard Drives and I have 
> created a second data storage domain as a glusterfs across all 4 hosts as a 
> stripe 2 replica 2 and have added it to the Data Center.  However if I use 
> this second Storage Domain as the boot disk for a VM, or as second disk for a 
> VM that is already running, the VM will become non-responsive as soon as the 
> VM starts using this disk.   Happens during the OS install if the VM is using 
> this storage domain for its boot disk, or if I try copying anything large to 
> it when it is a second disk for a VM that has its boot drive on the Master 
> Data Storage Domain. 
> 
> If I mount the gluster volume that is this second storage domain on one of 
> the hosts directly or any other machine on my local network, the gluster 
> volume works fine.  It is only when it is used as a storage domain (second 
> data domain) on VMs in the cluster.
> 
> Once the vm becomes non-responsive it cannot be stopped, removed or destroyed 
> without restarting the host machine that the VM is currently running on.   
> The 4 hosts are connected via 10gig ethernet, so should not be a network 
> issue.
> 
> 
> Any ideas?
> 
> Gary

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Multiple Data Storage Domains

2016-11-06 Thread Maor Lipchuk
Hi Gary,

Do you have other disks on this storage domain?
Have you tried to use other VMs with disks on this storage domain?
Is this disk preallocated? If not, can you try to create a preallocated
disk and retry?

Regards,
Maor
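
(Aside on the preallocation question: "preallocated" in oVirt roughly
corresponds to a fully allocated image and "thin provision" to a sparse one.
A hedged illustration at the qemu-img level only; the paths and sizes are
made up:)

# Fully preallocated raw image, all space reserved up front:
qemu-img create -f raw -o preallocation=full /tmp/example-prealloc.raw 50G
# Thin-provisioned image that grows on demand:
qemu-img create -f qcow2 /tmp/example-thin.qcow2 50G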



On Sat, Nov 5, 2016 at 2:28 AM, Gary Pedretty  wrote:

> I am having an issue in a Hosted Engine GlusterFS setup.   I have 4 hosts
> in a cluster, with the Engine being hosted on the Cluster.  This follows
> the pattern shown in the docs for a glusterized setup, except that I have 4
> hosts.   I have engine, data, iso and export storage domains all as
> glusterfs on a replica 3 glusterfs on the first 3 hosts.  These gluster
> volumes are running on an SSD Hardware Raid 6, which is identical on all
> the hosts.  All the hosts have a second Raid 6 Array with Physical Hard
> Drives and I have created a second data storage domain as a glusterfs
> across all 4 hosts as a stripe 2 replica 2 and have added it to the Data
> Center.  However if I use this second Storage Domain as the boot disk for a
> VM, or as second disk for a VM that is already running, the VM will become
> non-responsive as soon as the VM starts using this disk.   Happens during
> the OS install if the VM is using this storage domain for its boot disk, or
> if I try copying anything large to it when it is a second disk for a VM
> that has its boot drive on the Master Data Storage Domain.
>
> If I mount the gluster volume that is this second storage domain on one of
> the hosts directly or any other machine on my local network, the gluster
> volume works fine.  It is only when it is used as a storage domain (second
> data domain) on VMs in the cluster.
>
> Once the vm becomes non-responsive it cannot be stopped, removed or
> destroyed without restarting the host machine that the VM is currently
> running on.   The 4 hosts are connected via 10gig ethernet, so should not
> be a network issue.
>
>
> Any ideas?
>
> Gary
>
> 
> Gary Pedretty  g...@ravnalaska.net
> 
> Systems Manager  www.flyravn.com
> Ravn Alaska   /\907-450-7251
> 5245 Airport Industrial Road /  \/\ 907-450-7238 fax
> Fairbanks, Alaska  99709/\  /\ \ Second greatest commandment
> Serving All of Alaska  /  \/  /\  \ \/\   “Love your neighbor as
> Really loving the record green up date! Summmer!!   yourself” Matt 22:39
> 
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Multiple Data Storage Domains

2016-11-04 Thread Gary Pedretty
I am having an issue in a Hosted Engine GlusterFS setup.   I have 4 hosts in a 
cluster, with the Engine being hosted on the Cluster.  This follows the 
pattern shown in the docs for a glusterized setup, except that I have 4 hosts.  
 I have engine, data, iso and export storage domains all as glusterfs on a 
replica 3 glusterfs on the first 3 hosts.  These gluster volumes are running on 
an SSD Hardware Raid 6, which is identical on all the hosts.  All the hosts 
have a second Raid 6 Array with Physical Hard Drives and I have created a 
second data storage domain as a glusterfs across all 4 hosts as a stripe 2 
replica 2 and have added it to the Data Center.  However if I use this second 
Storage Domain as the boot disk for a VM, or as second disk for a VM that is 
already running, the VM will become non-responsive as soon as the VM starts 
using this disk.   Happens during the OS install if the VM is using this 
storage domain for its boot disk, or if I try copying anything large to it when 
it is a second disk for a VM that has its boot drive on the Master Data Storage 
Domain.  

If I mount the gluster volume that is this second storage domain on one of the 
hosts directly or any other machine on my local network, the gluster volume 
works fine.  It is only when it is used as a storage domain (second data 
domain) on VMs in the cluster.

Once the vm becomes non-responsive it cannot be stopped, removed or destroyed 
without restarting the host machine that the VM is currently running on.   The 
4 hosts are connected via 10gig ethernet, so should not be a network issue.


Any ideas?

Gary


Gary Pedretty  g...@ravnalaska.net 

Systems Manager  www.flyravn.com 

Ravn Alaska   /\907-450-7251
5245 Airport Industrial Road /  \/\ 907-450-7238 fax
Fairbanks, Alaska  99709/\  /\ \ Second greatest commandment
Serving All of Alaska  /  \/  /\  \ \/\   “Love your neighbor as
Really loving the record green up date! Summmer!!   yourself” Matt 22:39

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users