Re: [ovirt-users] Gluster and oVirt 4.0 questions

2017-04-25 Thread Sahina Bose
On Tue, Apr 25, 2017 at 9:18 PM, Jim Kusznir  wrote:

> So with arbiter, I actually only have two copies of the data... Does arbiter
> at least have a checksum or something to detect corruption of a copy (like
> an old RAID-4 disk configuration)?
>

Yes, the arbiter brick stores metadata about the files, which is used to
determine the good copy on the data bricks in case of a conflict.
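
For illustration, an arbiter volume with a layout like yours would be created
along these lines (a sketch only - the brick paths simply mirror your existing
setup):

  # replica 3 with 1 arbiter: two full data copies plus one metadata-only brick
  gluster volume create data replica 3 arbiter 1 \
      ovirt1.nwfiber.com:/gluster/brick2/data \
      ovirt2.nwfiber.com:/gluster/brick2/data \
      ovirt3.nwfiber.com:/gluster/brick2/data

The last brick listed becomes the arbiter and holds only file names, metadata
and the extended attributes used to arbitrate between the two data copies.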


>
> OK... A related question:  Is there a way to set up an offsite gluster
> storage server to mirror the contents of my main server?  As "fire"
> insurance, basically?  (Eventually, I'd like to have an "offsite" DR
> cluster, but I don't have the resources or scale for that yet.)
>
> What I'd like to do is place a basic storage server somewhere else and
> have it sync any gluster data changes on a regular basis, and be usable to
> repopulate storage should I lose all of my current cluster (e.g., a building
> fire or theft).
>

Yes, the geo-replication feature can help with that. There's a remote data
sync feature for gluster storage domains that helps with this: you can set it
up so that data from your storage domain is regularly synced to a remote
gluster volume while ensuring data consistency. The remote gluster volume
does not have to be a replica 3 volume.
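
For reference, the gluster layer of such a setup is configured roughly like
this (a sketch only - the remote host and volume names are placeholders, the
remote volume must already exist, and passwordless SSH from one of your nodes
to the remote host is a prerequisite):

  # run on one of the master nodes; 'data' is the local volume to protect,
  # 'backup.example.com::backupvol' is a hypothetical remote host and volume
  gluster system:: execute gsec_create
  gluster volume geo-replication data backup.example.com::backupvol create push-pem
  gluster volume geo-replication data backup.example.com::backupvol start
  gluster volume geo-replication data backup.example.com::backupvol status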


>
> From what I hear, gluster has amazing power, but I have a hard time
> finding documentation at "the right level" to be useful.  I've found some
> very basic introductory guides, then some very advanced guides that require
> extensive knowledge of gluster already.  Something in the middle that explains
> some of these questions (like the arbiter and migration strategies,
> geo-replication, etc., and how to deploy them) is absent (or at least, I
> haven't found it yet).  I still feel like I'm using something I don't
> understand, and the only avenue I have to learn more is to ask questions
> here, as the docs aren't at an accessible level.
>

Thanks for the feedback. Are you looking for documentation organized by use
case?


>
> Thanks!
> --Jim
>
> On Mon, Apr 3, 2017 at 10:34 PM, Sahina Bose  wrote:
>
>>
>>
>> On Sat, Apr 1, 2017 at 10:32 PM, Jim Kusznir  wrote:
>>
>>> Thank you!
>>>
>>> Here's the output of gluster volume info:
>>> [root@ovirt1 ~]# gluster volume info
>>>
>>> Volume Name: data
>>> Type: Replicate
>>> Volume ID: e670c488-ac16-4dd1-8bd3-e43b2e42cc59
>>> Status: Started
>>> Number of Bricks: 1 x (2 + 1) = 3
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: ovirt1.nwfiber.com:/gluster/brick2/data
>>> Brick2: ovirt2.nwfiber.com:/gluster/brick2/data
>>> Brick3: ovirt3.nwfiber.com:/gluster/brick2/data (arbiter)
>>> Options Reconfigured:
>>> performance.strict-o-direct: on
>>> nfs.disable: on
>>> user.cifs: off
>>> network.ping-timeout: 30
>>> cluster.shd-max-threads: 6
>>> cluster.shd-wait-qlength: 1
>>> cluster.locking-scheme: granular
>>> cluster.data-self-heal-algorithm: full
>>> performance.low-prio-threads: 32
>>> features.shard-block-size: 512MB
>>> features.shard: on
>>> storage.owner-gid: 36
>>> storage.owner-uid: 36
>>> cluster.server-quorum-type: server
>>> cluster.quorum-type: auto
>>> network.remote-dio: enable
>>> cluster.eager-lock: enable
>>> performance.stat-prefetch: off
>>> performance.io-cache: off
>>> performance.read-ahead: off
>>> performance.quick-read: off
>>> performance.readdir-ahead: on
>>> server.allow-insecure: on
>>>
>>> Volume Name: engine
>>> Type: Replicate
>>> Volume ID: 87ad86b9-d88b-457e-ba21-5d3173c612de
>>> Status: Started
>>> Number of Bricks: 1 x (2 + 1) = 3
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: ovirt1.nwfiber.com:/gluster/brick1/engine
>>> Brick2: ovirt2.nwfiber.com:/gluster/brick1/engine
>>> Brick3: ovirt3.nwfiber.com:/gluster/brick1/engine (arbiter)
>>> Options Reconfigured:
>>> performance.readdir-ahead: on
>>> performance.quick-read: off
>>> performance.read-ahead: off
>>> performance.io-cache: off
>>> performance.stat-prefetch: off
>>> cluster.eager-lock: enable
>>> network.remote-dio: off
>>> cluster.quorum-type: auto
>>> cluster.server-quorum-type: server
>>> storage.owner-uid: 36
>>> storage.owner-gid: 36
>>> features.shard: on
>>> features.shard-block-size: 512MB
>>> performance.low-prio-threads: 32
>>> cluster.data-self-heal-algorithm: full
>>> cluster.locking-scheme: granular
>>> cluster.shd-wait-qlength: 1
>>> cluster.shd-max-threads: 6
>>> network.ping-timeout: 30
>>> user.cifs: off
>>> nfs.disable: on
>>> performance.strict-o-direct: on
>>>
>>> Volume Name: export
>>> Type: Replicate
>>> Volume ID: 04ee58c7-2ba1-454f-be99-26ac75a352b4
>>> Status: Stopped
>>> Number of Bricks: 1 x (2 + 1) = 3
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: ovirt1.nwfiber.com:/gluster/brick3/export
>>> Brick2: ovirt2.nwfiber.com:/gluster/brick3/export
>>> Brick3: ovirt3.nwfiber.com:/gluster/brick3/export (arbiter)
>>> Options Reconfigured:
>>> performance.readdir-ahead: on
>>> performance.quick-read: off
>>> performance.read-ahead: off
>>> 

Re: [ovirt-users] Gluster and oVirt 4.0 questions

2017-04-03 Thread Sahina Bose
On Sat, Apr 1, 2017 at 10:32 PM, Jim Kusznir  wrote:

> Thank you!
>
> Here's the output of gluster volume info:
> [root@ovirt1 ~]# gluster volume info
>
> Volume Name: data
> Type: Replicate
> Volume ID: e670c488-ac16-4dd1-8bd3-e43b2e42cc59
> Status: Started
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ovirt1.nwfiber.com:/gluster/brick2/data
> Brick2: ovirt2.nwfiber.com:/gluster/brick2/data
> Brick3: ovirt3.nwfiber.com:/gluster/brick2/data (arbiter)
> Options Reconfigured:
> performance.strict-o-direct: on
> nfs.disable: on
> user.cifs: off
> network.ping-timeout: 30
> cluster.shd-max-threads: 6
> cluster.shd-wait-qlength: 1
> cluster.locking-scheme: granular
> cluster.data-self-heal-algorithm: full
> performance.low-prio-threads: 32
> features.shard-block-size: 512MB
> features.shard: on
> storage.owner-gid: 36
> storage.owner-uid: 36
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> network.remote-dio: enable
> cluster.eager-lock: enable
> performance.stat-prefetch: off
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> performance.readdir-ahead: on
> server.allow-insecure: on
>
> Volume Name: engine
> Type: Replicate
> Volume ID: 87ad86b9-d88b-457e-ba21-5d3173c612de
> Status: Started
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ovirt1.nwfiber.com:/gluster/brick1/engine
> Brick2: ovirt2.nwfiber.com:/gluster/brick1/engine
> Brick3: ovirt3.nwfiber.com:/gluster/brick1/engine (arbiter)
> Options Reconfigured:
> performance.readdir-ahead: on
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: off
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard: on
> features.shard-block-size: 512MB
> performance.low-prio-threads: 32
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-wait-qlength: 1
> cluster.shd-max-threads: 6
> network.ping-timeout: 30
> user.cifs: off
> nfs.disable: on
> performance.strict-o-direct: on
>
> Volume Name: export
> Type: Replicate
> Volume ID: 04ee58c7-2ba1-454f-be99-26ac75a352b4
> Status: Stopped
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ovirt1.nwfiber.com:/gluster/brick3/export
> Brick2: ovirt2.nwfiber.com:/gluster/brick3/export
> Brick3: ovirt3.nwfiber.com:/gluster/brick3/export (arbiter)
> Options Reconfigured:
> performance.readdir-ahead: on
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: off
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard: on
> features.shard-block-size: 512MB
> performance.low-prio-threads: 32
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-wait-qlength: 1
> cluster.shd-max-threads: 6
> network.ping-timeout: 30
> user.cifs: off
> nfs.disable: on
> performance.strict-o-direct: on
>
> Volume Name: iso
> Type: Replicate
> Volume ID: b1ba15f5-0f0f-4411-89d0-595179f02b92
> Status: Started
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ovirt1.nwfiber.com:/gluster/brick4/iso
> Brick2: ovirt2.nwfiber.com:/gluster/brick4/iso
> Brick3: ovirt3.nwfiber.com:/gluster/brick4/iso (arbiter)
> Options Reconfigured:
> performance.readdir-ahead: on
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: off
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard: on
> features.shard-block-size: 512MB
> performance.low-prio-threads: 32
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-wait-qlength: 1
> cluster.shd-max-threads: 6
> network.ping-timeout: 30
> user.cifs: off
> nfs.disable: on
> performance.strict-o-direct: on
>
>
> The node marked as (arbiter) on all of the bricks is the node that is not
> using any of its disk space.
>

This is by design - the arbiter brick only stores metadata and hence saves
on storage.
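
You can see this on the bricks themselves, e.g. (brick path taken from your
volume info; the exact numbers will of course differ):

  # on ovirt1/ovirt2 this shows roughly the full data set;
  # on ovirt3 (the arbiter) it stays close to zero
  du -sh /gluster/brick2/data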


>
> The engine domain is the volume dedicated for storing the hosted engine.
> Here's some LVM info:
>
>   --- Logical volume ---
>   LV Path                /dev/gluster/engine
>   LV Name                engine
>   VG Name                gluster
>   LV UUID                4gZ1TF-a1PX-i1Qx-o4Ix-MjEf-0HD8-esm3wg
>   LV Write Access        read/write
>   LV Creation host, time ovirt1.nwfiber.com, 2016-12-31 14:40:00 -0800
>   LV Status              available
>   # open                 1
>   LV Size                25.00 GiB
>   Current LE             6400

Re: [ovirt-users] Gluster and oVirt 4.0 questions

2017-04-01 Thread Jim Kusznir
Based on the suggestions here, I successfully removed the unused export
gluster volume and allocated the otherwise unassigned space to my data
volume, then used xfs_growfs to realize the new size.  This should hold me
for a while longer before building a "proper" storage solution.
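
For anyone following along, the rough shape of it was as follows (the LV
names below are illustrative rather than an exact transcript - my layout
uses a thin pool, so the exact lvextend invocation may differ):

  # remove the export storage domain in oVirt first, then the gluster volume
  gluster volume stop export
  gluster volume delete export

  # on each node: drop the now-unused export LV, grow the data LV,
  # then grow the XFS filesystem backing the data brick
  umount /gluster/brick3
  lvremove /dev/gluster/export            # hypothetical LV name
  lvextend -L +150G /dev/gluster/data     # hypothetical LV name and size
  xfs_growfs /gluster/brick2              # data brick mount point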

--Jim

On Sat, Apr 1, 2017 at 10:02 AM, Jim Kusznir  wrote:

> Thank you!
>
> Here's the output of gluster volume info:
> [root@ovirt1 ~]# gluster volume info
>
> Volume Name: data
> Type: Replicate
> Volume ID: e670c488-ac16-4dd1-8bd3-e43b2e42cc59
> Status: Started
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ovirt1.nwfiber.com:/gluster/brick2/data
> Brick2: ovirt2.nwfiber.com:/gluster/brick2/data
> Brick3: ovirt3.nwfiber.com:/gluster/brick2/data (arbiter)
> Options Reconfigured:
> performance.strict-o-direct: on
> nfs.disable: on
> user.cifs: off
> network.ping-timeout: 30
> cluster.shd-max-threads: 6
> cluster.shd-wait-qlength: 1
> cluster.locking-scheme: granular
> cluster.data-self-heal-algorithm: full
> performance.low-prio-threads: 32
> features.shard-block-size: 512MB
> features.shard: on
> storage.owner-gid: 36
> storage.owner-uid: 36
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> network.remote-dio: enable
> cluster.eager-lock: enable
> performance.stat-prefetch: off
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> performance.readdir-ahead: on
> server.allow-insecure: on
>
> Volume Name: engine
> Type: Replicate
> Volume ID: 87ad86b9-d88b-457e-ba21-5d3173c612de
> Status: Started
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ovirt1.nwfiber.com:/gluster/brick1/engine
> Brick2: ovirt2.nwfiber.com:/gluster/brick1/engine
> Brick3: ovirt3.nwfiber.com:/gluster/brick1/engine (arbiter)
> Options Reconfigured:
> performance.readdir-ahead: on
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: off
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard: on
> features.shard-block-size: 512MB
> performance.low-prio-threads: 32
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-wait-qlength: 1
> cluster.shd-max-threads: 6
> network.ping-timeout: 30
> user.cifs: off
> nfs.disable: on
> performance.strict-o-direct: on
>
> Volume Name: export
> Type: Replicate
> Volume ID: 04ee58c7-2ba1-454f-be99-26ac75a352b4
> Status: Stopped
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ovirt1.nwfiber.com:/gluster/brick3/export
> Brick2: ovirt2.nwfiber.com:/gluster/brick3/export
> Brick3: ovirt3.nwfiber.com:/gluster/brick3/export (arbiter)
> Options Reconfigured:
> performance.readdir-ahead: on
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: off
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard: on
> features.shard-block-size: 512MB
> performance.low-prio-threads: 32
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-wait-qlength: 1
> cluster.shd-max-threads: 6
> network.ping-timeout: 30
> user.cifs: off
> nfs.disable: on
> performance.strict-o-direct: on
>
> Volume Name: iso
> Type: Replicate
> Volume ID: b1ba15f5-0f0f-4411-89d0-595179f02b92
> Status: Started
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ovirt1.nwfiber.com:/gluster/brick4/iso
> Brick2: ovirt2.nwfiber.com:/gluster/brick4/iso
> Brick3: ovirt3.nwfiber.com:/gluster/brick4/iso (arbiter)
> Options Reconfigured:
> performance.readdir-ahead: on
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: off
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard: on
> features.shard-block-size: 512MB
> performance.low-prio-threads: 32
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-wait-qlength: 1
> cluster.shd-max-threads: 6
> network.ping-timeout: 30
> user.cifs: off
> nfs.disable: on
> performance.strict-o-direct: on
>
>
> The node marked as (arbiter) on all of the bricks is the node that is not
> using any of its disk space.
>
> The engine domain is the volume dedicated for storing the hosted engine.
> Here's some LVM info:
>
>   --- Logical volume ---
>   LV Path                /dev/gluster/engine
>   LV Name                engine
>   VG Name                gluster
>   LV UUID                4gZ1TF-a1PX-i1Qx-o4Ix-MjEf-0HD8-esm3wg
>   LV Write Access        read/write

Re: [ovirt-users] Gluster and oVirt 4.0 questions

2017-04-01 Thread Jim Kusznir
Thank you!

Here's the output of gluster volume info:
[root@ovirt1 ~]# gluster volume info

Volume Name: data
Type: Replicate
Volume ID: e670c488-ac16-4dd1-8bd3-e43b2e42cc59
Status: Started
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick2/data
Brick2: ovirt2.nwfiber.com:/gluster/brick2/data
Brick3: ovirt3.nwfiber.com:/gluster/brick2/data (arbiter)
Options Reconfigured:
performance.strict-o-direct: on
nfs.disable: on
user.cifs: off
network.ping-timeout: 30
cluster.shd-max-threads: 6
cluster.shd-wait-qlength: 1
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on
server.allow-insecure: on

Volume Name: engine
Type: Replicate
Volume ID: 87ad86b9-d88b-457e-ba21-5d3173c612de
Status: Started
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick1/engine
Brick2: ovirt2.nwfiber.com:/gluster/brick1/engine
Brick3: ovirt3.nwfiber.com:/gluster/brick1/engine (arbiter)
Options Reconfigured:
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 6
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on

Volume Name: export
Type: Replicate
Volume ID: 04ee58c7-2ba1-454f-be99-26ac75a352b4
Status: Stopped
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick3/export
Brick2: ovirt2.nwfiber.com:/gluster/brick3/export
Brick3: ovirt3.nwfiber.com:/gluster/brick3/export (arbiter)
Options Reconfigured:
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 6
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on

Volume Name: iso
Type: Replicate
Volume ID: b1ba15f5-0f0f-4411-89d0-595179f02b92
Status: Started
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.nwfiber.com:/gluster/brick4/iso
Brick2: ovirt2.nwfiber.com:/gluster/brick4/iso
Brick3: ovirt3.nwfiber.com:/gluster/brick4/iso (arbiter)
Options Reconfigured:
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 6
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on


The node marked as (arbiter) on all of the bricks is the node that is not
using any of its disk space.

The engine domain is the volume dedicated for storing the hosted engine.
Here's some LVM info:

  --- Logical volume ---
  LV Path                /dev/gluster/engine
  LV Name                engine
  VG Name                gluster
  LV UUID                4gZ1TF-a1PX-i1Qx-o4Ix-MjEf-0HD8-esm3wg
  LV Write Access        read/write
  LV Creation host, time ovirt1.nwfiber.com, 2016-12-31 14:40:00 -0800
  LV Status              available
  # open                 1
  LV Size                25.00 GiB
  Current LE             6400
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2

  --- Logical volume ---
  LV Name                lvthinpool
  VG Name                gluster
  LV UUID                aaNtso-fN1T-ZAkY-kUF2-dlxf-0ap2-JAwSid
  LV Write Access        read/write
  LV Creation host, time ovirt1.nwfiber.com, 2016-12-31 14:40:09 -0800
  LV Pool metadata       lvthinpool_tmeta
  LV Pool 

Re: [ovirt-users] Gluster and oVirt 4.0 questions

2017-03-30 Thread Alexander Wels
On Thursday, March 30, 2017 3:53:47 AM EDT Liron Aravot wrote:
> Hi Jim, please see inline
> 
> On Thu, Mar 30, 2017 at 4:08 AM, Jim Kusznir  wrote:
> > hello:
> > 
> > I've been running my ovirt Version 4.0.5.5-1.el7.centos cluster for a
> > while now, and am now revisiting some aspects of it to ensure that I
> > have good reliability.
> > 
> > My cluster is a 3 node cluster, with gluster nodes running on each node.
> > After running my cluster a bit, I'm realizing I didn't do a very optimal
> > job of allocating the space on my disk to the different gluster mount
> > points.  Fortunately, they were created with LVM, so I'm hoping that I can
> > resize them without much trouble.
> > 
> > I have a domain for iso, domain for export, and domain for storage, all
> > thin provisioned; then a domain for the engine, not thin provisioned.  I'd
> > like to expand the storage domain, and possibly shrink the engine domain
> > and make that space also available to the main storage domain.  Is it as
> > simple as expanding the LVM partition, or are there more steps involved?
> > Do I need to take the node offline?
> 
> I didn't completely understand that part - what is the difference between
> the domain for storage and the domain for engine that you mentioned?
> 
> > second, I've noticed that the first two nodes seem to have a full copy of
> > the data (the disks are in use), but the 3rd node appears to not be using
> > any of its storage space...It is participating in the gluster cluster,
> > though.
> > 
> > Third, currently gluster shares the same network as the VM networks.  I'd
> > like to put it on its own network.  I'm not sure how to do this, as when I
> > tried to do it at install time, I never got the cluster to come online; I
> > had to make them share the same network to make that work.
> 
> I'm adding Sahina who may shed some light on the gluster question, I'd try
> on the gluster mailing list as well.
> 
> > Ovirt questions:
> > I've noticed that recently, I don't appear to be getting software updates
> > anymore.  I used to get update available notifications on my nodes every
> > few days; I haven't seen one for a couple weeks now.  Is something wrong?
> > 
> > I have a Windows 10 x64 VM.  I get a warning that my VM type does not
> > match the installed OS.  All works fine, but I've quadruple-checked that
> > it does match.  Is this a known bug?

Do you have the ovirt-guest-agent installed and running on that Windows VM? If
not, that is very likely the cause of the warning.

> 
> Arik, any info on that?
> 
> > I have a UPS that all three nodes and the networking are on.  It is a USB
> > UPS.  How should I best integrate monitoring in?  I could put a raspberry
> > pi up and then run NUT or similar on it, but is there a "better" way with
> > oVirt?
> > 
> > Thanks!
> > --Jim
> > 


Re: [ovirt-users] Gluster and oVirt 4.0 questions

2017-03-30 Thread Sahina Bose
On Thu, Mar 30, 2017 at 1:23 PM, Liron Aravot  wrote:

> Hi Jim, please see inline
>
> On Thu, Mar 30, 2017 at 4:08 AM, Jim Kusznir  wrote:
>
>> hello:
>>
>> I've been running my ovirt Version 4.0.5.5-1.el7.centos cluster for a
>> while now, and am now revisiting some aspects of it to ensure that I
>> have good reliability.
>>
>> My cluster is a 3 node cluster, with gluster nodes running on each node.
>> After running my cluster a bit, I'm realizing I didn't do a very optimal
>> job of allocating the space on my disk to the different gluster mount
>> points.  Fortunately, they were created with LVM, so I'm hoping that I can
>> resize them without much trouble.
>>
>> I have a domain for iso, domain for export, and domain for storage, all
>> thin provisioned; then a domain for the engine, not thin provisioned.  I'd
>> like to expand the storage domain, and possibly shrink the engine domain
>> and make that space also available to the main storage domain.  Is it as
>> simple as expanding the LVM partition, or are there more steps involved?
>> Do I need to take the node offline?
>>
>
> I didn't completely understand that part - what is the difference between
> the domain for storage and the domain for engine that you mentioned?
>

I think the domain for engine is the one storing Hosted Engine data.
You should be able to expand your underlying LVM partition without having
to take the node offline.
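
As a minimal sketch (the LV name and size are placeholders, not taken from
your setup):

  lvextend -L +100G /dev/gluster/data   # the LV can be grown while in use
  xfs_growfs /gluster/brick2            # XFS is grown while mounted

Both steps are online operations, so the brick and the VMs using it can stay
up.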


>
>> second, I've noticed that the first two nodes seem to have a full copy of
>> the data (the disks are in use), but the 3rd node appears to not be using
>> any of its storage space...It is participating in the gluster cluster,
>> though.
>>
>
Is the volume created as replica 3? If so, a full copy of the data should be
present on all 3 nodes. Please provide the output of "gluster volume info".


>> Third, currently gluster shares the same network as the VM networks.  I'd
>> like to put it on its own network.  I'm not sure how to do this, as when I
>> tried to do it at install time, I never got the cluster to come online; I
>> had to make them share the same network to make that work.
>>
>
When creating the bricks, the network intended for gluster should have been
used in the hostname:brick-directory identifier. Changing this at a later
point is a bit more involved; please check online or ask on gluster-users
about changing the IP address associated with a brick.
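
If you do go down that path later, one documented route on recent gluster
releases is the reset-brick command - please verify it against the docs for
your gluster version first; the storage-network hostname below is only a
placeholder:

  gluster volume reset-brick data ovirt1.nwfiber.com:/gluster/brick2/data start
  gluster volume reset-brick data ovirt1.nwfiber.com:/gluster/brick2/data \
      ovirt1-storage.nwfiber.com:/gluster/brick2/data commit force

The new name has to resolve on all peers, and the same steps are repeated for
every brick of every volume.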


>
> I'm adding Sahina who may shed some light on the gluster question, I'd try
> on the gluster mailing list as well.
>
>>
>>
>> Ovirt questions:
>> I've noticed that recently, I don't appear to be getting software updates
>> anymore.  I used to get update available notifications on my nodes every
>> few days; I haven't seen one for a couple weeks now.  Is something wrong?
>>
>> I have a Windows 10 x64 VM.  I get a warning that my VM type does not
>> match the installed OS.  All works fine, but I've quadruple-checked that
>> it does match.  Is this a known bug?
>>
>
> Arik, any info on that?
>
>>
>> I have a UPS that all three nodes and the networking are on.  It is a USB
>> UPS.  How should I best integrate monitoring in?  I could put a raspberry
>> pi up and then run NUT or similar on it, but is there a "better" way with
>> oVirt?
>>
>> Thanks!
>> --Jim
>>
>


Re: [ovirt-users] Gluster and oVirt 4.0 questions

2017-03-30 Thread Liron Aravot
Hi Jim, please see inline

On Thu, Mar 30, 2017 at 4:08 AM, Jim Kusznir  wrote:

> hello:
>
> I've been running my ovirt Version 4.0.5.5-1.el7.centos cluster for a
> while now, and am now revisiting some aspects of it to ensure that I
> have good reliability.
>
> My cluster is a 3 node cluster, with gluster nodes running on each node.
> After running my cluster a bit, I'm realizing I didn't do a very optimal
> job of allocating the space on my disk to the different gluster mount
> points.  Fortunately, they were created with LVM, so I'm hoping that I can
> resize them without much trouble.
>
> I have a domain for iso, domain for export, and domain for storage, all
> thin provisioned; then a domain for the engine, not thin provisioned.  I'd
> like to expand the storage domain, and possibly shrink the engine domain
> and make that space also available to the main storage domain.  Is it as
> simple as expanding the LVM partition, or are there more steps involved?
> Do I need to take the node offline?
>

I didn't completely understand that part - what is the difference between
the domain for storage and the domain for engine that you mentioned?

>
> second, I've noticed that the first two nodes seem to have a full copy of
> the data (the disks are in use), but the 3rd node appears to not be using
> any of its storage space...It is participating in the gluster cluster,
> though.
>
> Third, currently gluster shares the same network as the VM networks.  I'd
> like to put it on its own network.  I'm not sure how to do this, as when I
> tried to do it at install time, I never got the cluster to come online; I
> had to make them share the same network to make that work.
>

I'm adding Sahina who may shed some light on the gluster question, I'd try
on the gluster mailing list as well.

>
>
> Ovirt questions:
> I've noticed that recently, I don't appear to be getting software updates
> anymore.  I used to get update available notifications on my nodes every
> few days; I haven't seen one for a couple weeks now.  Is something wrong?
>
> I have a Windows 10 x64 VM.  I get a warning that my VM type does not
> match the installed OS.  All works fine, but I've quadruple-checked that
> it does match.  Is this a known bug?
>

Arik, any info on that?

>
> I have a UPS that all three nodes and the networking are on.  It is a USB
> UPS.  How should I best integrate monitoring in?  I could put a raspberry
> pi up and then run NUT or similar on it, but is there a "better" way with
> oVirt?
>
> Thanks!
> --Jim
>
>


[ovirt-users] Gluster and oVirt 4.0 questions

2017-03-29 Thread Jim Kusznir
hello:

I've been running my ovirt Version 4.0.5.5-1.el7.centos cluster for a while
now, and am now revisiting some aspects of it to ensure that I have good
reliability.

My cluster is a 3 node cluster, with gluster nodes running on each node.
After running my cluster a bit, I'm realizing I didn't do a very optimal
job of allocating the space on my disk to the different gluster mount
points.  Fortunately, they were created with LVM, so I'm hoping that I can
resize them without much trouble.

I have a domain for iso, domain for export, and domain for storage, all
thin provisioned; then a domain for the engine, not thin provisioned.  I'd
like to expand the storage domain, and possibly shrink the engine domain
and make that space also available to the main storage domain.  Is it as
simple as expanding the LVM partition, or are there more steps involved?
Do I need to take the node offline?

second, I've noticed that the first two nodes seem to have a full copy of
the data (the disks are in use), but the 3rd node appears to not be using
any of its storage space...It is participating in the gluster cluster,
though.

Third, currently gluster shares the same network as the VM networks.  I'd
like to put it on its own network.  I'm not sure how to do this, as when I
tried to do it at install time, I never got the cluster to come online; I
had to make them share the same network to make that work.


Ovirt questions:
I've noticed that recently, I don't appear to be getting software updates
anymore.  I used to get update available notifications on my nodes every
few days; I haven't seen one for a couple weeks now.  Is something wrong?

I have a Windows 10 x64 VM.  I get a warning that my VM type does not match
the installed OS.  All works fine, but I've quadruple-checked that it does
match.  Is this a known bug?

I have a UPS that all three nodes and the networking are on.  It is a USB
UPS.  How should I best integrate monitoring in?  I could put a raspberry
pi up and then run NUT or similar on it, but is there a "better" way with
oVirt?

Thanks!
--Jim
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users