Re: [ovirt-users] Gluster and oVirt 4.0 questions

2017-04-03 Thread Sahina Bose
On Sat, Apr 1, 2017 at 10:32 PM, Jim Kusznir wrote:

> Thank you!
>
> Here's the output of gluster volume info:
> [root@ovirt1 ~]# gluster volume info
>
> Volume Name: data
> Type: Replicate
> Volume ID: e670c488-ac16-4dd1-8bd3-e43b2e42cc59
> Status: Started
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ovirt1.nwfiber.com:/gluster/brick2/data
> Brick2: ovirt2.nwfiber.com:/gluster/brick2/data
> Brick3: ovirt3.nwfiber.com:/gluster/brick2/data (arbiter)
> Options Reconfigured:
> performance.strict-o-direct: on
> nfs.disable: on
> user.cifs: off
> network.ping-timeout: 30
> cluster.shd-max-threads: 6
> cluster.shd-wait-qlength: 1
> cluster.locking-scheme: granular
> cluster.data-self-heal-algorithm: full
> performance.low-prio-threads: 32
> features.shard-block-size: 512MB
> features.shard: on
> storage.owner-gid: 36
> storage.owner-uid: 36
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> network.remote-dio: enable
> cluster.eager-lock: enable
> performance.stat-prefetch: off
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> performance.readdir-ahead: on
> server.allow-insecure: on
>
> Volume Name: engine
> Type: Replicate
> Volume ID: 87ad86b9-d88b-457e-ba21-5d3173c612de
> Status: Started
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ovirt1.nwfiber.com:/gluster/brick1/engine
> Brick2: ovirt2.nwfiber.com:/gluster/brick1/engine
> Brick3: ovirt3.nwfiber.com:/gluster/brick1/engine (arbiter)
> Options Reconfigured:
> performance.readdir-ahead: on
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: off
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard: on
> features.shard-block-size: 512MB
> performance.low-prio-threads: 32
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-wait-qlength: 1
> cluster.shd-max-threads: 6
> network.ping-timeout: 30
> user.cifs: off
> nfs.disable: on
> performance.strict-o-direct: on
>
> Volume Name: export
> Type: Replicate
> Volume ID: 04ee58c7-2ba1-454f-be99-26ac75a352b4
> Status: Stopped
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ovirt1.nwfiber.com:/gluster/brick3/export
> Brick2: ovirt2.nwfiber.com:/gluster/brick3/export
> Brick3: ovirt3.nwfiber.com:/gluster/brick3/export (arbiter)
> Options Reconfigured:
> performance.readdir-ahead: on
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: off
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard: on
> features.shard-block-size: 512MB
> performance.low-prio-threads: 32
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-wait-qlength: 1
> cluster.shd-max-threads: 6
> network.ping-timeout: 30
> user.cifs: off
> nfs.disable: on
> performance.strict-o-direct: on
>
> Volume Name: iso
> Type: Replicate
> Volume ID: b1ba15f5-0f0f-4411-89d0-595179f02b92
> Status: Started
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ovirt1.nwfiber.com:/gluster/brick4/iso
> Brick2: ovirt2.nwfiber.com:/gluster/brick4/iso
> Brick3: ovirt3.nwfiber.com:/gluster/brick4/iso (arbiter)
> Options Reconfigured:
> performance.readdir-ahead: on
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: off
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard: on
> features.shard-block-size: 512MB
> performance.low-prio-threads: 32
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-wait-qlength: 1
> cluster.shd-max-threads: 6
> network.ping-timeout: 30
> user.cifs: off
> nfs.disable: on
> performance.strict-o-direct: on
>
>
> The node marked as (arbiter) on all of the bricks is the node that is not
> using any of its disk space.
>

This is by design - the arbiter brick stores only metadata, not file data,
and hence uses very little disk space.
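
A quick way to see this on the nodes (a rough sketch, using the brick paths
from the volume info above) is to compare brick usage across the three hosts;
the arbiter brick holds only the directory structure and metadata, so it stays
small while the full replicas grow with the VM images:

    # run on each of ovirt1 / ovirt2 / ovirt3
    du -sh /gluster/brick2/data     # stays tiny on the arbiter (ovirt3)
    df -h /gluster/brick2/data

    # or let gluster report per-brick disk usage
    gluster volume status data detail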


>
> The engine domain is the volume dedicated to storing the hosted engine.
> Here's some LVM info:
>
>   --- Logical volume ---
>   LV Path                /dev/gluster/engine
>   LV Name                engine
>   VG Name                gluster
>   LV UUID                4gZ1TF-a1PX-i1Qx-o4Ix-MjEf-0HD8-esm3wg
>   LV Write Access        read/write
>   LV Creation host, time ovirt1.nwfiber.com, 2016-12-31 14:40:00 -0800
>   LV Status              available
>   # open                 1
>   LV Size                25.00 GiB
>   Current LE

Re: [ovirt-users] Recent Install Docs?

2017-04-03 Thread Charles Tassell

Hi Alan,

  Thanks, that's good to know. But without the storage being imported
into the cluster, how do I manage the HostedEngine VM? E.g., if I want to
migrate it to another host in order to do maintenance? I see an entry in my
event log, "Failed to import the Hosted Engine Storage Domain", but there are
no further details as to why. And my Virtual Machines tab is empty, whereas in
my other oVirt clusters I've always been able to see the HostedEngine there.
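
Is the hosted-engine CLI on the hosts the expected way to manage it until
the domain is imported? For example (a rough sketch of what I mean; this
assumes the ovirt-ha-agent/broker services are running on the hosts):

    # show which host runs the engine VM and each host's HA score
    hosted-engine --vm-status

    # put this host into HA local maintenance; the HA agents should
    # migrate the engine VM to another hosted-engine host
    hosted-engine --set-maintenance --mode=local

    # re-enable HA on this host afterwards
    hosted-engine --set-maintenance --mode=none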

On 2017-03-30 03:20 PM, Alan Griffiths wrote:
I believe that is the required setup - a dedicated storage domain for
HE. It's certainly the case on 4.0; I haven't played with 4.1 yet.


For docs you can always check the RHV set; they are generally
applicable to oVirt.


https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1-beta/html/self-hosted_engine_guide/ 

On 30 March 2017 at 18:17, Charles Tassell wrote:


Hello,

  Are there any more recent install docs than what's on the
website?  Those all seem to be from the 3.x days and don't
really cover the modern setup of using a hosted engine.

  More specifically, I've noticed that when deploying a hosted
engine I can't import the storage domain that the engine is on.
When I try, I get errors like "Can't import the storage domain,
connection already exists".  Because I can't import my storage
domain, I can't initialize the cluster, so I can't really get
anywhere with the setup.  I had this same problem on 4.0 (using
4.1 now) and I "fixed" it by wiping the cluster and creating two
sets of storage: a small 50G domain that holds the hosted engine,
and another one that holds all my VMs.  I'd rather not do that
with my new cluster if I can avoid it.


[ovirt-users] setting iSCSI iface.net_ifacename (netIfaceName)

2017-04-03 Thread Devin A. Bougie
Where do I set the iSCSI iface to use when connecting to both the
hosted_storage and VM data domains?  I believe this is related to the
difficulty I've had configuring iSCSI bonds within the oVirt engine, as
opposed to directly in the underlying OS.

I've set "iscsi_default_ifaces = ovirtsan" in vdsm.conf, but vdsmd still 
insists on using the default iface and vdsm.log shows:
2017-04-03 11:17:21,109-0400 INFO  (jsonrpc/5) [storage.ISCSI] iSCSI 
iface.net_ifacename not provided. Skipping. (iscsi:590)
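
What I'm after is the equivalent of the manual iscsiadm setup (a rough
sketch; the iface name "ovirtsan" matches vdsm.conf above, and the NIC,
portal, and target names are placeholders):

    # create a named iface and bind it to a specific NIC
    iscsiadm -m iface -I ovirtsan --op=new
    iscsiadm -m iface -I ovirtsan --op=update -n iface.net_ifacename -v em2

    # discover and log in to the storage through that iface
    iscsiadm -m discovery -t sendtargets -p <portal-ip> -I ovirtsan
    iscsiadm -m node -T <target-iqn> -p <portal-ip> -I ovirtsan --login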

Many thanks,
Devin