Re: [ovirt-users] ovirt 3.6 and gluster arbiter volumes?

2016-01-27 Thread arik.mitschang
Hi Nir,

> On Wed, Jan 20, 2016 at 1:31 AM, Arik Mitschang wrote:
>>> On 25-12-2015 5:26, Arik Mitschang wrote:
>>>> Hi ovirt-users,
>>>>
>>>> I have been working on a new install of ovirt 3.6 hosted-engine and ran
>>>> into difficulty adding a gluster data storage domain to host my VMs. I
>>>> have 4 servers for gluster (separate from vm hosts) and would like to
>>>> have the quorum enforcement of replica 3 without sacrificing space. I
>>>> created a gluster volume using
>>>>
>>>>  replica 3 arbiter 1
>>>>
>>>> That looks like this:
>>>>
>>>>  Volume Name: arbtest
>>>>  Type: Distributed-Replicate
>>>>  Volume ID: 01b36368-1f37-435c-9f48-0442e0c34160
>>>>  Status: Stopped
>>>>  Number of Bricks: 2 x 3 = 6
>>>>  Transport-type: tcp
>>>>  Bricks:
>>>>  Brick1: t2-gluster01b:/gluster/00/arbtest
>>>>  Brick2: t2-gluster02b:/gluster/00/arbtest
>>>>  Brick3: t2-gluster03b:/gluster/00/arbtest.arb
>>>>  Brick4: t2-gluster03b:/gluster/00/arbtest
>>>>  Brick5: t2-gluster04b:/gluster/00/arbtest
>>>>  Brick6: t2-gluster01b:/gluster/00/arbtest.arb
>>>>  Options Reconfigured:
>>>>  nfs.disable: true
>>>>  network.ping-timeout: 10
>>>>  storage.owner-uid: 36
>>>>  storage.owner-gid: 36
>>>>  cluster.server-quorum-type: server
>>>>  cluster.quorum-type: auto
>>>>  network.remote-dio: enable
>>>>  cluster.eager-lock: enable
>>>>  performance.stat-prefetch: off
>>>>  performance.io-cache: off
>>>>  performance.read-ahead: off
>>>>  performance.quick-read: off
>>>>  performance.readdir-ahead: on
>>>>
>>>> But when adding it to ovirt I get the following error:
>>>>
>>>>  "Error while executing action AddGlusterFsStorageDomain: Error creating
>>>>  a storage domain's metadata"
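
(For reference: a "replica 3 arbiter 1" layout like the one quoted above
is created with a single volume-create command, and brick order matters;
every third brick in the list becomes the arbiter. A sketch assuming
GlusterFS 3.7 or later, reusing the brick paths quoted above:)

  gluster volume create arbtest replica 3 arbiter 1 \
      t2-gluster01b:/gluster/00/arbtest \
      t2-gluster02b:/gluster/00/arbtest \
      t2-gluster03b:/gluster/00/arbtest.arb \
      t2-gluster03b:/gluster/00/arbtest \
      t2-gluster04b:/gluster/00/arbtest \
      t2-gluster01b:/gluster/00/arbtest.arb
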
> 
> In vdsm log we see:
> 
> StorageDomainMetadataCreationError: Error creating a storage domain's
> metadata: ("create meta file 'outbox' failed: [Errno 5] Input/output
> error",)
> 
> Which does not mean much.
> 


>>> Anything in engine.log (/var/log/ovirt-engine/engine.log) around that time?
>>> Anything in vdsm.log on your 2 hypervisors around that time?
>>> (Guessing that you'll see an error about replication unsupported by
>>> vdsm, if so, have a look at /etc/vdsmd.conf.rpmnew)
>>
>> Hi Joop,
>>
>> Thanks for your response, and sorry for the long delay in mine. I had a
>> chance to test adding again and catch the logs around that operation. I
>> am attaching the engine logs and vdsm logs of the hypervisor that was
>> responsible for the storage operations.
>>
>> Also, I have the following:
>>
>>  [gluster]
>>  allowed_replica_counts = 1,2,3
>>
>> in /etc/vdsm/vdsm.conf.
>>
>> The volume was successfully mounted and I see the following in it after
>> trying to add:
>>
>>  arik@t2-virt01:~$ sudo mount -t glusterfs t2-gluster01b:arbtest /mnt/
>>  arik@t2-virt01:~$ ls -ltr /mnt/
>>  total 0
>>  -rwxr-xr-x 1 vdsm kvm  0 Jan 20 08:08 __DIRECT_IO_TEST__
>>  drwxr-xr-x 3 vdsm kvm 54 Jan 20 08:08 3d31af0b-18ad-45c4-90f1-18e2f804f34b
>>
>> I hope you can see something interesting in these logs!
> 
> You may find more info in gluster mount log, which should be at:
> 
> /var/log/glusterfs/<server>:<volume>.log

I will take a look if I can find something in the gluster logs.
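
(Side note on finding that log: the gluster fuse client names its log
file after the mount point, replacing slashes with dashes, so the manual
test mount at /mnt shown above logs to:

  /var/log/glusterfs/mnt.log

and the mount vdsm itself creates should log to something like the
following; the exact vdsm mount path here is an assumption:

  /var/log/glusterfs/rhev-data-center-mnt-glusterSD-t2-gluster01b:arbtest.log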

> 
> We (ovirt storage developers) did not try arbiter volumes yet, so this
> is basically unsupported :-)

Ah, understood. Any plans to try?

> 
> The recommended setup is replica 3. Can you try to create a small
> replica 3 volume, just to check that replica 3 works for you?

The hosted engine for our setup is on a replica 3 volume, and this works
well.

Regards,
-Arik


Re: [ovirt-users] ovirt 3.6 and gluster arbiter volumes?

2016-01-26 Thread Nir Soffer
On Wed, Jan 20, 2016 at 1:31 AM, Arik Mitschang wrote:
>> On 25-12-2015 5:26, Arik Mitschang wrote:
>>> Hi ovirt-users,
>>>
>>> I have been working on a new install of ovirt 3.6 hosted-engine and ran
>>> into difficulty adding a gluster data storage domain to host my VMs. I
>>> have 4 servers for gluster (separate from vm hosts) and would like to
>>> have the quorum enforcement of replica 3 without sacrificing space. I
>>> created a gluster volume using
>>>
>>>  replica 3 arbiter 1
>>>
>>> That looks like this:
>>>
>>>  Volume Name: arbtest
>>>  Type: Distributed-Replicate
>>>  Volume ID: 01b36368-1f37-435c-9f48-0442e0c34160
>>>  Status: Stopped
>>>  Number of Bricks: 2 x 3 = 6
>>>  Transport-type: tcp
>>>  Bricks:
>>>  Brick1: t2-gluster01b:/gluster/00/arbtest
>>>  Brick2: t2-gluster02b:/gluster/00/arbtest
>>>  Brick3: t2-gluster03b:/gluster/00/arbtest.arb
>>>  Brick4: t2-gluster03b:/gluster/00/arbtest
>>>  Brick5: t2-gluster04b:/gluster/00/arbtest
>>>  Brick6: t2-gluster01b:/gluster/00/arbtest.arb
>>>  Options Reconfigured:
>>>  nfs.disable: true
>>>  network.ping-timeout: 10
>>>  storage.owner-uid: 36
>>>  storage.owner-gid: 36
>>>  cluster.server-quorum-type: server
>>>  cluster.quorum-type: auto
>>>  network.remote-dio: enable
>>>  cluster.eager-lock: enable
>>>  performance.stat-prefetch: off
>>>  performance.io-cache: off
>>>  performance.read-ahead: off
>>>  performance.quick-read: off
>>>  performance.readdir-ahead: on
>>>
>>> But when adding it to ovirt I get the following error:
>>>
>>>  "Error while executing action AddGlusterFsStorageDomain: Error creating
>>>  a storage domain's metadata"

In vdsm log we see:

StorageDomainMetadataCreationError: Error creating a storage domain's
metadata: ("create meta file 'outbox' failed: [Errno 5] Input/output
error",)

Which does not mean much.

>>>
>>>
>> Anything in engine.log (/var/log/ovirt-engine/engine.log) around that time?
>> Anything in vdsm.log on your 2 hypervisors around that time?
>> (Guessing that you'll see an error about replication unsupported by
>> vdsm, if so, have a look at /etc/vdsmd.conf.rpmnew)
>
> Hi Joop,
>
> Thanks for your response, and sorry for the long delay in mine. I had a
> chance to test adding again and catch the logs around that operation. I
> am attaching the engine logs and vdsm logs of the hypervisor that was
> responsible for the storage operations.
>
> Also, I have the following:
>
>  [gluster]
>  allowed_replica_counts = 1,2,3
>
> in /etc/vdsm/vdsm.conf.
>
> The volume was successfully mounted and I see the following in it after
> trying to add:
>
>  arik@t2-virt01:~$ sudo mount -t glusterfs t2-gluster01b:arbtest /mnt/
>  arik@t2-virt01:~$ ls -ltr /mnt/
>  total 0
>  -rwxr-xr-x 1 vdsm kvm  0 Jan 20 08:08 __DIRECT_IO_TEST__
>  drwxr-xr-x 3 vdsm kvm 54 Jan 20 08:08 3d31af0b-18ad-45c4-90f1-18e2f804f34b
>
> I hope you can see something interesting in these logs!

You may find more info in gluster mount log, which should be at:

/var/log/glusterfs/<server>:<volume>.log

We (ovirt storage developers) did not try arbiter volumes yet, so this
is basically unsupported :-)

The recommended setup is replica 3. Can you try to create a small
replica 3 volume, just to check that replica 3 works for you?
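
(A minimal sketch of such a test volume, reusing the hostnames from this
thread; the volume name and brick paths are illustrative, and the owner
uid/gid options mirror the ones already set on arbtest so vdsm can write
to it:)

  gluster volume create r3test replica 3 \
      t2-gluster01b:/gluster/00/r3test \
      t2-gluster02b:/gluster/00/r3test \
      t2-gluster03b:/gluster/00/r3test
  gluster volume set r3test storage.owner-uid 36
  gluster volume set r3test storage.owner-gid 36
  gluster volume start r3test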

Adding Sahina.

Nir


Re: [ovirt-users] ovirt 3.6 and gluster arbiter volumes?

2016-01-26 Thread arik.mitschang
> On 25-12-2015 5:26, Arik Mitschang wrote:
>> Hi ovirt-users,
>>
>> I have been working on a new install of ovirt 3.6 hosted-engine and ran
>> into difficulty adding a gluster data storage domain to host my VMs. I
>> have 4 servers for gluster (separate from vm hosts) and would like to
>> have the quorum enforcement of replica 3 without sacrificing space. I
>> created a gluster volume using
>>
>>  replica 3 arbiter 1
>>
>> That looks like this:
>>
>>  Volume Name: arbtest
>>  Type: Distributed-Replicate
>>  Volume ID: 01b36368-1f37-435c-9f48-0442e0c34160
>>  Status: Stopped
>>  Number of Bricks: 2 x 3 = 6
>>  Transport-type: tcp
>>  Bricks:
>>  Brick1: t2-gluster01b:/gluster/00/arbtest
>>  Brick2: t2-gluster02b:/gluster/00/arbtest
>>  Brick3: t2-gluster03b:/gluster/00/arbtest.arb
>>  Brick4: t2-gluster03b:/gluster/00/arbtest
>>  Brick5: t2-gluster04b:/gluster/00/arbtest
>>  Brick6: t2-gluster01b:/gluster/00/arbtest.arb
>>  Options Reconfigured:
>>  nfs.disable: true
>>  network.ping-timeout: 10
>>  storage.owner-uid: 36
>>  storage.owner-gid: 36
>>  cluster.server-quorum-type: server
>>  cluster.quorum-type: auto
>>  network.remote-dio: enable
>>  cluster.eager-lock: enable
>>  performance.stat-prefetch: off
>>  performance.io-cache: off
>>  performance.read-ahead: off
>>  performance.quick-read: off
>>  performance.readdir-ahead: on
>>
>> But when adding it to ovirt I get the following error:
>>
>>  "Error while executing action AddGlusterFsStorageDomain: Error creating
>>  a storage domain's metadata"
>>
>>
> Anything in engine.log (/var/log/ovirt-engine/engine.log) around that time?
> Anything in vdsm.log on your 2 hypervisors around that time?
> (Guessing that you'll see an error about replication unsupported by
> vdsm, if so, have a look at /etc/vdsmd.conf.rpmnew)

Hi Joop,

Thanks for your response, and sorry for the long delay in mine. I had a
chance to test adding again and catch the logs around that operation. I
am attaching the engine logs and vdsm logs of the hypervisor that was
responsible for the storage operations.

Also, I have the following:

 [gluster]
 allowed_replica_counts = 1,2,3

in /etc/vdsm/vdsm.conf.
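
(Note: vdsm reads /etc/vdsm/vdsm.conf at startup, so if that line was
added after installation, vdsmd needs a restart to pick it up, e.g.:)

  sudo systemctl restart vdsmd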

The volume was successfully mounted and I see the following in it after
trying to add:

 arik@t2-virt01:~$ sudo mount -t glusterfs t2-gluster01b:arbtest /mnt/
 arik@t2-virt01:~$ ls -ltr /mnt/
 total 0
 -rwxr-xr-x 1 vdsm kvm  0 Jan 20 08:08 __DIRECT_IO_TEST__
 drwxr-xr-x 3 vdsm kvm 54 Jan 20 08:08 3d31af0b-18ad-45c4-90f1-18e2f804f34b
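
(One way to probe this class of failure by hand: vdsm uses direct I/O
for storage domain metadata, and the __DIRECT_IO_TEST__ file above is
its probe for direct I/O support, so if O_DIRECT writes fail on this
mount an EIO like the one in the error is expected. A hypothetical
manual check, not taken from the thread:)

  # hypothetical probe file; 4 KiB keeps the write aligned as O_DIRECT requires
  sudo dd if=/dev/zero of=/mnt/dio_probe bs=4096 count=1 oflag=direct
  sudo rm -f /mnt/dio_probe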

I hope you can see something interesting in these logs!

Regards,
-Arik

ENGINE logs:

2016-01-20 08:08:36,492 INFO  [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand] (default task-11) [5021a29d] Lock Acquired to object 'EngineLock:{exclusiveLocks='[t2-gluster01b:arbtest=]', sharedLocks='null'}'
2016-01-20 08:08:36,506 INFO  [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand] (default task-11) [5021a29d] Running command: AddStorageServerConnectionCommand internal: false. Entities affected :  ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_STORAGE_DOMAIN with role type ADMIN
2016-01-20 08:08:36,507 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-11) [5021a29d] START, ConnectStorageServerVDSCommand(HostName = t2-virt02, StorageServerConnectionManagementVDSParameters:{runAsync='true', hostId='757bf4c3-5352-4d00-aa4a-651c8e0ffe34', storagePoolId='00000000-0000-0000-0000-000000000000', storageType='GLUSTERFS', connectionList='[StorageServerConnections:{id='null', connection='t2-gluster01b:arbtest', iqn='null', vfsType='glusterfs', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log id: 74e8ac0a
2016-01-20 08:08:36,745 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-11) [5021a29d] FINISH, ConnectStorageServerVDSCommand, return: {00000000-0000-0000-0000-000000000000=0}, log id: 74e8ac0a
2016-01-20 08:08:36,757 INFO  [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand] (default task-11) [5021a29d] Lock freed to object 'EngineLock:{exclusiveLocks='[t2-gluster01b:arbtest=]', sharedLocks='null'}'
2016-01-20 08:08:36,812 INFO  [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand] (default task-30) [6eb6b10] Running command: AddGlusterFsStorageDomainCommand internal: false. Entities affected :  ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_STORAGE_DOMAIN with role type ADMIN
2016-01-20 08:08:36,825 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default task-30) [6eb6b10] START, ConnectStorageServerVDSCommand(HostName = t2-virt02, StorageServerConnectionManagementVDSParameters:{runAsync='true',

Re: [ovirt-users] ovirt 3.6 and gluster arbiter volumes?

2016-01-07 Thread Joop
On 25-12-2015 5:26, Arik Mitschang wrote:
> Hi ovirt-users,
>
> I have been working on a new install of ovirt 3.6 hosted-engine and ran
> into difficulty adding a gluster data storage domain to host my VMs. I
> have 4 servers for gluster (separate from vm hosts) and would like to
> have the quorum enforcement of replica 3 without sacrificing space. I
> created a gluster volume using
>
>  replica 3 arbiter 1
>
> That looks like this:
>
>  Volume Name: arbtest
>  Type: Distributed-Replicate
>  Volume ID: 01b36368-1f37-435c-9f48-0442e0c34160
>  Status: Stopped
>  Number of Bricks: 2 x 3 = 6
>  Transport-type: tcp
>  Bricks:
>  Brick1: t2-gluster01b:/gluster/00/arbtest
>  Brick2: t2-gluster02b:/gluster/00/arbtest
>  Brick3: t2-gluster03b:/gluster/00/arbtest.arb
>  Brick4: t2-gluster03b:/gluster/00/arbtest
>  Brick5: t2-gluster04b:/gluster/00/arbtest
>  Brick6: t2-gluster01b:/gluster/00/arbtest.arb
>  Options Reconfigured:
>  nfs.disable: true
>  network.ping-timeout: 10
>  storage.owner-uid: 36
>  storage.owner-gid: 36
>  cluster.server-quorum-type: server
>  cluster.quorum-type: auto
>  network.remote-dio: enable
>  cluster.eager-lock: enable
>  performance.stat-prefetch: off
>  performance.io-cache: off
>  performance.read-ahead: off
>  performance.quick-read: off
>  performance.readdir-ahead: on
>
> But when adding it to ovirt I get the following error:
>
>  "Error while executing action AddGlusterFsStorageDomain: Error creating
>  a storage domain's metadata"
>
>
Anything in engine.log (/var/log/ovirt-engine/engine.log) around that time?
Anything in vdsm.log on your 2 hypervisors around that time?
(Guessing that you'll see an error about replication unsupported by
vdsm, if so, have a look at /etc/vdsmd.conf.rpmnew)
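
(For a quick scan, something along these lines; hypothetical commands,
with the default log paths:)

  grep -n 'AddGlusterFsStorageDomain' /var/log/ovirt-engine/engine.log
  grep -n 'StorageDomainMetadataCreationError' /var/log/vdsm/vdsm.log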

Regards,

Joop
