Re: [ovirt-users] [Gluster-users] open error -13 = sanlock

2016-03-03 Thread p...@email.cz

OK,
will extend replica 2 to replica 3 (arbiter) ASAP.
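For reference, a hedged sketch of the kind of command used for that conversion - the volume name and arbiter brick path below are placeholders (not taken from this thread), and a GlusterFS release that supports adding an arbiter brick to an existing replica 2 volume is assumed:

# gluster volume add-brick VOLNAME replica 3 arbiter 1 arbiter-host:/bricks/arbiter/VOLNAME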

If the "ids" file is deleted on a brick (and left untouched), healing of this
file does not work.


regs.Pa.

On 3.3.2016 12:19, Nir Soffer wrote:
On Thu, Mar 3, 2016 at 11:23 AM, p...@email.cz wrote:


This is replica 2 only, with the following settings:


Replica 2 is not supported. Even if you "fix" this now, you will have
the same issue soon.


Options Reconfigured:
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: fixed
cluster.server-quorum-type: none
storage.owner-uid: 36
storage.owner-gid: 36
cluster.quorum-count: 1
cluster.self-heal-daemon: enable

If I'll create "ids" file manually (  eg. " sanlock direct init -s

3c34ad63-6c66-4e23-ab46-084f3d70b147:0:/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids:0
" ) on both bricks,
vdsm is writing only to half of them ( that with 2 links = correct )
"ids" file has correct permittions, owner, size  on both bricks.
brick 1:  -rw-rw---- 1 vdsm kvm 1048576  2. bře 18.56
/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
- not updated
brick 2:  -rw-rw---- 2 vdsm kvm 1048576  3. bře 10.16
/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
- is continually updated

What happens when I restart vdsm? Will the oVirt storage domains go to a
"disabled" state, i.e. disconnect the VMs' storage?


Nothing will happen; the VMs will continue to run normally.

On block storage, stopping vdsm will prevent automatic extension of VM
disks when a disk becomes too full, but on file-based storage (like
gluster) there is no such issue.
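If you do decide to restart it, a minimal example (assuming the standard systemd unit name used on oVirt hosts) would be:

# systemctl restart vdsmd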



regs.Pa.


On 3.3.2016 02:02, Ravishankar N wrote:

On 03/03/2016 12:43 AM, Nir Soffer wrote:


PS:  # find /STORAGES -samefile
/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
-print
/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
= missing "shadowfile" in " .gluster " dir.
How can I fix it ?? - online !


Ravi?

Is this the case in all 3 bricks of the replica?
BTW, you can just stat the file on the brick and see the link
count (it must be 2) instead of running the more expensive find
command.
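For example, a hedged illustration using the path from this thread - the hard-link count can be read directly on the brick, and 2 is the healthy value:

# stat -c %h /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
2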








Re: [ovirt-users] [Gluster-users] open error -13 = sanlock

2016-03-03 Thread Nir Soffer
On Thu, Mar 3, 2016 at 11:23 AM, p...@email.cz  wrote:

> This is replica 2 only, with the following settings:
>

Replica 2 is not supported. Even if you "fix" this now, you will have the
same issue
soon.


>
> Options Reconfigured:
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: enable
> cluster.quorum-type: fixed
> cluster.server-quorum-type: none
> storage.owner-uid: 36
> storage.owner-gid: 36
> cluster.quorum-count: 1
> cluster.self-heal-daemon: enable
>
> If I'll create "ids" file manually (  eg. " sanlock direct init -s
> 3c34ad63-6c66-4e23-ab46-084f3d70b147:0:/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids:0
> " ) on both bricks,
> vdsm is writing only to half of them ( that with 2 links = correct )
> "ids" file has correct permittions, owner, size  on both bricks.
> brick 1:  -rw-rw---- 1 vdsm kvm 1048576  2. bře 18.56
> /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids - not
> updated
> brick 2:  -rw-rw---- 2 vdsm kvm 1048576  3. bře 10.16
> /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids - is
> continually updated
>
> What happens when I restart vdsm? Will the oVirt storage domains go to a
> "disabled" state, i.e. disconnect the VMs' storage?
>

Nothing will happen; the VMs will continue to run normally.

On block storage, stopping vdsm will prevent automatic extension of VM disks
when a disk becomes too full, but on file-based storage (like gluster)
there is no such issue.


>
> regs.Pa.
>
>
> On 3.3.2016 02:02, Ravishankar N wrote:
>
> On 03/03/2016 12:43 AM, Nir Soffer wrote:
>
> PS:  # find /STORAGES -samefile
>> /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids -print
>> /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
>> = missing "shadowfile" in " .gluster " dir.
>> How can I fix it ?? - online !
>>
>
> Ravi?
>
> Is this the case in all 3 bricks of the replica?
> BTW, you can just stat the file on the brick and see the link count (it
> must be 2) instead of running the more expensive find command.
>
>
>


Re: [ovirt-users] [Gluster-users] open error -13 = sanlock

2016-03-03 Thread Ravishankar N

On 03/03/2016 02:53 PM, p...@email.cz wrote:

This is replica 2 only, with the following settings:

Options Reconfigured:
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: fixed

Not sure why you have set this option.
Ideally, replica 3 or arbiter volumes are recommended for gluster+oVirt
use; (client) quorum does not make sense for a 2-node setup. I have a
detailed write-up here which explains things:
http://gluster.readthedocs.org/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/
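As a hedged illustration only (the volume name is a placeholder, and the authoritative option set should be taken from the oVirt/Gluster documentation), a replica 3 / arbiter volume is normally run with client quorum set to auto rather than fixed:

# gluster volume set VOLNAME cluster.quorum-type auto
# gluster volume set VOLNAME cluster.server-quorum-type server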



cluster.server-quorum-type: none
storage.owner-uid: 36
storage.owner-gid: 36
cluster.quorum-count: 1
cluster.self-heal-daemon: enable

If I'll create "ids" file manually (  eg. " sanlock direct init -s 
3c34ad63-6c66-4e23-ab46-084f3d70b147:0:/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids:0 
" ) on both bricks,

vdsm is writing only to half of them ( that with 2 links = correct )
"ids" file has correct permittions, owner, size  on both bricks.
brick 1:  -rw-rw---- 1 vdsm kvm 1048576  2. bře 18.56
/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids -
not updated


Okay, so this one has link count = 1, which means the .glusterfs hard link
is missing. Can you try deleting this file from the brick and performing a
stat on the file from the mount? That should heal (i.e. recreate) it on
this brick from the other brick, with the appropriate .glusterfs hard link.
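A hedged sketch of that sequence, reusing the brick path from this thread and assuming the matching fuse mount shown earlier in the thread:

# rm /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids      (only on the brick with link count 1)
# stat /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P3/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids      (from the mount, to trigger the lookup/heal)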



brick 2:  -rw-rw---- 2 vdsm kvm 1048576  3. bře 10.16
/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids -
is continually updated


What happens when I restart vdsm? Will the oVirt storage domains go to a
"disabled" state, i.e. disconnect the VMs' storage?


No idea on this one...
-Ravi


regs.Pa.

On 3.3.2016 02:02, Ravishankar N wrote:

On 03/03/2016 12:43 AM, Nir Soffer wrote:


PS:  # find /STORAGES -samefile
/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
-print
/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
= missing "shadowfile" in " .gluster " dir.
How can I fix it ?? - online !


Ravi?

Is this the case in all 3 bricks of the replica?
BTW, you can just stat the file on the brick and see the link count 
(it must be 2) instead of running the more expensive find command.









Re: [ovirt-users] [Gluster-users] open error -13 = sanlock

2016-03-03 Thread p...@email.cz

This is replica 2 only, with the following settings:

Options Reconfigured:
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: fixed
cluster.server-quorum-type: none
storage.owner-uid: 36
storage.owner-gid: 36
cluster.quorum-count: 1
cluster.self-heal-daemon: enable

If I'll create "ids" file manually (  eg. " sanlock direct init -s 
3c34ad63-6c66-4e23-ab46-084f3d70b147:0:/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids:0 
" ) on both bricks,

vdsm is writing only to half of them ( that with 2 links = correct )
"ids" file has correct permittions, owner, size  on both bricks.
brick 1:  -rw-rw---- 1 vdsm kvm 1048576  2. bře 18.56
/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids -
not updated
brick 2:  -rw-rw---- 2 vdsm kvm 1048576  3. bře 10.16
/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids -
is continually updated


What happens when I restart vdsm? Will the oVirt storage domains go to a
"disabled" state, i.e. disconnect the VMs' storage?


regs.Pa.

On 3.3.2016 02:02, Ravishankar N wrote:

On 03/03/2016 12:43 AM, Nir Soffer wrote:


PS:  # find /STORAGES -samefile
/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
-print
/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
= missing "shadowfile" in " .gluster " dir.
How can I fix it ?? - online !


Ravi?

Is this the case in all 3 bricks of the replica?
BTW, you can just stat the file on the brick and see the link count 
(it must be 2) instead of running the more expensive find command.






Re: [ovirt-users] [Gluster-users] open error -13 = sanlock

2016-03-02 Thread Ravishankar N

On 03/03/2016 12:43 AM, Nir Soffer wrote:


PS:  # find /STORAGES -samefile
/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
-print
/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
= missing "shadowfile" in " .gluster " dir.
How can I fix it ?? - online !


Ravi?

Is this the case in all 3 bricks of the replica?
BTW, you can just stat the file on the brick and see the link count (it 
must be 2) instead of running the more expensive find command.




Re: [ovirt-users] [Gluster-users] open error -13 = sanlock

2016-03-02 Thread Nir Soffer
On Wed, Mar 2, 2016 at 7:48 PM, p...@email.cz  wrote:

> UPDATE:
>
> all "ids"  file have permittion fixed to 660 now
>
> #  find /STORAGES -name ids -exec ls -l {} \;
> -rw-rw---- 2 vdsm kvm 0 24. úno 07.41
> /STORAGES/g1r5p1/GFS/553d9b92-e4a0-4042-a579-4cabeb55ded4/dom_md/ids
> -rw-rw---- 2 vdsm kvm 0 24. úno 07.43
> /STORAGES/g1r5p2/GFS/88adbd49-62d6-45b1-9992-b04464a04112/dom_md/ids
> -rw-rw---- 2 vdsm kvm 0 24. úno 07.43
> /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
> -rw-rw---- 2 vdsm kvm 0 24. úno 07.44
> /STORAGES/g1r5p4/GFS/7f52b697-c199-4f58-89aa-102d44327124/dom_md/ids
> -rw-rw---- 2 vdsm kvm 1048576 24. úno 13.03
> /STORAGES/g1r5p5/GFS/3b24d023-fd35-4666-af2f-f5e1d19531ad/dom_md/ids
> -rw-rw---- 2 vdsm kvm 1048576  2. bře 17.47
> /STORAGES/g2r5p1/GFS/0fcad888-d573-47be-bef3-0bc0b7a99fb7/dom_md/ids
>
> SPM is and was running continuously ...
>

You must stop vdsm on all hosts, please follow the instructions in the
previous mail.
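For reference, a hedged outline of that procedure, assuming the standard systemd unit name; the exact repair steps are the ones given in the previous mail:

# systemctl stop vdsmd        (on every host)
  ... run the sanlock/ids repair steps from the previous mail ...
# systemctl start vdsmd       (on every host)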


>
> I tried to update "ids" file - ONLINE  ( offline not possible yet )
> # sanlock direct init -s
> 3c34ad63-6c66-4e23-ab46-084f3d70b147:0:/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids:0
>
> #  find /STORAGES -name ids -exec ls -l {} \; | grep p3
> -rw-rw---- 1 vdsm kvm 1048576  2. bře 18.32
> /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
>
> The storage ids file has correct permissions, size and owners, but it is not
> being checked by sanlock = the same access time.
> What's wrong ??
>

sanlock will access the files when vdsm starts the domain monitors
while connecting to the pool.
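One hedged way to verify afterwards that sanlock has actually joined the lockspace and is touching the ids file (using the sanlock command-line tool and the brick path from this thread):

# sanlock client status
# sanlock direct dump /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids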


> regs.
> Pa.
> PS:  # find /STORAGES -samefile
> /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids -print
> /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
> = missing "shadowfile" in " .gluster " dir.
> How can I fix it ?? - online !
>

Ravi?


>
>
>
>
> On 2.3.2016 08:16, Ravishankar N wrote:
>
> On 03/02/2016 12:02 PM, Sahina Bose wrote:
>
>
>
> On 03/02/2016 03:45 AM, Nir Soffer wrote:
>
> On Tue, Mar 1, 2016 at 10:51 PM, p...@email.cz  wrote:
> >
> > HI,
> > requested output:
> >
> > # ls -lh /rhev/data-center/mnt/glusterSD/localhost:*/*/dom_md
> >
> >
> /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-BCK/0fcad888-d573-47be-bef3-0bc0b7a99fb7/dom_md:
> > total 2,1M
> > -rw-rw 1 vdsm kvm 1,0M  1. bře 21.28 ids<-- good
> > -rw-rw 1 vdsm kvm  16M  7. lis 22.16 inbox
> > -rw-rw 1 vdsm kvm 2,0M  7. lis 22.17 leases
> > -rw-r--r-- 1 vdsm kvm  335  7. lis 22.17 metadata
> > -rw-rw 1 vdsm kvm  16M  7. lis 22.16 outbox
> >
> >
> /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P1/553d9b92-e4a0-4042-a579-4cabeb55ded4/dom_md:
> > total 1,1M
> > -rw-r--r-- 1 vdsm kvm0 24. úno 07.41 ids<-- bad (sanlock
> cannot write, other can read)
> > -rw-rw 1 vdsm kvm  16M  7. lis 00.14 inbox
> > -rw-rw 1 vdsm kvm 2,0M  7. lis 03.56 leases
> > -rw-r--r-- 1 vdsm kvm  333  7. lis 03.56 metadata
> > -rw-rw 1 vdsm kvm  16M  7. lis 00.14 outbox
> >
> >
> /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P2/88adbd49-62d6-45b1-9992-b04464a04112/dom_md:
> > total 1,1M
> > -rw-r--r-- 1 vdsm kvm0 24. úno 07.43 ids<-- bad (sanlock
> cannot write, other can read)
> > -rw-rw 1 vdsm kvm  16M  7. lis 00.15 inbox
> > -rw-rw 1 vdsm kvm 2,0M  7. lis 22.14 leases
> > -rw-r--r-- 1 vdsm kvm  333  7. lis 22.14 metadata
> > -rw-rw 1 vdsm kvm  16M  7. lis 00.15 outbox
> >
> >
> /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P3/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md:
> > total 1,1M
> > -rw-r--r-- 1 vdsm kvm0 24. úno 07.43 ids<-- bad (sanlock
> cannot write, other can read)
> > -rw-rw 1 vdsm kvm  16M 23. úno 22.51 inbox
> > -rw-rw 1 vdsm kvm 2,0M 23. úno 23.12 leases
> > -rw-r--r-- 1 vdsm kvm  998 25. úno 00.35 metadata
> > -rw-rw 1 vdsm kvm  16M  7. lis 00.16 outbox
> >
> >
> /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P4/7f52b697-c199-4f58-89aa-102d44327124/dom_md:
> > total 1,1M
> > -rw-r--r-- 1 vdsm kvm0 24. úno 07.44 ids<-- bad (sanlock
> cannot write, other can read)
> > -rw-rw 1 vdsm kvm  16M  7. lis 00.17 inbox
> > -rw-rw 1 vdsm kvm 2,0M  7. lis 00.18 leases
> > -rw-r--r-- 1 vdsm kvm  333  7. lis 00.18 metadata
> > -rw-rw 1 vdsm kvm  16M  7. lis 00.17 outbox
> >
> >
> /rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P1/42d710a9-b844-43dc-be41-77002d1cd553/dom_md:
> > total 1,1M
> > -rw-rw-r-- 1 vdsm kvm0 24. úno 07.32 ids<-- bad (other can
> read)
> > -rw-rw 1 vdsm kvm  16M  7. lis 22.18 inbox
> > -rw-rw 1 vdsm kvm 2,0M  7. lis 22.18 leases
> > -rw-r--r-- 1 vdsm kvm  333  7. lis 22.18 metadata
> > -rw-rw 1 vdsm kvm  16M  7. lis 22.18 outbox
> >
> >
> /rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P2/ff71b47b-0f72-4528-9bfe-c3da888e47f0/dom_md:
> > total 3,0M
> > -rw-rw-r-- 1 vdsm kvm 1,0M  1. bře 21.28 ids<-- bad (other can
> read)
> > -rw

Re: [ovirt-users] [Gluster-users] open error -13 = sanlock

2016-03-02 Thread p...@email.cz

UPDATE:

all "ids"  file have permittion fixed to 660 now

#  find /STORAGES -name ids -exec ls -l {} \;
-rw-rw---- 2 vdsm kvm 0 24. úno 07.41
/STORAGES/g1r5p1/GFS/553d9b92-e4a0-4042-a579-4cabeb55ded4/dom_md/ids
-rw-rw---- 2 vdsm kvm 0 24. úno 07.43
/STORAGES/g1r5p2/GFS/88adbd49-62d6-45b1-9992-b04464a04112/dom_md/ids
-rw-rw---- 2 vdsm kvm 0 24. úno 07.43
/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
-rw-rw---- 2 vdsm kvm 0 24. úno 07.44
/STORAGES/g1r5p4/GFS/7f52b697-c199-4f58-89aa-102d44327124/dom_md/ids
-rw-rw---- 2 vdsm kvm 1048576 24. úno 13.03
/STORAGES/g1r5p5/GFS/3b24d023-fd35-4666-af2f-f5e1d19531ad/dom_md/ids
-rw-rw---- 2 vdsm kvm 1048576  2. bře 17.47
/STORAGES/g2r5p1/GFS/0fcad888-d573-47be-bef3-0bc0b7a99fb7/dom_md/ids


SPM is and was running continuously ...

I tried to update "ids" file - ONLINE  ( offline not possible yet )
# sanlock direct init -s 
3c34ad63-6c66-4e23-ab46-084f3d70b147:0:/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids:0


#  find /STORAGES -name ids -exec ls -l {} \; | grep p3
-rw-rw---- 1 vdsm kvm 1048576  2. bře 18.32
/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids


The storage ids file has correct permissions, size and owners, but it is not
being checked by sanlock = the same access time.

What's wrong ??

regs.
Pa.
PS:  # find /STORAGES -samefile 
/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids -print

/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
= missing "shadowfile" in " .gluster " dir.
How can I fix it ?? - online !



On 2.3.2016 08:16, Ravishankar N wrote:

On 03/02/2016 12:02 PM, Sahina Bose wrote:



On 03/02/2016 03:45 AM, Nir Soffer wrote:

On Tue, Mar 1, 2016 at 10:51 PM, p...@email.cz  wrote:
>
> HI,
> requested output:
>
> # ls -lh /rhev/data-center/mnt/glusterSD/localhost:*/*/dom_md
>
> 
/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-BCK/0fcad888-d573-47be-bef3-0bc0b7a99fb7/dom_md:

> total 2,1M
> -rw-rw 1 vdsm kvm 1,0M  1. bře 21.28 ids  <-- good
> -rw-rw 1 vdsm kvm  16M  7. lis 22.16 inbox
> -rw-rw 1 vdsm kvm 2,0M  7. lis 22.17 leases
> -rw-r--r-- 1 vdsm kvm  335  7. lis 22.17 metadata
> -rw-rw 1 vdsm kvm  16M  7. lis 22.16 outbox
>
> 
/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P1/553d9b92-e4a0-4042-a579-4cabeb55ded4/dom_md:

> total 1,1M
> -rw-r--r-- 1 vdsm kvm0 24. úno 07.41 ids  <-- bad (sanlock 
cannot write, other can read)

> -rw-rw 1 vdsm kvm  16M  7. lis 00.14 inbox
> -rw-rw 1 vdsm kvm 2,0M  7. lis 03.56 leases
> -rw-r--r-- 1 vdsm kvm  333  7. lis 03.56 metadata
> -rw-rw 1 vdsm kvm  16M  7. lis 00.14 outbox
>
> 
/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P2/88adbd49-62d6-45b1-9992-b04464a04112/dom_md:

> total 1,1M
> -rw-r--r-- 1 vdsm kvm0 24. úno 07.43 ids  <-- bad (sanlock 
cannot write, other can read)

> -rw-rw 1 vdsm kvm  16M  7. lis 00.15 inbox
> -rw-rw 1 vdsm kvm 2,0M  7. lis 22.14 leases
> -rw-r--r-- 1 vdsm kvm  333  7. lis 22.14 metadata
> -rw-rw 1 vdsm kvm  16M  7. lis 00.15 outbox
>
> 
/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P3/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md:

> total 1,1M
> -rw-r--r-- 1 vdsm kvm0 24. úno 07.43 ids  <-- bad (sanlock 
cannot write, other can read)

> -rw-rw 1 vdsm kvm  16M 23. úno 22.51 inbox
> -rw-rw 1 vdsm kvm 2,0M 23. úno 23.12 leases
> -rw-r--r-- 1 vdsm kvm  998 25. úno 00.35 metadata
> -rw-rw 1 vdsm kvm  16M  7. lis 00.16 outbox
>
> 
/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P4/7f52b697-c199-4f58-89aa-102d44327124/dom_md:

> total 1,1M
> -rw-r--r-- 1 vdsm kvm0 24. úno 07.44 ids  <-- bad (sanlock 
cannot write, other can read)

> -rw-rw 1 vdsm kvm  16M  7. lis 00.17 inbox
> -rw-rw 1 vdsm kvm 2,0M  7. lis 00.18 leases
> -rw-r--r-- 1 vdsm kvm  333  7. lis 00.18 metadata
> -rw-rw 1 vdsm kvm  16M  7. lis 00.17 outbox
>
> 
/rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P1/42d710a9-b844-43dc-be41-77002d1cd553/dom_md:

> total 1,1M
> -rw-rw-r-- 1 vdsm kvm0 24. úno 07.32 ids  <-- bad (other can read)
> -rw-rw 1 vdsm kvm  16M  7. lis 22.18 inbox
> -rw-rw 1 vdsm kvm 2,0M  7. lis 22.18 leases
> -rw-r--r-- 1 vdsm kvm  333  7. lis 22.18 metadata
> -rw-rw 1 vdsm kvm  16M  7. lis 22.18 outbox
>
> 
/rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P2/ff71b47b-0f72-4528-9bfe-c3da888e47f0/dom_md:

> total 3,0M
> -rw-rw-r-- 1 vdsm kvm 1,0M  1. bře 21.28 ids  <-- bad (other can read)
> -rw-rw 1 vdsm kvm  16M 25. úno 00.42 inbox
> -rw-rw 1 vdsm kvm 2,0M 25. úno 00.44 leases
> -rw-r--r-- 1 vdsm kvm  997 24. úno 02.46 metadata
> -rw-rw 1 vdsm kvm  16M 25. úno 00.44 outbox
>
> 
/rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P3/ef010d08-aed1-41c4-ba9a-e6d9bdecb4b4/dom_md:

> total 2,1M
> -rw-r--r-- 1 vdsm kvm0 24. úno 07.34 ids  <-- bad (sanlock 
cannot write, other can read)

> -rw-rw 1 vdsm kvm  16M 23. úno 22.35 inbox
> -rw-rw 1 vdsm kvm 2,0M 23. úno 22.38 lea

Re: [ovirt-users] [Gluster-users] open error -13 = sanlock

2016-03-02 Thread p...@email.cz

Yes we have had "ids" split brains + some other VM's files
Split brains was fixed by healing with preffered ( source ) brick.

eg: " # gluster volume heal 1KVM12-P1 split-brain source-brick 
16.0.0.161:/STORAGES/g1r5p1/GFS "
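For context, a hedged example of how the affected files can be listed before choosing a source brick (same volume name as above):

# gluster volume heal 1KVM12-P1 info split-brain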


Pavel


Okay, so what I understand from the output above is you have different 
gluster volumes mounted and some of them have incorrect permissions 
for the 'ids' file. The way to fix it is to do it from the mount like 
Nir said.
Why did you delete the file from the .glusterfs in the brick(s)? Was 
there a gfid split brain?


-Ravi





Re: [ovirt-users] [Gluster-users] open error -13 = sanlock

2016-03-02 Thread p...@email.cz

Hi guys,
thanks a lot for your support, first of all.

Because we had been under huge time pressure, we found a "google
workaround" which deletes both files. It helped, probably, in the first
steps of the recovery,
e.g.: " #  find /STORAGES/g1r5p5/GFS/ -samefile
/STORAGES/g1r5p5/GFS/3da46e07-d1ea-4f10-9250-6cbbb7b94d80/dom_md/ids
-print -delete "
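As a hedged aside: the matching .glusterfs hard link for a file can be located from its trusted.gfid xattr on the brick (the gfid maps to .glusterfs/xx/yy/<gfid> under the brick root), e.g.:

# getfattr -n trusted.gfid -e hex /STORAGES/g1r5p5/GFS/3da46e07-d1ea-4f10-9250-6cbbb7b94d80/dom_md/ids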


-->
Well, at first I'll fix the permissions from the mount points to 660.
If the "ids" file becomes writable, can't that cause gluster to collapse ??

regs.Pavel


On 2.3.2016 08:16, Ravishankar N wrote:

On 03/02/2016 12:02 PM, Sahina Bose wrote:



On 03/02/2016 03:45 AM, Nir Soffer wrote:

On Tue, Mar 1, 2016 at 10:51 PM, p...@email.cz  wrote:
>
> HI,
> requested output:
>
> # ls -lh /rhev/data-center/mnt/glusterSD/localhost:*/*/dom_md
>
> 
/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-BCK/0fcad888-d573-47be-bef3-0bc0b7a99fb7/dom_md:

> total 2,1M
> -rw-rw 1 vdsm kvm 1,0M  1. bře 21.28 ids  <-- good
> -rw-rw 1 vdsm kvm  16M  7. lis 22.16 inbox
> -rw-rw 1 vdsm kvm 2,0M  7. lis 22.17 leases
> -rw-r--r-- 1 vdsm kvm  335  7. lis 22.17 metadata
> -rw-rw 1 vdsm kvm  16M  7. lis 22.16 outbox
>
> 
/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P1/553d9b92-e4a0-4042-a579-4cabeb55ded4/dom_md:

> total 1,1M
> -rw-r--r-- 1 vdsm kvm0 24. úno 07.41 ids  <-- bad (sanlock 
cannot write, other can read)

> -rw-rw 1 vdsm kvm  16M  7. lis 00.14 inbox
> -rw-rw 1 vdsm kvm 2,0M  7. lis 03.56 leases
> -rw-r--r-- 1 vdsm kvm  333  7. lis 03.56 metadata
> -rw-rw 1 vdsm kvm  16M  7. lis 00.14 outbox
>
> 
/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P2/88adbd49-62d6-45b1-9992-b04464a04112/dom_md:

> total 1,1M
> -rw-r--r-- 1 vdsm kvm0 24. úno 07.43 ids  <-- bad (sanlock 
cannot write, other can read)

> -rw-rw 1 vdsm kvm  16M  7. lis 00.15 inbox
> -rw-rw 1 vdsm kvm 2,0M  7. lis 22.14 leases
> -rw-r--r-- 1 vdsm kvm  333  7. lis 22.14 metadata
> -rw-rw 1 vdsm kvm  16M  7. lis 00.15 outbox
>
> 
/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P3/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md:

> total 1,1M
> -rw-r--r-- 1 vdsm kvm0 24. úno 07.43 ids  <-- bad (sanlock 
cannot write, other can read)

> -rw-rw 1 vdsm kvm  16M 23. úno 22.51 inbox
> -rw-rw 1 vdsm kvm 2,0M 23. úno 23.12 leases
> -rw-r--r-- 1 vdsm kvm  998 25. úno 00.35 metadata
> -rw-rw 1 vdsm kvm  16M  7. lis 00.16 outbox
>
> 
/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P4/7f52b697-c199-4f58-89aa-102d44327124/dom_md:

> total 1,1M
> -rw-r--r-- 1 vdsm kvm0 24. úno 07.44 ids  <-- bad (sanlock 
cannot write, other can read)

> -rw-rw 1 vdsm kvm  16M  7. lis 00.17 inbox
> -rw-rw 1 vdsm kvm 2,0M  7. lis 00.18 leases
> -rw-r--r-- 1 vdsm kvm  333  7. lis 00.18 metadata
> -rw-rw 1 vdsm kvm  16M  7. lis 00.17 outbox
>
> 
/rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P1/42d710a9-b844-43dc-be41-77002d1cd553/dom_md:

> total 1,1M
> -rw-rw-r-- 1 vdsm kvm0 24. úno 07.32 ids  <-- bad (other can read)
> -rw-rw 1 vdsm kvm  16M  7. lis 22.18 inbox
> -rw-rw 1 vdsm kvm 2,0M  7. lis 22.18 leases
> -rw-r--r-- 1 vdsm kvm  333  7. lis 22.18 metadata
> -rw-rw 1 vdsm kvm  16M  7. lis 22.18 outbox
>
> 
/rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P2/ff71b47b-0f72-4528-9bfe-c3da888e47f0/dom_md:

> total 3,0M
> -rw-rw-r-- 1 vdsm kvm 1,0M  1. bře 21.28 ids  <-- bad (other can read)
> -rw-rw 1 vdsm kvm  16M 25. úno 00.42 inbox
> -rw-rw 1 vdsm kvm 2,0M 25. úno 00.44 leases
> -rw-r--r-- 1 vdsm kvm  997 24. úno 02.46 metadata
> -rw-rw 1 vdsm kvm  16M 25. úno 00.44 outbox
>
> 
/rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P3/ef010d08-aed1-41c4-ba9a-e6d9bdecb4b4/dom_md:

> total 2,1M
> -rw-r--r-- 1 vdsm kvm0 24. úno 07.34 ids  <-- bad (sanlock 
cannot write, other can read)

> -rw-rw 1 vdsm kvm  16M 23. úno 22.35 inbox
> -rw-rw 1 vdsm kvm 2,0M 23. úno 22.38 leases
> -rw-r--r-- 1 vdsm kvm 1,1K 24. úno 19.07 metadata
> -rw-rw 1 vdsm kvm  16M 23. úno 22.27 outbox
>
> 
/rhev/data-center/mnt/glusterSD/localhost:_2KVM12__P4/300e9ac8-3c2f-4703-9bb1-1df2130c7c97/dom_md:

> total 3,0M
> -rw-rw-r-- 1 vdsm kvm 1,0M  1. bře 21.28 ids  <-- bad (other can read)
> -rw-rw-r-- 1 vdsm kvm  16M  6. lis 23.50 inbox  <-- bad (other can 
read)
> -rw-rw-r-- 1 vdsm kvm 2,0M  6. lis 23.51 leases  <-- bad 
(other can read)
> -rw-rw-r-- 1 vdsm kvm  734  7. lis 02.13 metadata<-- bad 
(group can write, other can read)
> -rw-rw-r-- 1 vdsm kvm  16M  6. lis 16.55 outbox  <-- bad (other 
can read)

>
> 
/rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P5/1ca56b45-701e-4c22-9f59-3aebea4d8477/dom_md:

> total 1,1M
> -rw-rw-r-- 1 vdsm kvm0 24. úno 07.35 ids  <-- bad (other can read)
> -rw-rw-r-- 1 vdsm kvm  16M 24. úno 01.06 inbox
> -rw-rw-r-- 1 vdsm kvm 2,0M 24. úno 02.44 leases
> -rw-r--r-- 1 vdsm kvm  998 24. úno 19.07 metadata
> -rw-rw-r-- 1 vdsm kvm  16M  7. lis 22.20 outbox


It should look like this:

-rw-rw. 1 vdsm kvm 1.0M Ma

Re: [ovirt-users] [Gluster-users] open error -13 = sanlock

2016-03-01 Thread Ravishankar N

On 03/02/2016 12:02 PM, Sahina Bose wrote:



On 03/02/2016 03:45 AM, Nir Soffer wrote:
On Tue, Mar 1, 2016 at 10:51 PM, p...@email.cz wrote:

>
> HI,
> requested output:
>
> # ls -lh /rhev/data-center/mnt/glusterSD/localhost:*/*/dom_md
>
> 
/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-BCK/0fcad888-d573-47be-bef3-0bc0b7a99fb7/dom_md:

> total 2,1M
> -rw-rw 1 vdsm kvm 1,0M  1. bře 21.28 ids  <-- good
> -rw-rw 1 vdsm kvm  16M  7. lis 22.16 inbox
> -rw-rw 1 vdsm kvm 2,0M  7. lis 22.17 leases
> -rw-r--r-- 1 vdsm kvm  335  7. lis 22.17 metadata
> -rw-rw 1 vdsm kvm  16M  7. lis 22.16 outbox
>
> 
/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P1/553d9b92-e4a0-4042-a579-4cabeb55ded4/dom_md:

> total 1,1M
> -rw-r--r-- 1 vdsm kvm0 24. úno 07.41 ids  <-- bad (sanlock 
cannot write, other can read)

> -rw-rw 1 vdsm kvm  16M  7. lis 00.14 inbox
> -rw-rw 1 vdsm kvm 2,0M  7. lis 03.56 leases
> -rw-r--r-- 1 vdsm kvm  333  7. lis 03.56 metadata
> -rw-rw 1 vdsm kvm  16M  7. lis 00.14 outbox
>
> 
/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P2/88adbd49-62d6-45b1-9992-b04464a04112/dom_md:

> total 1,1M
> -rw-r--r-- 1 vdsm kvm0 24. úno 07.43 ids  <-- bad (sanlock 
cannot write, other can read)

> -rw-rw 1 vdsm kvm  16M  7. lis 00.15 inbox
> -rw-rw 1 vdsm kvm 2,0M  7. lis 22.14 leases
> -rw-r--r-- 1 vdsm kvm  333  7. lis 22.14 metadata
> -rw-rw 1 vdsm kvm  16M  7. lis 00.15 outbox
>
> 
/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P3/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md:

> total 1,1M
> -rw-r--r-- 1 vdsm kvm0 24. úno 07.43 ids  <-- bad (sanlock 
cannot write, other can read)

> -rw-rw 1 vdsm kvm  16M 23. úno 22.51 inbox
> -rw-rw 1 vdsm kvm 2,0M 23. úno 23.12 leases
> -rw-r--r-- 1 vdsm kvm  998 25. úno 00.35 metadata
> -rw-rw 1 vdsm kvm  16M  7. lis 00.16 outbox
>
> 
/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P4/7f52b697-c199-4f58-89aa-102d44327124/dom_md:

> total 1,1M
> -rw-r--r-- 1 vdsm kvm0 24. úno 07.44 ids  <-- bad (sanlock 
cannot write, other can read)

> -rw-rw 1 vdsm kvm  16M  7. lis 00.17 inbox
> -rw-rw 1 vdsm kvm 2,0M  7. lis 00.18 leases
> -rw-r--r-- 1 vdsm kvm  333  7. lis 00.18 metadata
> -rw-rw 1 vdsm kvm  16M  7. lis 00.17 outbox
>
> 
/rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P1/42d710a9-b844-43dc-be41-77002d1cd553/dom_md:

> total 1,1M
> -rw-rw-r-- 1 vdsm kvm0 24. úno 07.32 ids  <-- bad (other can read)
> -rw-rw 1 vdsm kvm  16M  7. lis 22.18 inbox
> -rw-rw 1 vdsm kvm 2,0M  7. lis 22.18 leases
> -rw-r--r-- 1 vdsm kvm  333  7. lis 22.18 metadata
> -rw-rw 1 vdsm kvm  16M  7. lis 22.18 outbox
>
> 
/rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P2/ff71b47b-0f72-4528-9bfe-c3da888e47f0/dom_md:

> total 3,0M
> -rw-rw-r-- 1 vdsm kvm 1,0M  1. bře 21.28 ids  <-- bad (other can read)
> -rw-rw 1 vdsm kvm  16M 25. úno 00.42 inbox
> -rw-rw 1 vdsm kvm 2,0M 25. úno 00.44 leases
> -rw-r--r-- 1 vdsm kvm  997 24. úno 02.46 metadata
> -rw-rw 1 vdsm kvm  16M 25. úno 00.44 outbox
>
> 
/rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P3/ef010d08-aed1-41c4-ba9a-e6d9bdecb4b4/dom_md:

> total 2,1M
> -rw-r--r-- 1 vdsm kvm0 24. úno 07.34 ids  <-- bad (sanlock 
cannot write, other can read)

> -rw-rw 1 vdsm kvm  16M 23. úno 22.35 inbox
> -rw-rw 1 vdsm kvm 2,0M 23. úno 22.38 leases
> -rw-r--r-- 1 vdsm kvm 1,1K 24. úno 19.07 metadata
> -rw-rw 1 vdsm kvm  16M 23. úno 22.27 outbox
>
> 
/rhev/data-center/mnt/glusterSD/localhost:_2KVM12__P4/300e9ac8-3c2f-4703-9bb1-1df2130c7c97/dom_md:

> total 3,0M
> -rw-rw-r-- 1 vdsm kvm 1,0M  1. bře 21.28 ids  <-- bad (other can read)
> -rw-rw-r-- 1 vdsm kvm  16M  6. lis 23.50 inbox  <-- bad (other can 
read)
> -rw-rw-r-- 1 vdsm kvm 2,0M  6. lis 23.51 leases<-- bad (other 
can read)
> -rw-rw-r-- 1 vdsm kvm  734  7. lis 02.13 metadata  <-- bad (group 
can write, other can read)
> -rw-rw-r-- 1 vdsm kvm  16M  6. lis 16.55 outbox  <-- bad (other can 
read)

>
> 
/rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P5/1ca56b45-701e-4c22-9f59-3aebea4d8477/dom_md:

> total 1,1M
> -rw-rw-r-- 1 vdsm kvm0 24. úno 07.35 ids  <-- bad (other can read)
> -rw-rw-r-- 1 vdsm kvm  16M 24. úno 01.06 inbox
> -rw-rw-r-- 1 vdsm kvm 2,0M 24. úno 02.44 leases
> -rw-r--r-- 1 vdsm kvm  998 24. úno 19.07 metadata
> -rw-rw-r-- 1 vdsm kvm  16M  7. lis 22.20 outbox


It should look like this:

-rw-rw----. 1 vdsm kvm 1.0M Mar  1 23:36 ids
-rw-rw----. 1 vdsm kvm 2.0M Mar  1 23:35 leases
-rw-r--r--. 1 vdsm kvm  353 Mar  1 23:35 metadata
-rw-rw----. 1 vdsm kvm  16M Mar  1 23:34 outbox
-rw-rw----. 1 vdsm kvm  16M Mar  1 23:34 inbox

This explains the EACCES error.

You can start by fixing the permissions manually; you can do this online.
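A hedged example of that kind of fix, done from the oVirt mount point rather than on the bricks (the path is one of the domains listed above):

# chown vdsm:kvm /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P1/553d9b92-e4a0-4042-a579-4cabeb55ded4/dom_md/ids
# chmod 0660 /rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P1/553d9b92-e4a0-4042-a579-4cabeb55ded4/dom_md/ids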

>  The ids files were generated by the "touch" command after deleting them
due to the "sanlock locking hang" gluster crash & reboot
> I expected that they would be filled automatically after the gluster
reboot ( the  shadow copy fro