Re: [Gluster-users] Slow write times to gluster disk

2017-08-07 Thread Soumya Koduri


- Original Message -
> From: "Pat Haley" 
> To: "Soumya Koduri" , gluster-users@gluster.org, "Pranith 
> Kumar Karampuri" 
> Cc: "Ben Turner" , "Ravishankar N" 
> , "Raghavendra Gowdappa"
> , "Niels de Vos" , "Steve Postma" 
> 
> Sent: Monday, August 7, 2017 9:52:48 PM
> Subject: Re: [Gluster-users] Slow write times to gluster disk
> 
> 
> Hi Soumya,
> 
> We just had the opportunity to try the option of disabling the
> kernel-NFS and restarting glusterd to start gNFS.  However the gluster
> daemon crashes immediately on startup.  What additional information
> besides what we provide below would help in debugging this?
> 

Which version of glusterfs are you using? There were a few regressions caused 
by recent changes in the mount codepath (all fixed now, at least in the master branch).

Request Niels to comment.

Thanks,
Soumya


> Thanks,
> 
> Pat
> 
> 
>  Forwarded Message 
> Subject:  gluster-nfs crashing on start
> Date: Mon, 7 Aug 2017 16:05:09 +
> From: Steve Postma 
> To:   Pat Haley 
> 
> 
> 
> *To disable kernel-NFS and enable NFS through Gluster we:*
> 
> 
> gluster volume set data-volume nfs.export-volumes on
> gluster volume set data-volume nfs.disable off
> 
> /etc/init.d/glusterd stop
> 
> 
> service nfslock stop
> 
> service rpcgssd stop
> 
> service rpcidmapd stop
> 
> service portmap stop
> 
> service nfs stop
> 
> 
> /etc/init.d/glusterd stop
> 
> 
> 
> 
> *the /var/log/glusterfs/nfs.log immediately reports a crash:*
> 
> 
> [root@mseas-data2 glusterfs]# cat nfs.log
> 
> [2017-08-07 15:20:16.327026] I [MSGID: 100030] [glusterfsd.c:2332:main]
> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version
> 3.7.11 (args: /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs
> -p /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S
> /var/run/gluster/7db74f19472511d20849e471bf224c1a.socket)
> 
> [2017-08-07 15:20:16.345166] I [MSGID: 101190]
> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread
> with index 1
> 
> [2017-08-07 15:20:16.351290] I
> [rpcsvc.c:2215:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service:
> Configured rpc.outstanding-rpc-limit with value 16
> 
> pending frames:
> 
> frame : type(0) op(0)
> 
> patchset: git://git.gluster.com/glusterfs.git
> 
> signal received: 11
> 
> time of crash:
> 
> 2017-08-07 15:20:17
> 
> configuration details:
> 
> argp 1
> 
> backtrace 1
> 
> dlfcn 1
> 
> libpthread 1
> 
> llistxattr 1
> 
> setfsid 1
> 
> spinlock 1
> 
> epoll.h 1
> 
> xattr.h 1
> 
> st_atim.tv_nsec 1
> 
> package-string: glusterfs 3.7.11
> 
> /usr/lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0xb8)[0x3889625a18]
> 
> /usr/lib64/libglusterfs.so.0(gf_print_trace+0x32f)[0x38896456af]
> 
> /lib64/libc.so.6[0x34a1c32660]
> 
> /lib64/libc.so.6[0x34a1d3382f]
> 
> /usr/lib64/glusterfs/3.7.11/xlator/nfs/server.so(+0x53307)[0x7f8d071b3307]
> 
> /usr/lib64/glusterfs/3.7.11/xlator/nfs/server.so(exp_file_parse+0x302)[0x7f8d071b3742]
> 
> /usr/lib64/glusterfs/3.7.11/xlator/nfs/server.so(mnt3_auth_set_exports_auth+0x45)[0x7f8d071b47a5]
> 
> /usr/lib64/glusterfs/3.7.11/xlator/nfs/server.so(_mnt3_init_auth_params+0x91)[0x7f8d07183e41]
> 
> /usr/lib64/glusterfs/3.7.11/xlator/nfs/server.so(mnt3svc_init+0x218)[0x7f8d07184228]
> 
> /usr/lib64/glusterfs/3.7.11/xlator/nfs/server.so(nfs_init_versions+0xd7)[0x7f8d07174a37]
> 
> /usr/lib64/glusterfs/3.7.11/xlator/nfs/server.so(init+0x77)[0x7f8d071767c7]
> 
> /usr/lib64/libglusterfs.so.0(xlator_init+0x52)[0x3889622a82]
> 
> /usr/lib64/libglusterfs.so.0(glusterfs_graph_init+0x31)[0x3889669aa1]
> 
> /usr/lib64/libglusterfs.so.0(glusterfs_graph_activate+0x57)[0x3889669bd7]
> 
> /usr/sbin/glusterfs(glusterfs_process_volfp+0xed)[0x405c0d]
> 
> /usr/sbin/glusterfs(mgmt_getspec_cbk+0x312)[0x40dbd2]
> 
> /usr/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0xa5)[0x3889e0f7b5]
> 
> /usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x1a1)[0x3889e10891]
> 
> /usr/lib64/libgfrpc.so.0(rpc_transport_notify+0x28)[0x3889e0bbd8]
> 
> /usr/lib64/glusterfs/3.7.11/rpc-transport/socket.so(+0x94cd)[0x7f8d088e04cd]
> 
> /usr/lib64/glusterfs/3.7.11/rpc-transport/socket.so(+0xa79d)[0x7f8d088e179d]
> 
> /usr/lib64/libglusterfs.so.0[0x388968b2f0]
> 
> /lib64/libpthread.so.0[0x34a2007aa1]
> 
> /lib64/libc.so.6(clone+0x6d)[0x34a1ce8aad]
> 
> 
> 
> 
> 
> 
> [root@mseas-data2 glusterfs]# gluster volume info
> 
> Volume Name: data-volume
> 
> Type: Distribute
> 
> Volume ID: c162161e-2a2d-4dac-b015-f31fd89ceb18
> 
> Status: Started
> 
> Number of Bricks: 2
> 
> Transport-type: tcp
> 
> Bricks:
> 
> Brick1: mseas-data2:/mnt/brick1
> 
> Brick2: mseas-data2:/mnt/brick2
> 
> Options Reconfigured:
> 
> nfs.export-volumes: on
> 
> nfs.disable: off
> 
> performance.readdir-ahead: on
> 
> diagnostics.brick-sys-log-level: WARNING

Re: [Gluster-users] How to delete geo-replication session?

2017-08-07 Thread Aravinda
Do you see any session listed when the Geo-replication status command is 
run (without any volume name)?


gluster volume geo-replication status

Volume stop force should work even if a Geo-replication session exists. 
From the error it looks like the node "arbiternode.domain.tld" in the Master 
cluster is down or not reachable.
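
For reference, a minimal sketch of those two checks (the volume name "myvolume" is taken from the original report below; adjust as needed):

# list all geo-replication sessions known to the cluster (no volume name given)
gluster volume geo-replication status

# stopping the volume should still go through even with a session present
gluster volume stop myvolume force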


regards
Aravinda VK

On 08/07/2017 10:01 PM, mabi wrote:

Hi,

I would really like to get rid of this geo-replication session as I am 
stuck with it right now. For example I can't even stop my volume as it 
complains about that geo-replication...


Can someone let me know how I can delete it?

Thanks




 Original Message 
Subject: How to delete geo-replication session?
Local Time: August 1, 2017 12:15 PM
UTC Time: August 1, 2017 10:15 AM
From: m...@protonmail.ch
To: Gluster Users 

Hi,

I would like to delete a geo-replication session on my GlusterFS 
3.8.11 replica 2 volume in order to re-create it. Unfortunately the 
"delete" command does not work as you can see below:


$ sudo gluster volume geo-replication myvolume 
gfs1geo.domain.tld::myvolume-geo delete


Staging failed on arbiternode.domain.tld. Error: Geo-replication 
session between myvolume and arbiternode.domain.tld::myvolume-geo 
does not exist.

geo-replication command failed

I also tried with "force" but no luck here either:

$ sudo gluster volume geo-replication myvolume 
gfs1geo.domain.tld::myvolume-geo delete force


Usage: volume geo-replication [<VOLNAME>] [<SLAVE-URL>] {create 
[[ssh-port n] [[no-verify]|[push-pem]]] [force]|start [force]|stop 
[force]|pause [force]|resume [force]|config|status [detail]|delete 
[reset-sync-time]} [options...]




So how can I delete my geo-replication session manually?

Mind that I do not want to reset-sync-time, I would like to delete it 
and re-create it so that it continues to geo-replicate from where it 
left off.


Thanks,
M.




___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] How to delete geo-replication session?

2017-08-07 Thread mabi
Hi,

I would really like to get rid of this geo-replication session as I am stuck 
with it right now. For example I can't even stop my volume as it complains 
about that geo-replication...
Can someone let me know how I can delete it?
Thanks

>  Original Message 
> Subject: How to delete geo-replication session?
> Local Time: August 1, 2017 12:15 PM
> UTC Time: August 1, 2017 10:15 AM
> From: m...@protonmail.ch
> To: Gluster Users 
> Hi,
> I would like to delete a geo-replication session on my GlusterFS 3.8.11 
> replica 2 volume in order to re-create it. Unfortunately the "delete" 
> command does not work as you can see below:
> $ sudo gluster volume geo-replication myvolume 
> gfs1geo.domain.tld::myvolume-geo delete
> Staging failed on arbiternode.domain.tld. Error: Geo-replication session 
> between myvolume and arbiternode.domain.tld::myvolume-geo does not exist.
> geo-replication command failed
> I also tried with "force" but no luck here either:
> $ sudo gluster volume geo-replication myvolume 
> gfs1geo.domain.tld::myvolume-geo delete force
> Usage: volume geo-replication [<VOLNAME>] [<SLAVE-URL>] {create [[ssh-port n] 
> [[no-verify]|[push-pem]]] [force]|start [force]|stop [force]|pause 
> [force]|resume [force]|config|status [detail]|delete [reset-sync-time]} 
> [options...]
> So how can I delete my geo-replication session manually?
> Mind that I do not want to reset-sync-time, I would like to delete it and 
> re-create it so that it continues to geo-replicate from where it left off.
> Thanks,
> M.___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Slow write times to gluster disk

2017-08-07 Thread Pat Haley


Hi Soumya,

We just had the opportunity to try the option of disabling the 
kernel-NFS and restarting glusterd to start gNFS.  However the gluster 
daemon crashes immediately on startup.  What additional information 
besides what we provide below would help in debugging this?


Thanks,

Pat


 Forwarded Message 
Subject:gluster-nfs crashing on start
Date:   Mon, 7 Aug 2017 16:05:09 +
From:   Steve Postma 
To: Pat Haley 



*To disable kernel-NFS and enable NFS through Gluster we:*


gluster volume set data-volume nfs.export-volumes on
gluster volume set data-volume nfs.disable off

/etc/init.d/glusterd stop


service nfslock stop

service rpcgssd stop

service rpcidmapd stop

service portmap stop

service nfs stop


/etc/init.d/glusterd stop




*the /var/log/glusterfs/nfs.log immediately reports a crash:*


[root@mseas-data2 glusterfs]# cat nfs.log

[2017-08-07 15:20:16.327026] I [MSGID: 100030] [glusterfsd.c:2332:main] 
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 
3.7.11 (args: /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs 
-p /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S 
/var/run/gluster/7db74f19472511d20849e471bf224c1a.socket)


[2017-08-07 15:20:16.345166] I [MSGID: 101190] 
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread 
with index 1


[2017-08-07 15:20:16.351290] I 
[rpcsvc.c:2215:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: 
Configured rpc.outstanding-rpc-limit with value 16


pending frames:

frame : type(0) op(0)

patchset: git://git.gluster.com/glusterfs.git

signal received: 11

time of crash:

2017-08-07 15:20:17

configuration details:

argp 1

backtrace 1

dlfcn 1

libpthread 1

llistxattr 1

setfsid 1

spinlock 1

epoll.h 1

xattr.h 1

st_atim.tv_nsec 1

package-string: glusterfs 3.7.11

/usr/lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0xb8)[0x3889625a18]

/usr/lib64/libglusterfs.so.0(gf_print_trace+0x32f)[0x38896456af]

/lib64/libc.so.6[0x34a1c32660]

/lib64/libc.so.6[0x34a1d3382f]

/usr/lib64/glusterfs/3.7.11/xlator/nfs/server.so(+0x53307)[0x7f8d071b3307]

/usr/lib64/glusterfs/3.7.11/xlator/nfs/server.so(exp_file_parse+0x302)[0x7f8d071b3742]

/usr/lib64/glusterfs/3.7.11/xlator/nfs/server.so(mnt3_auth_set_exports_auth+0x45)[0x7f8d071b47a5]

/usr/lib64/glusterfs/3.7.11/xlator/nfs/server.so(_mnt3_init_auth_params+0x91)[0x7f8d07183e41]

/usr/lib64/glusterfs/3.7.11/xlator/nfs/server.so(mnt3svc_init+0x218)[0x7f8d07184228]

/usr/lib64/glusterfs/3.7.11/xlator/nfs/server.so(nfs_init_versions+0xd7)[0x7f8d07174a37]

/usr/lib64/glusterfs/3.7.11/xlator/nfs/server.so(init+0x77)[0x7f8d071767c7]

/usr/lib64/libglusterfs.so.0(xlator_init+0x52)[0x3889622a82]

/usr/lib64/libglusterfs.so.0(glusterfs_graph_init+0x31)[0x3889669aa1]

/usr/lib64/libglusterfs.so.0(glusterfs_graph_activate+0x57)[0x3889669bd7]

/usr/sbin/glusterfs(glusterfs_process_volfp+0xed)[0x405c0d]

/usr/sbin/glusterfs(mgmt_getspec_cbk+0x312)[0x40dbd2]

/usr/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0xa5)[0x3889e0f7b5]

/usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x1a1)[0x3889e10891]

/usr/lib64/libgfrpc.so.0(rpc_transport_notify+0x28)[0x3889e0bbd8]

/usr/lib64/glusterfs/3.7.11/rpc-transport/socket.so(+0x94cd)[0x7f8d088e04cd]

/usr/lib64/glusterfs/3.7.11/rpc-transport/socket.so(+0xa79d)[0x7f8d088e179d]

/usr/lib64/libglusterfs.so.0[0x388968b2f0]

/lib64/libpthread.so.0[0x34a2007aa1]

/lib64/libc.so.6(clone+0x6d)[0x34a1ce8aad]






[root@mseas-data2 glusterfs]# gluster volume info

Volume Name: data-volume

Type: Distribute

Volume ID: c162161e-2a2d-4dac-b015-f31fd89ceb18

Status: Started

Number of Bricks: 2

Transport-type: tcp

Bricks:

Brick1: mseas-data2:/mnt/brick1

Brick2: mseas-data2:/mnt/brick2

Options Reconfigured:

nfs.export-volumes: on

nfs.disable: off

performance.readdir-ahead: on

diagnostics.brick-sys-log-level: WARNING

nfs.exports-auth-enable: on



"/var/lib/glusterd/nfs/exports"

/gdata 172.16.1.0/255.255.255.0(rw)




*What else can we do to identify why this is failing?*
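
One low-effort way to gather more detail is to run the gluster NFS process by hand in the foreground with debug logging; this is only a sketch, with the -s/--volfile-id arguments copied from the startup line in nfs.log above:

# run the gluster/nfs graph in the foreground with debug logging
# (-N prevents daemonizing; arguments taken from the nfs.log startup line)
/usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs \
    -N --log-level=DEBUG -l /dev/stdout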

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] gluster stuck when trying to list a successful mount

2017-08-07 Thread Ilan Schwarts
Hi all,
My infrastructure is GlusterFS with 2 Nodes:
L137B-GlusterFS-Node1.L137B-root.com
L137B-GlusterFS-Node2.L137B-root.com


I have followed this guide:
http://www.itzgeek.com/how-tos/linux/centos-how-tos/install-and-configure-glusterfs-on-centos-7-rhel-7.html/2

I created a device, formatted it as ext4, and mounted it.
The next step was to install GlusterFS.
I installed glusterfs 3.10.3, created a volume and started it:
[root@L137B-GlusterFS-Node2 someuser]# gluster volume info

Volume Name: gv0
Type: Replicate
Volume ID: a606f77b-c0df-427c-99ec-cee98b3ecd71
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: L137B-GlusterFS-Node1.L137B-root.com:/mnt/gluster/gv0
Brick2: L137B-GlusterFS-Node2.L137B-root.com:/mnt/gluster/gv0
Options Reconfigured:
transport.address-family: inet
nfs.disable: on


Now when I try to mount to Node1 from Node2, with the command:
mount -t glusterfs L137B-GlusterFS-Node2.L137B-root.com:/gv0 /mnt

L137B-GlusterFS-Node2.L137B-root.com:/gv0 on /mnt type fuse.glusterfs
(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)



It returns OK with no errors, but when I do "ls -la /mnt" the console gets stuck.
In "dmesg -T" I get (from a duplicate session on another console):
[Mon Aug  7 17:05:58 2017] fuse init (API version 7.22)
[Mon Aug  7 17:09:32 2017] INFO: task glusterfsd:3273 blocked for more
than 120 seconds.
[Mon Aug  7 17:09:32 2017] "echo 0 >
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Mon Aug  7 17:09:32 2017] glusterfsd  D 880092e380b0 0
3273  1 0x0080
[Mon Aug  7 17:09:32 2017]  8800a1e3bb70 0086
880139b7af10 8800a1e3bfd8
[Mon Aug  7 17:09:32 2017]  8800a1e3bfd8 8800a1e3bfd8
880139b7af10 880092e380a8
[Mon Aug  7 17:09:32 2017]  880092e380ac 880139b7af10
 880092e380b0
[Mon Aug  7 17:09:32 2017] Call Trace:
[Mon Aug  7 17:09:32 2017]  []
schedule_preempt_disabled+0x29/0x70
[Mon Aug  7 17:09:32 2017]  []
__mutex_lock_slowpath+0xc5/0x1c0
[Mon Aug  7 17:09:32 2017]  [] ? unlazy_walk+0x87/0x140
[Mon Aug  7 17:09:32 2017]  [] mutex_lock+0x1f/0x2f
[Mon Aug  7 17:09:32 2017]  [] lookup_slow+0x33/0xa7
[Mon Aug  7 17:09:32 2017]  [] link_path_walk+0x80f/0x8b0
[Mon Aug  7 17:09:32 2017]  [] ? __remove_hrtimer+0x46/0xe0
[Mon Aug  7 17:09:32 2017]  [] path_lookupat+0x6b/0x7a0
[Mon Aug  7 17:09:32 2017]  [] ? futex_wait+0x1a3/0x280
[Mon Aug  7 17:09:32 2017]  [] ? kmem_cache_alloc+0x35/0x1e0
[Mon Aug  7 17:09:32 2017]  [] ? getname_flags+0x4f/0x1a0
[Mon Aug  7 17:09:32 2017]  [] filename_lookup+0x2b/0xc0
[Mon Aug  7 17:09:32 2017]  [] user_path_at_empty+0x67/0xc0
[Mon Aug  7 17:09:32 2017]  [] ? futex_wake+0x80/0x160
[Mon Aug  7 17:09:32 2017]  [] user_path_at+0x11/0x20
[Mon Aug  7 17:09:32 2017]  [] vfs_fstatat+0x63/0xc0
[Mon Aug  7 17:09:32 2017]  [] SYSC_newlstat+0x31/0x60
[Mon Aug  7 17:09:32 2017]  [] ? SyS_futex+0x80/0x180
[Mon Aug  7 17:09:32 2017]  [] ?
__audit_syscall_exit+0x1e6/0x280
[Mon Aug  7 17:09:32 2017]  [] SyS_newlstat+0xe/0x10
[Mon Aug  7 17:09:32 2017]  [] system_call_fastpath+0x16/0x1b
[Mon Aug  7 17:11:32 2017] INFO: task glusterfsd:3273 blocked for more
than 120 seconds.
[Mon Aug  7 17:11:32 2017] "echo 0 >
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Mon Aug  7 17:11:32 2017] glusterfsd  D 880092e380b0 0
3273  1 0x0080
[Mon Aug  7 17:11:32 2017]  8800a1e3bb70 0086
880139b7af10 8800a1e3bfd8
[Mon Aug  7 17:11:32 2017]  8800a1e3bfd8 8800a1e3bfd8
880139b7af10 880092e380a8
[Mon Aug  7 17:11:32 2017]  880092e380ac 880139b7af10
 880092e380b0
[Mon Aug  7 17:11:32 2017] Call Trace:
[Mon Aug  7 17:11:32 2017]  []
schedule_preempt_disabled+0x29/0x70
[Mon Aug  7 17:11:32 2017]  []
__mutex_lock_slowpath+0xc5/0x1c0
[Mon Aug  7 17:11:32 2017]  [] ? unlazy_walk+0x87/0x140
[Mon Aug  7 17:11:32 2017]  [] mutex_lock+0x1f/0x2f
[Mon Aug  7 17:11:32 2017]  [] lookup_slow+0x33/0xa7
[Mon Aug  7 17:11:32 2017]  [] link_path_walk+0x80f/0x8b0
[Mon Aug  7 17:11:32 2017]  [] ? __remove_hrtimer+0x46/0xe0
[Mon Aug  7 17:11:32 2017]  [] path_lookupat+0x6b/0x7a0
[Mon Aug  7 17:11:32 2017]  [] ? futex_wait+0x1a3/0x280
[Mon Aug  7 17:11:32 2017]  [] ? kmem_cache_alloc+0x35/0x1e0
[Mon Aug  7 17:11:32 2017]  [] ? getname_flags+0x4f/0x1a0
[Mon Aug  7 17:11:32 2017]  [] filename_lookup+0x2b/0xc0
[Mon Aug  7 17:11:32 2017]  [] user_path_at_empty+0x67/0xc0
[Mon Aug  7 17:11:32 2017]  [] ? futex_wake+0x80/0x160
[Mon Aug  7 17:11:32 2017]  [] user_path_at+0x11/0x20
[Mon Aug  7 17:11:32 2017]  [] vfs_fstatat+0x63/0xc0
[Mon Aug  7 17:11:32 2017]  [] SYSC_newlstat+0x31/0x60
[Mon Aug  7 17:11:32 2017]  [] ? SyS_futex+0x80/0x180
[Mon Aug  7 17:11:32 2017]  [] ?
__audit_syscall_exit+0x1e6/0x280
[Mon Aug  7 17:11:32 2017]  [] SyS_newlstat+0xe/0x10
[Mon Aug  7 17:11:32 
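
A few generic checks that might help narrow down where the hang comes from (a sketch; the volume name gv0 comes from the output above, and the client log file name is assumed to be derived from the /mnt mount point):

# confirm both peers are connected and all bricks are online
gluster peer status
gluster volume status gv0

# check the FUSE client log for the /mnt mount (file name may differ)
tail -n 50 /var/log/glusterfs/mnt.log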

Re: [Gluster-users] 3.10.4 packages are missing

2017-08-07 Thread Kaleb S. KEITHLEY
On 08/07/2017 04:49 AM, Serkan Çoban wrote:
> Hi,
> 
> I cannot find gluster 3.10.4 packages in the CentOS repos. The 3.11 release is
> also nonexistent. Can anyone fix this please?
> ___

I see 3.10.4 RPMs in
https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.10/

and 3.11.2 RPMs in
https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.11/

buildlogs is somewhat akin to Fedora's Updates-Testing repos. Niels
likes to have testing feedback on packages before he tags them into the
main repo at http://mirror.centos.org/centos/7/storage/x86_64/gluster-3.10/
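
For anyone who wants to test from buildlogs before the packages reach the main mirror, a throwaway repo definition could look like the sketch below (the repo name and gpgcheck=0 are assumptions; the baseurl is the one quoted above):

# write a temporary yum repo pointing at the buildlogs location
cat > /etc/yum.repos.d/gluster-310-testing.repo <<'EOF'
[gluster-310-testing]
name=GlusterFS 3.10 testing builds (CentOS Storage SIG buildlogs)
baseurl=https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-3.10/
enabled=1
gpgcheck=0
EOF

yum install glusterfs-server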

Also we are apparently still waiting for the CentOS admins to create
http://mirror.centos.org/centos/7/storage/x86_64/gluster-3.11, and at
this rate we should ask them to create
http://mirror.centos.org/centos/7/storage/x86_64/gluster-3.12 as well.

IMO we could tag the latest builds into the main repo after two weeks
even if we receive no feedback, just like Fedora allows.

-- 

Kaleb
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Volume hacked

2017-08-07 Thread lemonnierk
> It really depends on the application whether locks are used. Most (Linux)
> applications will use advisory locks. This means that locking is only
> effective when all participating applications use and honour the locks.
> If one application uses (advisory) locks, and another application does
> not, well, then all bets are off.
> 
> It is also possible to delete files that are in active use. The contents
> will still be served by the filesystem, but there is no accessible
> filename anymore. If the VMs using those files are still running, there
> might be a way to create a new filename for the data. If the VMs have
> been stopped, and the file-descriptor has been closed, the data will be
> gone :-/
>

Oh, the data was gone long before I stopped the VM: every binary was
throwing I/O errors when accessed, and only whatever was already in RAM
(ssh, ...) when the disk got deleted was still working.

I'm a bit surprised they could be deleted, but I imagine qemu through
libgfapi doesn't really access the file as a whole, maybe just the part
it needs when it needs it. In any case the gluster logs clearly show
file descriptor errors from 08:47 UTC, which seems to match our first
monitoring alerts. I assume that's when the deletion happened.

Now I just need to figure out what they used to access the volume, I
hope it's just NFS since that's the only thing I can think of.


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Volume hacked

2017-08-07 Thread Niels de Vos
On Sun, Aug 06, 2017 at 08:54:33PM +0100, lemonni...@ulrar.net wrote:
> Thinking about it, is it even normal they managed to delete the VM disks?
> Shouldn't they have gotten "file in use" errors? Or does libgfapi not
> lock the files it accesses?

It really depends on the application whether locks are used. Most (Linux)
applications will use advisory locks. This means that locking is only
effective when all participating applications use and honour the locks.
If one application uses (advisory) locks, and another application does not,
well, then all bets are off.

It is also possible to delete files that are in active use. The contents
will still be served by the filesystem, but there is no accessible
filename anymore. If the VMs using those files are still running, there
might be a way to create a new filename for the data. If the VMs have
been stopped, and the file-descriptor has been closed, the data will be
gone :-/
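
To illustrate the advisory-lock point with stock tools (a sketch; the image path is hypothetical):

# terminal 1: hold an advisory lock on the disk image for ten minutes
flock /path/to/vm-disk.img -c 'sleep 600'

# terminal 2: an advisory lock does not stop a plain unlink;
# the rm succeeds even while the lock is held
rm /path/to/vm-disk.img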

Niels


> 
> 
> On Sun, Aug 06, 2017 at 03:57:06PM +0100, lemonni...@ulrar.net wrote:
> > Hi,
> > 
> > This morning one of our cluster was hacked, all the VM disks were
> > deleted and a file README.txt was left with inside just
> > "http://virtualisan.net/contactus.php :D"
> > 
> > I don't speak the language but with Google Translate it looks like it's
> > just a webdev company or something like that, a bit surprised ..
> > In any case, we'd really like to know how that happened.
> > 
> > I realised NFS is accessible by anyone (sigh), is there a way to check
> > if that is what they used ? I tried reading the nfs.log but it's not
> > really clear if someone used it or not. What do I need to look for in
> > there to see if someone mounted the volume ?
> > There is stuff in the log on one of the bricks (only one), 
> > and as we aren't using NFS for that volume, that in itself seems
> > suspicious.
> > 
> > Thanks
> 
> 
> 
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://lists.gluster.org/mailman/listinfo/gluster-users
> 



> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Volume hacked

2017-08-07 Thread Amar Tumballi
On Mon, Aug 7, 2017 at 2:17 PM,  wrote:

> On Mon, Aug 07, 2017 at 10:40:08AM +0200, Arman Khalatyan wrote:
> > Interesting problem...
> > Did you consider an insider job? (http://verelox.com's recent troubles
> > come to mind)
>
> I would be really, really surprised; we are only 5 or 6 with access and as
> far as I know no one has a problem with the company.
> The last person to leave did so last year, and we revoked everything (I
> hope). And I can't think of a reason they'd leave the website of a
> Hungarian company in there; we contacted them and they think it's one
> of their ex-employees trying to cause them problems.
> I think we were just unlucky, but I'd really love to confirm how they
> did it.
>
>
For any filesystem access through GlusterFS, a successful handshake at the
server-side is mandatory.

You should have logs of the clients connected to these server machines
in the brick logs (usually at /var/log/glusterfs/bricks/*.log); check them for
any external IPs.

Gluster doesn't provide any extra protection right now, other than what is
provided by the POSIX standard (i.e., user access control). So, if a user is 'root'
on his machine, and the no_root_squash option is in effect, then technically he can
delete all the files in the volume, if he can mount the volume. The major
'authentication' control provided is IP-based authentication.

At this time, if your volume didn't have more granular control via the
'auth.allow' option, then we can check the log and try to understand which
client caused this.
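
A sketch of what that check could look like (the "accepted client from" pattern is the handshake message the brick processes normally log; the volume name and subnet below are placeholders):

# look for client handshakes from unexpected hosts in the brick logs
grep -i "accepted client from" /var/log/glusterfs/bricks/*.log

# restrict future mounts to a trusted subnet (placeholder volume name and range)
gluster volume set myvolume auth.allow 10.0.0.*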

Regards,
Amar


>
> > On Mon, Aug 7, 2017 at 3:30 AM, W Kern  wrote:
> >
> > >
> > >
> > > On 8/6/2017 4:57 PM, lemonni...@ulrar.net wrote:
> > >
> > >
> > > Gluster already uses a VLAN; the problem is that there is no easy way
> > > that I know of to tell gluster not to listen on an interface, and I
> > > can't not have a public IP on the server. I really wish there was a
> > > simple "listen only on this IP/interface" option for this
> > >
> > >
> > > What about this?
> > >
> > > transport.socket.bind-address
> > >
> > > I know there were some BZs on it with earlier Gluster versions, so I
> > > assume it's still there now.
> > >
> > > -bill
> > >
> > >
> > >
> > >
> > > ___
> > > Gluster-users mailing list
> > > Gluster-users@gluster.org
> > > http://lists.gluster.org/mailman/listinfo/gluster-users
> > >
>
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://lists.gluster.org/mailman/listinfo/gluster-users
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>



-- 
Amar Tumballi (amarts)
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Quotas not working after adding arbiter brick to replica 2

2017-08-07 Thread Sanoj Unnikrishnan
It will be fixed in 3.11

On Fri, Aug 4, 2017 at 7:30 PM, mabi  wrote:

> Thank you very much Sanoj, I ran your script once and it worked. I now
> have quotas again...
>
> Question: do you know in which release this issue will be fixed?
>
>
>
>  Original Message 
> Subject: Re: [Gluster-users] Quotas not working after adding arbiter brick
> to replica 2
> Local Time: August 4, 2017 3:28 PM
> UTC Time: August 4, 2017 1:28 PM
> From: sunni...@redhat.com
> To: mabi 
> Gluster Users 
>
> Hi mabi,
> This is a likely issue where the last gfid entry in the quota.conf file is
> stale (because a directory that had a quota limit on it was deleted without
> the limit being removed)
> (https://review.gluster.org/#/c/16507/)
>
> To fix the issue, we need to remove the last entry (the last 17 bytes or 16
> bytes, depending on the quota version) in the file.
> Please use the workaround below until the next upgrade.
> You only need to change $vol to the name of the volume.
>
> ===
> vol=   # set this to the volume name
> qconf=/var/lib/glusterd/vols/$vol/quota.conf
> qconf_bk="$qconf".bk
> cp $qconf $qconf_bk
>
> grep "GlusterFS Quota conf | version: v1.2" $qconf
> if [ $? -eq 0 ];
> then
> entry_size=17;
> else
> entry_size=16;
> fi
>
> size=`ls -l $qconf | awk '{print $5}'`
> (( size_new = size - entry_size ))
> dd if=$qconf_bk of=$qconf bs=1 count=$size_new
> gluster v quota $vol list
> 
> In the unlikely case that there are multiple stale entries at the end of the
> file, you may have to run it multiple times
> to fix the issue (each run removes one stale entry from the end).
>
>
> On Thu, Aug 3, 2017 at 1:17 PM, mabi  wrote:
>
>> I tried to manually re-create my quotas, but not even that works now.
>> Running the "limit-usage" command as shown below returns success:
>>
>> $ sudo gluster volume quota myvolume limit-usage /userdirectory 50GB
>> volume quota : success
>>
>>
>>
>> but when I list the quotas using "list" nothing appears.
>>
>> What can I do to fix that issue with the quotas?
>>
>>  Original Message 
>> Subject: Re: [Gluster-users] Quotas not working after adding arbiter
>> brick to replica 2
>> Local Time: August 2, 2017 2:35 PM
>> UTC Time: August 2, 2017 12:35 PM
>> From: m...@protonmail.ch
>> To: Sanoj Unnikrishnan 
>> Gluster Users 
>>
>> Hi Sanoj,
>>
>> I copied over the quota.conf file from the affected volume (node 1) and
>> opened it up with a hex editor but can not recognize anything really except
>> for the first few header/version bytes. I have attached it within this mail
>> (compressed with bzip2) as requested.
>>
>> Should I recreate them manually? There were around 10 of them. Or is
>> there a hope of recovering these quotas?
>>
>> Regards,
>> M.
>>
>>
>>
>>  Original Message 
>> Subject: Re: [Gluster-users] Quotas not working after adding arbiter
>> brick to replica 2
>> Local Time: August 2, 2017 1:06 PM
>> UTC Time: August 2, 2017 11:06 AM
>> From: sunni...@redhat.com
>> To: mabi 
>> Gluster Users 
>>
>> Mabi,
>>
>> We have fixed a couple of issues in the quota list path.
>> Could you also please attach the quota.conf file
>> (/var/lib/glusterd/vols/patchy/quota.conf)
>> (Ideally, the first few bytes would be ASCII characters followed by 17
>> bytes per directory on which a quota limit is set)
>> Regards,
>> Sanoj
>>
>> On Tue, Aug 1, 2017 at 1:36 PM, mabi  wrote:
>>
>>> I also just noticed quite a few of the following warning messages in the
>>> quotad.log log file:
>>>
>>> [2017-08-01 07:59:27.834202] W [MSGID: 108027]
>>> [afr-common.c:2496:afr_discover_done] 0-myvolume-replicate-0: no read
>>> subvols for (null)
>>>
>>>
>>>
>>>
>>>  Original Message 
>>> Subject: [Gluster-users] Quotas not working after adding arbiter brick
>>> to replica 2
>>> Local Time: August 1, 2017 8:49 AM
>>> UTC Time: August 1, 2017 6:49 AM
>>> From: m...@protonmail.ch
>>> To: Gluster Users 
>>>
>>> Hello,
>>>
>>> As you might have read in my previous post on the mailing list I have
>>> added an arbiter node to my GlusterFS 3.8.11 replica 2 volume. After some
>>> healing issues and help of Ravi that could get fixed but now I just noticed
>>> that my quotas are all gone.
>>>
>>> When I run the following command:
>>>
>>> gluster volume quota myvolume list
>>>
>>> There is no output...
>>>
>>> In the /var/log/glusterfs/quotad.log I can see the following two lines
>>> when running the list command:
>>>
>>> [2017-08-01 06:46:04.451765] W [dict.c:581:dict_unref]
>>> (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.8.11/xlator/features/quotad.so(+0x1f3d)
>>> [0x7fe868e21f3d] -->/usr/lib/x86_64-linux-gnu/g
>>> lusterfs/3.8.11/xlator/features/quotad.so(+0x2d82) [0x7fe868e22d82]
>>> 

[Gluster-users] 3.10.4 packages are missing

2017-08-07 Thread Serkan Çoban
Hi,

I cannot find gluster 3.10.4 packages in the CentOS repos. The 3.11 release is
also nonexistent. Can anyone fix this please?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Volume hacked

2017-08-07 Thread lemonnierk
On Mon, Aug 07, 2017 at 10:40:08AM +0200, Arman Khalatyan wrote:
> Interesting problem...
> Did you consider an insider job? (http://verelox.com's recent troubles
> come to mind)

I would be really, really surprised; we are only 5 or 6 with access and as
far as I know no one has a problem with the company.
The last person to leave did so last year, and we revoked everything (I
hope). And I can't think of a reason they'd leave the website of a
Hungarian company in there; we contacted them and they think it's one
of their ex-employees trying to cause them problems.
I think we were just unlucky, but I'd really love to confirm how they
did it.

> 
> On Mon, Aug 7, 2017 at 3:30 AM, W Kern  wrote:
> 
> >
> >
> > On 8/6/2017 4:57 PM, lemonni...@ulrar.net wrote:
> >
> >
> > Gluster already uses a VLAN; the problem is that there is no easy way
> > that I know of to tell gluster not to listen on an interface, and I
> > can't not have a public IP on the server. I really wish there was a
> > simple "listen only on this IP/interface" option for this
> >
> >
> > What about this?
> >
> > transport.socket.bind-address
> >
> > I know there were some BZs on it with earlier Gluster versions, so I assume 
> > it's still there now.
> >
> > -bill
> >
> >
> >
> >
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://lists.gluster.org/mailman/listinfo/gluster-users
> >

> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Volume hacked

2017-08-07 Thread Arman Khalatyan
Interesting problem...
Did you consider an insider job? (http://verelox.com's recent troubles
come to mind)

On Mon, Aug 7, 2017 at 3:30 AM, W Kern  wrote:

>
>
> On 8/6/2017 4:57 PM, lemonni...@ulrar.net wrote:
>
>
> Gluster already uses a VLAN; the problem is that there is no easy way
> that I know of to tell gluster not to listen on an interface, and I
> can't not have a public IP on the server. I really wish there was a
> simple "listen only on this IP/interface" option for this
>
>
> What about this?
>
> transport.socket.bind-address
>
> I know there were some BZs on it with earlier Gluster versions, so I assume it's 
> still there now.
>
> -bill
>
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users