[Gluster-users] issues with replicating data to a new brick

2018-04-12 Thread Bernhard Dübi
Hello everybody, I have some kind of a situation here: I want to move some volumes to new hosts. The idea is to add the new bricks to the volume, sync, and then drop the old bricks. Starting point is: Volume Name: Server_Monthly_02 Type: Replicate Volume ID: 0ada8e12-15f7-42e9-9da3-2734b04e04e9
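
A minimal sketch of the sequence described above, assuming a plain replica 2 volume; the host names and brick paths are placeholders, not taken from the original post:

    # add the new bricks, temporarily raising the replica count
    gluster volume add-brick Server_Monthly_02 replica 4 \
        newhost01:/data/glusterfs/Server_Monthly_02/brick \
        newhost02:/data/glusterfs/Server_Monthly_02/brick
    # wait until self-heal has copied everything to the new bricks
    gluster volume heal Server_Monthly_02 info
    # then drop the old bricks, lowering the replica count again
    gluster volume remove-brick Server_Monthly_02 replica 2 \
        oldhost01:/data/glusterfs/Server_Monthly_02/brick \
        oldhost02:/data/glusterfs/Server_Monthly_02/brick force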

Re: [Gluster-users] Rebooting cluster nodes - GFS3.8

2017-12-05 Thread Bernhard Dübi
Hi, just wanted to write the same thing. There was once a post that suggested killing the gluster processes manually, but I guess rebooting the machine will do the same. The clients will stall for a while and then continue to access the volume from the remaining node. It is very important that you
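
For reference, a rough sketch of stopping the gluster processes by hand before a reboot, so that clients fail over immediately instead of waiting for the network timeout (service and process names as on a typical systemd distribution, so treat them as assumptions):

    # stopping the management daemon alone does NOT stop the bricks
    systemctl stop glusterd
    # also stop the brick processes and the remaining gluster daemons
    pkill glusterfsd
    pkill glusterfs
    # some packages ship a helper that does all of this, if present:
    # /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh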

[Gluster-users] move brick to new location

2017-11-28 Thread Bernhard Dübi
Hello everybody, we have a number of "replica 3 arbiter 1" or (2 + 1) volumes. Because we're running out of space on some volumes, I need to optimize the usage of the physical disks. That means I want to consolidate volumes with low usage onto the same physical disk. I can do it with "replace-brick
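
A hedged sketch of the replace-brick variant (host names, volume name and brick paths are placeholders); since Gluster 3.7 only the "commit force" form is supported, and self-heal then copies the data onto the new brick:

    gluster volume replace-brick MyVolume \
        oldhost:/data/glusterfs/MyVolume/brick \
        newhost:/data/glusterfs/MyVolume/brick \
        commit force
    # watch self-heal repopulate the new brick
    gluster volume heal MyVolume info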

Re: [Gluster-users] nfs-ganesha locking problems

2017-10-02 Thread Bernhard Dübi
Hi Soumya, what I can say so far: it is working on a standalone system but not on the clustered system. From reading the ganesha wiki I have the impression that it is possible to change the log level without restarting ganesha. I was playing with dbus-send but so far was unsuccessful. If you can
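
The dbus call meant here looks roughly like the sketch below; the object path, interface and component/level names are reconstructed from memory of the ganesha wiki and may differ between versions, so treat them as assumptions:

    # raise the log level of all components to FULL_DEBUG without restarting ganesha
    dbus-send --system --print-reply --dest=org.ganesha.nfsd \
        /org/ganesha/nfsd/admin org.freedesktop.DBus.Properties.Set \
        string:org.ganesha.nfsd.log.component string:COMPONENT_ALL \
        variant:string:FULL_DEBUG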

[Gluster-users] nfs-ganesha locking problems

2017-09-29 Thread Bernhard Dübi
Hi, I have a problem with nfs-ganesha serving gluster volumes. I can read and write files, but then one of the DBAs tried to dump an Oracle DB onto the NFS share and got the following errors: Export: Release 11.2.0.4.0 - Production on Wed Sep 27 23:27:48 2017 Copyright (c) 1982, 2011, Oracle
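
Oracle's export utilities need working file locking on the NFS share; a quick sanity check is whether the lock manager is registered on the ganesha host at all (relevant for NFSv3 mounts, since NFSv4 handles locks in-band; the host name is a placeholder):

    # list the RPC services registered on the ganesha host
    rpcinfo -p ganesha-server | egrep 'nlockmgr|status'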

Re: [Gluster-users] Bug 1374166 or similar

2017-07-18 Thread Bernhard Dübi
Hi Jiffin, thank you for the explanation Kind Regards Bernhard 2017-07-18 8:53 GMT+02:00 Jiffin Tony Thottan <jthot...@redhat.com>: > > > On 16/07/17 20:11, Bernhard Dübi wrote: >> >> Hi, >> >> both Gluster servers were rebooted and now the unlink director

Re: [Gluster-users] Bug 1374166 or similar

2017-07-16 Thread Bernhard Dübi
Hi, both Gluster servers were rebooted and now the unlink directory is clean. Best Regards Bernhard 2017-07-14 12:43 GMT+02:00 Bernhard Dübi <1linuxengin...@gmail.com>: > Hi, > > yes, I mounted the Gluster volume and deleted the files from the > volume not the brick >
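
The unlink directory mentioned here lives inside the brick's .glusterfs metadata tree; whether it has drained can be checked directly on each brick (the brick path below is a placeholder):

    # entries here are deleted files that some client still holds open
    ls -l /data/glusterfs/MyVolume/brick/.glusterfs/unlink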

Re: [Gluster-users] Bug 1374166 or similar

2017-07-14 Thread Bernhard Dübi
configuration Best Regards Bernhard 2017-07-14 10:43 GMT+02:00 Jiffin Tony Thottan <jthot...@redhat.com>: > > > On 14/07/17 13:06, Bernhard Dübi wrote: >> >> Hello everybody, >> >> I'm in a similar situation as described in >> https://bugzilla.redhat.co

[Gluster-users] Bug 1374166 or similar

2017-07-14 Thread Bernhard Dübi
Hello everybody, I'm in a similar situation as described in https://bugzilla.redhat.com/show_bug.cgi?id=1374166. I have a gluster volume exported through ganesha. We had some problems on the gluster server and the NFS mount on the client was hanging. I did a lazy umount of the NFS mount on the
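
For context, a lazy umount detaches the hung mount point immediately and finishes the cleanup once nothing references it anymore; a later remount would look roughly like this (mount point and export path are placeholders):

    umount -l /mnt/backup_nfs
    # once the server is healthy again
    mount -t nfs -o vers=4 ganesha-server:/MyVolume /mnt/backup_nfs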

Re: [Gluster-users] total outage - almost

2017-06-19 Thread Bernhard Dübi
:51 GMT+02:00 Bernhard Dübi <1linuxengin...@gmail.com>: > Hi, > > I checked the attributes of one of the files with I/O errors > > root@chastcvtprd04:~# getfattr -d -e hex -m - > /data/glusterfs/Server_Standard/1I-1-14/brick/Server_Standard/CV_MAGNETIC/V_1050932/CHUNK_1112

Re: [Gluster-users] total outage - almost

2017-06-19 Thread Bernhard Dübi
=0x011300ee3e3ac6a79b8efc42d0904ca431cb20d01890d300c041e905d9d78a562bf276 trusted.bit-rot.version=0x13005841b921000c222f trusted.gfid=0x1427a79086f14ed2902e3c18e133d02b the "dirty" is 0, that's good, isn't it? what's the "trusted.bit-rot.bad-file=0x3100" information? Best Regards B
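
As far as I know, trusted.bit-rot.bad-file is the flag the scrubber sets on objects whose on-disk checksum no longer matches (0x3100 is the hex encoding of the string "1"), so this file is flagged as corrupted even though dirty is 0. The scrubber's view can be checked with something like the following; the volume name is inferred from the brick path, so treat it as a placeholder:

    # per-node scrub statistics, including the list of corrupted objects
    gluster volume bitrot Server_Standard scrub status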

[Gluster-users] total outage - almost

2017-06-19 Thread Bernhard Dübi
Hi, we use a bunch of replicated gluster volumes as a backend for our backup. Yesterday I noticed that some synthetic backups failed because of I/O errors. Today I ran "find /gluster_vol -type f | xargs md5sum" and got loads of I/O errors. The brick log file shows the below errors [2017-06-19
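
A slightly more robust variant of that scan, which copes with spaces in file names and keeps the I/O errors in a log for matching against the brick logs later (paths are placeholders):

    find /gluster_vol -type f -print0 \
        | xargs -0 md5sum > /dev/null 2> /tmp/gluster_io_errors.log
    grep -c 'Input/output error' /tmp/gluster_io_errors.log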

[Gluster-users] ganesha.nfsd: `NTIRPC_1.4.3' not found

2017-05-20 Thread Bernhard Dübi
Hi, is this list also dealing with nfs-ganesha problems? I just ran a dist-upgrade on my Ubuntu 16.04 machine and now nfs-ganesha doesn't start anymore: May 20 10:00:15 chastcvtprd03 bash[5720]: /usr/bin/ganesha.nfsd: /lib/x86_64-linux-gnu/libntirpc.so.1.4: version `NTIRPC_1.4.3' not found
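
The message means ganesha.nfsd was built against a newer libntirpc than the one installed. Which symbol versions the installed library actually exports, and which package owns it, can be checked with:

    # symbol versions exported by the installed library
    objdump -T /lib/x86_64-linux-gnu/libntirpc.so.1.4 | grep NTIRPC
    # which package owns the library, and which ntirpc packages are installed
    dpkg -S /lib/x86_64-linux-gnu/libntirpc.so.1.4
    dpkg -l | grep -i ntirpc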

Re: [Gluster-users] unlimited memory usage

2017-02-13 Thread Bernhard Dübi
Hi, one more question: if I can convince my boss to buy another machine to separate the load of Gluster and Backup onto different machines, will this solve my problem, or will the Gluster client also eat up all the memory it can get? Best Regards Bernhard 2017-02-13 21:53 GMT+01:00 Bernhard Dübi

[Gluster-users] unlimited memory usage

2017-02-13 Thread Bernhard Dübi
Hi, I'm running Gluster 3.8.8 on Ubuntu 16.04 on 2 HP Apollo 4510 with 60 x 8TB disks each. The machines are used as Backup Media Agents for CommVault Simpana V11. I have been running this combination since Gluster 3.7. Lately I noticed that Gluster is using almost all available memory, starving the other
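
Two built-in ways to see where the memory is going on the server side; the volume name is a placeholder:

    # per-process memory statistics as reported by gluster itself
    gluster volume status Server_Monthly_02 mem
    # write a full statedump (to /var/run/gluster by default) for leak analysis
    gluster volume statedump Server_Monthly_02 all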