Hi,
My volume home is configured in replicate mode (version 3.12.4) with the bricks
server1:/data/gluster/brick1
server2:/data/gluster/brick1
server2:/data/gluster/brick1 was corrupted, so I killed the gluster daemon for that
brick on server2, unmounted it, reformatted it, remounted it, then did a
gluster v start volname force.
> To start self heal just run gluster v heal volname full.
>
> On Thu, Feb 1, 2018 at 6:39 PM, Alessandro Ipe wrote:
> > Hi,
> >
> >
> > My volume home is configured in replicate mode (version 3.12.4) with the
> >
volume rebalance: home: success
Thanks,
A.
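Pulling the steps from this exchange together, the whole replacement of a dead
brick would look roughly like the sketch below (volume name "home" and the
brick path are the ones from this thread; the device name and PID are
placeholders):

  # on server2: find the PID of the failed brick's glusterfsd and kill it
  gluster v status home
  kill <pid-of-server2-brick>
  # rebuild the filesystem on the failed disk
  umount /data/gluster/brick1
  mkfs.xfs -f /dev/<device>
  mount /data/gluster/brick1
  # restart the brick process and trigger a full self heal
  # (depending on the version, the fresh brick may also need its
  # trusted.glusterfs.volume-id xattr restored before "start force" succeeds)
  gluster v start home force
  gluster v heal home full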
On Thursday, 1 February 2018 18:57:17 CET Serkan Çoban wrote:
> What is server4? You just mentioned server1 and server2 previously.
> Can you post the output of gluster v status volname
>
> On Thu, Feb 1, 2018 at 8:13 PM, Alessandro Ipe wrote:
Hi,
Our gluster system is currently made of 4 replicated pairs of servers, holding
either 2 or 4 bricks of 4 HDs in RAID 10.
We have a bunch of clients which are mounting the system through the native
fuse glusterfs client; more specifically, they are all using the same server1
to get the configuration
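If the concern is that every client fetches its volume configuration (volfile)
from server1, the native client can be pointed at fallback servers at mount
time; a sketch (hostnames, volume name and mount point are assumptions):

  mount -t glusterfs -o backup-volfile-servers=server3:server5 \
        server1:/home /mnt/home

server1 is then only needed to fetch the volfile at mount time; after that,
clients talk to all brick servers directly.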
Hi,
Apparently, version 3.6.9 is suffering from a SERIOUS memory leak as
illustrated in the following logs:
2016-04-26T11:54:27.971564+00:00 tsunami1 kernel: [698635.210069] glusterfsd
invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
2016-04-26T11:54:27.974133+00:00 tsunami1 kernel:
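For chasing a glusterfsd leak like this, capturing the brick processes' memory
accounting is usually the first step; a sketch (volume name "home" assumed):

  # per-process memory/pool summary for all bricks
  gluster volume status home mem
  # detailed allocator statedump, written under /var/run/gluster/ on the servers
  gluster volume statedump home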
OK, great... Any plan to backport those important fixes to the 3.6 branch?
Because I am not ready to upgrade to the 3.7 branch for a production system. My
fear is that 3.7 will bring other new issues, and all I want is a stable and
reliable branch without extra new functionalities (and new
Hi,
We have set up a "md1" volume using gluster 3.4.2 over 4 servers configured as
distributed and replicated. Then, we upgraded smoohtly to 3.5.3, since it was
mentionned that the command "volume replace-brick" is broken on 3.4.x. We added
two more peers (after having read that the quota feat
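For reference, growing such a volume by one replica pair goes roughly like this
(a sketch; the tsunami5/tsunami6 names and brick path follow the pattern used
elsewhere in this thread):

  gluster peer probe tsunami5
  gluster peer probe tsunami6
  # add the new pair (a multiple of the replica count, so no replica change)
  gluster volume add-brick md1 tsunami5:/data/glusterfs/md1/brick1 \
                               tsunami6:/data/glusterfs/md1/brick1
  # then spread existing data onto the new bricks
  gluster volume rebalance md1 start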
a workaround for it.
A.
On Wednesday 07 January 2015 15:41:07 Atin Mukherjee wrote:
> On 01/06/2015 06:05 PM, Alessandro Ipe wrote:
> > Hi,
> >
> >
> > We have set up a "md1" volume using gluster 3.4.2 over 4 servers
> > configured as distributed and replicated.
Atin Mukherjee wrote:
> On 01/07/2015 05:09 PM, Alessandro Ipe wrote:
> > Hi,
> >
> >
> > The corresponding logs in
> > /var/log/glusterfs/etc-glusterfs-glusterd.vol.log (OS is openSuSE 12.3)
> > [2015-01-06 12:32:14.596601] I
> > [glusterd-replace-brick.c:98:__
Hi,
We have a "md1" volume under gluster 3.5.3 over 6 servers configured as
distributed and replicated. When trying, on a client through a fuse mount
(which turns out to also be a brick server), to delete (as root) recursively a
directory with "rm -rf /home/.md1/linux/suse/12.1", I get the error
:/data/glusterfs/md1/brick1/
Number of entries: 0
Brick tsunami6.oma.be:/data/glusterfs/md1/brick1/
Number of entries: 0
Should I run "gluster volume heal md1 full"?
Thanks,
A.
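Before resorting to a full heal, it is worth seeing what the self-heal daemon
already reports; the usual sequence of checks (a sketch):

  gluster volume heal md1 info              # entries still pending heal
  gluster volume heal md1 info split-brain  # entries needing manual attention
  gluster volume heal md1 statistics        # per-brick heal counters
  gluster volume heal md1 full              # full crawl, catches missed entries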
On Monday 23 February 2015 18:12:43 Ravishankar N wrote:
On 02/23/2015 05:42 PM, Alessandro Ipe wrote:
/i586
Thanks,
A.
On Monday 23 February 2015 20:06:17 Ravishankar N wrote:
On 02/23/2015 07:04 PM, Alessandro Ipe wrote:
Hi Ravi,
gluster volume status md1 returns
Status of volume: md1
Gluster process Port Online Pid
time back. FWIW, can you check if "/linux/suse/12.1/KDE4.7.4/i586" on
all 6 bricks is indeed empty?
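A quick way to check that directory on all six bricks at once (a sketch;
assumes the tsunami1-6 hostnames and passwordless ssh):

  for h in tsunami1 tsunami2 tsunami3 tsunami4 tsunami5 tsunami6; do
      echo "== $h =="
      ssh "$h" 'ls -lsa /data/glusterfs/md1/brick1/linux/suse/12.1/KDE4.7.4/i586'
  done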
On 02/23/2015 08:15 PM, Alessandro Ipe wrote:
Hi,
Gluster version is 3.5.3-1.
/var/log/gluster.log (client log)
" files?
Thanks,
A.
On Monday 23 February 2015 21:40:41 Ravishankar N wrote:
On 02/23/2015 09:19 PM, Alessandro Ipe wrote:
On 4 of the 6 bricks, it is empty. However, on tsunami 3-4, ls -lsa
gives
total 16
d- 2 root
Hi,
I launched a rebalance a couple of days ago on my gluster distribute-replicate
volume (see below) through its CLI, while allowing my users to continue using
the volume.
Yesterday, they managed to completely fill the volume. It now results in
unavailable files on the client (using fuse) w
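One setting that can soften this failure mode (users filling the bricks to
100% mid-rebalance) is DHT's minimum-free-disk threshold, which steers new
files away from nearly full bricks; a sketch, with the volume name and the
10% value as assumed examples:

  gluster volume set md1 cluster.min-free-disk 10%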
Hi,
When trying to access a file on a gluster client (through fuse), I get an
"Input/output error" message.
Getting the attributes of the file gives me, for the first brick:
# file: data/glusterfs/md1/brick1/kvm/hail/hail_home.qcow2
trusted.afr.md1-client-2=0s
trusted.afr.md1-cli
Well, it is even worse. Now, doing a "ls -R" on the volume results in a lot of
[2015-03-11 11:18:31.957505] E
[afr-self-heal-common.c:233:afr_sh_print_split_brain_log] 0-md1-replicate-2:
Unable to self-heal contents of '/library' (possible split-brain). Please
delete the file from all but the preferred subvolume.
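On these 3.5.x releases there is no CLI for split-brain resolution yet, so the
fix the log asks for is manual: pick the brick holding the good copy, remove
the bad copy and its gfid hard link from the other brick, then let self-heal
copy the good replica back. A sketch (everything in <> is a placeholder):

  # on the brick holding the BAD copy only
  rm /data/glusterfs/md1/brick1/<path-to-file>
  # the gfid hard link lives under .glusterfs, named after the file's gfid,
  # with the first two hex byte pairs as directory levels
  rm /data/glusterfs/md1/brick1/.glusterfs/<g1>/<g2>/<full-gfid>
  # trigger the heal
  gluster volume heal md1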
t clear in the doc.
-Krutika
--------
*From: *"Alessandro Ipe"
*To: *gluster-users@gluster.org
*Sent: *Wednesday, March 11, 2015 4:54:09 PM
*Subject: *Re: [Gluster-users] Input/output error when trying to access a file
on client
Well, it is even worse. Now, doing a "ls -R" on the volume results in a lot of
between 3-4 and 5-6 (replicate pairs).
A.
On Thursday 12 March 2015 11:33:00 Alessandro Ipe wrote:
Hi,
"gluster volume heal md1 info split-brain" returns approximatively 2000 files
(already
divided by 2
due to replicate volume). So manually repairing each split-brain is
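With roughly 2000 entries, scripting the triage beats fixing files one at a
time; collecting the reported paths first would look something like this (a
sketch, assuming the path entries are printed one per line):

  gluster volume heal md1 info split-brain | grep '^/' | sort -u \
      > /tmp/splitbrain.list
  wc -l /tmp/splitbrain.list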
wrote:
Hi,
Could you provide the xattrs in hex format?
You can execute `getfattr -d -m . -e hex <file>`
-Krutika
*From: *"Alessandro Ipe"
*To: *"Krutika Dhananjay"
*Cc: *gluster-users@gluster.org
*Sent: *Thursday, March 12, 2015 5:15:08 PM
*Subject: *Re
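Applied to the file from earlier in this thread, that would be, for example
(run on each replica's brick server):

  getfattr -d -m . -e hex /data/glusterfs/md1/brick1/kvm/hail/hail_home.qcow2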
On March 11, 2015 4:24:09 AM PDT, Alessandro Ipe
wrote:
Well, it is even worse. Now, doing a "ls -R" on the volume results in a lot of
[2015-03-11 11:18:31.957505] E [afr-self-heal-
common.c:233:afr_sh_print_split_brain_log] 0-md1-replicate-2: Unable to
self-heal
contents of '/library' (possible split-brain).
reluctant to perform something to heal by myself, because I have the feeling
that it could do more harm than good.
It's been more than 2 days now that my colleagues cannot access the data, and I
cannot make them wait much longer...
A.
On Thursday 12 March 2015 12:59:00 Alessandro Ipe wrote:
attributes, "getfattr -m . -d"
On March 10, 2015 7:30:33 AM PDT, Alessandro Ipe
wrote:
Hi,
I launched a rebalance a couple of days ago on my gluster distribute-replicate
volume (see below) through its CLI, while allowing my users to continue using
the volume.
Yesterday, they managed to completely fill the volume.
Hi,
Apparently, this occurred after a failed rebalance due to exhaustion of
available disk space on the bricks.
On the client, an ls on the directory gives
ls: cannot access .inputrc: No such file or directory
and displays
?? ? ?? ?? .inputrc
Getting the attributes
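After a rebalance dies from lack of space, such '?' entries are often left-over
DHT artifacts on one brick: a zero-byte link file with mode ---------T carrying
a linkto xattr, or a copy missing its gfid. Checking the brick copies directly
is a reasonable first step (a sketch; the brick path and file location are
assumptions):

  ls -la /data/glusterfs/md1/brick1/<dir>/.inputrc
  getfattr -n trusted.glusterfs.dht.linkto -e text \
      /data/glusterfs/md1/brick1/<dir>/.inputrc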
Hi,
After launching a "rebalance" on an idle gluster system one week ago, its
status told me it has scanned more than 23 million files on each of my 6
bricks. However, without knowing at least the total number of files to be
scanned, this status is USELESS from an end-user perspective, because it does
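The status in question only reports per-node counters (roughly: files
rebalanced, size, files scanned, failures, state), with no total to compare
them against; e.g. (volume name assumed):

  gluster volume rebalance md1 status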
Hi Olav,
Thanks for the info. I read the whole thread that you sent me... and I am more
scared than ever... The fact that the developers do not have a clue about what
is causing this issue is just frightening.
Concerning my issue, apparently after two days (a full heal is ongoing on the
volume
> testing gluster under high load on the brick servers (real world conditions)
> would certainly give insight to the developers on what is failing, and what
> therefore needs to be fixed to mitigate this and improve gluster reliability.
>
>
>
> Forgive
> 3.5.3 and 3.6.3 but I will confirm.
>
> Regards,
> Nithya
>
> - Original Message -
> From: "Alessandro Ipe"
> To: "Nithya Balachandran"
> Cc: gluster-users@gluster.org
> Sent: Wednesday, 25 March, 2015 5:42:02 PM
> Subject: Re: [Gluster-users] Is
Hi,
We have set up a "home" volume using gluster 3.4.2 over 4 servers configured as
distributed and replicated. On each server, 4 ext4 bricks are mounted with the
following options:
defaults,noatime,nodiratime
This "home" volume is mounted using FUSE on a client server with the following
options
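As an fstab sketch, one of those brick mounts would look like this (device and
mount point are assumptions):

  /dev/sdb1  /data/glusterfs/home/brick1  ext4  defaults,noatime,nodiratime  0 2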