Hello all,
we have a problem on a geo-replicated volume after an upgrade from
glusterfs 3.3.2 to 3.4.6 on Ubuntu 12.04.5 LTS.
For example, an 'ls -l' on the mounted geo-replicated volume does not show
the entire content, while the same command on the underlying bricks shows
the entire content.
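As a hedged aside, that symptom can be narrowed down mechanically by comparing the set of paths visible through the mount with the union of the brick listings. The listings and paths below are made-up examples, not data from this volume:

```python
# Sketch: find entries visible on a brick but hidden on the client mount.
# The listings are hypothetical 'ls -l'-style output; only the final
# path column is compared.
def paths(listing):
    return {line.split()[-1] for line in listing.strip().splitlines()}

mount_ls = """\
-rw-rw-rw- 2 root root 619294 Oct 18 2014 /gluster-export/thumbs/a.png
"""
brick_ls = """\
-rw-rw-rw- 2 root root 619294 Oct 18 2014 /gluster-export/thumbs/a.png
-rw-rw-rw- 2 root root 123456 Oct 18 2014 /gluster-export/thumbs/b.png
"""
missing_on_mount = sorted(paths(brick_ls) - paths(mount_ls))
print(missing_on_mount)
```

Running the same comparison against real `ls -lR` dumps from the mount and from each brick points directly at the entries the client cannot see.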
the
ab5d4bd671.xtime=0x54bff5c40008dd7f
-
...
putz@sdn-de-gate-01:~/central$
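The xtime value above packs a timestamp. As a hedged aside (assuming the usual marker encoding of two big-endian 32-bit integers, seconds and microseconds), it can be decoded like this:

```python
import struct
from datetime import datetime, timezone

def decode_xtime(hexval):
    """Decode an xtime marker xattr value: big-endian (seconds, microseconds)."""
    raw = bytes.fromhex(hexval[2:] if hexval.startswith("0x") else hexval)
    sec, usec = struct.unpack(">II", raw)
    return sec, usec

# Value taken from the getfattr output above.
sec, usec = decode_xtime("0x54bff5c40008dd7f")
print(sec, usec, datetime.fromtimestamp(sec, tz=timezone.utc).isoformat())
```

Comparing decoded xtimes of the same directory on master and slave shows how far the marker crawl has progressed.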
--
Dietmar Putz
3Q Medien GmbH
Wetzlarer Str. 86
D-14482 Potsdam
Telefax: +49 (0)331 / 2797 866 - 1
Telefon: +49 (0)331 / 2792 866 - 8
Mobile: +49 171 / 90 160 39
Mail: p...@3qmedien.net
__
e56f774df5
On 11/11/2015 3:20 AM, Dietmar Putz wrote:
Hi all,
I need some help with a geo-replication issue...
Recently I upgraded two 6-node distributed-replicated Gluster clusters from
Ubuntu 12.04.5 LTS to 14.04.3 LTS and glusterfs 3.4.7 to 3.5.6, respectively.
Since then the geo-replication does not start syncing b
ien-02 gluster
--xml --remote-host=localhost volume info aut-wien-01" returned with
255, saying:
[2015-11-09 12:45:40.61755] E [resource(monitor):207:logerr] Popen: ssh>
ssh: connect to host gluster-wien-02 port 2503: Connection timed out
[2015-11-09 12:45:40.62242] I [syncdutils(m
fuse: FUSE inited with protocol versions: glusterfs 7.22
kernel 7.22
[ 16:13:28 ] - root@gluster-ger-ber-09
/var/log/glusterfs/geo-replication/ger-ber-01 $
On 12.11.2015 at 13:23, Dietmar Putz wrote:
Hello Aravinda,
thank you for your reply...
answers inline...
best regards
dietmar
On 12.11.201
014
/gluster-export/thumbs/2014/2485/272648/rfvg2cmFNJ8Xt9HG.png
ls-lisa-gluster.gluster-wien-05.out:82278502871 612 -rw-rw-rw- 2 root
root 619294 Oct 18 2014
/gluster-export/thumbs/2014/2485/272648/rfvg2cmFNJ8Xt9HG.png
tron@dp-server:~/gluster-9$
Hello all,
on 1st December I upgraded two 6-node clusters from glusterfs 3.5.6 to 3.6.7.
All of them are identical in hardware, OS and patch level, currently running Ubuntu
14.04 LTS via a do-release-upgrade from 12.04 LTS (this was done before the
gfs upgrade to 3.5.6, not directly before upgrading to
glusterfs and .trashcan directory)
regards
Aravinda
On 01/02/2016 09:56 PM, Dietmar Putz wrote:
Hello all,
one more time I need some help with a geo-replication problem.
Recently I started a new geo-replication. The master volume contains
about 45 TB of data and the slave volume was newly created before the
geo-replication setup was done.
Master and slave are 6-node distributed-replicated volumes
at 08:08, Saravanakumar Arumugam wrote:
Hi,
Replies inline..
Thanks,
Saravana
On 12/18/2015 10:02 PM, Dietmar Putz wrote:
Hello again...
after having some big trouble with an XFS issue in kernels 3.13.0-x
and 3.19.0-39, which was 'solved' by downgrading to 3.8.4
(http://comm
er-wien-02 the trusted.gfid was
missing on four nodes, but at least on the remaining two nodes the gfid
for 1050 was the same as on the master volume.
I'll try it again on wien-02...
best regards
dietmar
On 22.12.2015 at 11:47, Dietmar Putz wrote:
Hi Saravana,
thanks for your reply...
all gluster-nod
lpful...
but do I have to execute the setfattr commands shown above on the master, or
do they just speed up synchronization?
Usually sync should start automatically, or could there be a problem
because the crawl status is still in 'hybrid crawl'...?
thanks in advance...
best regards
dietmar
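As a hedged note (volume and host names below are placeholders), the crawl status the question refers to can be read directly from the geo-replication status output:

```shell
# 'status detail' reports the crawl status (e.g. Hybrid Crawl, Changelog
# Crawl) together with per-brick sync counters.
gluster volume geo-replication <MASTER_VOL> <SLAVE_HOST>::<SLAVE_VOL> status detail
```

If the session stays in Hybrid Crawl indefinitely, that is usually the thing to investigate before resorting to manual setfattr runs.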
On 04.
Hi all,
once again I need some help to get our geo-replication running again...
Master and slave are 6-node distributed-replicated volumes running
Ubuntu 14.04 and glusterfs 3.7.6 from the Ubuntu PPA.
The master volume already contains about 45 TB of data; the slave
volume was created from
to mention that I always followed the 'scheduling a downtime'
part as described in:
http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.5
hope that helps...
best regards
dietmar
On 11.02.2016 23:05, Dave Warren wrote:
On 2016-02-11 04:27, Dietmar Putz wrote:
and I strongly believe
pretty sure that you
will be faced with the problem that clients cannot mount the volume.
( solved by glusterd --xlator-option *.upgrade=on -N )
https://bugzilla.redhat.com/show_bug.cgi?id=1191176
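For reference, that workaround boils down to a one-shot volfile regeneration per server followed by a normal daemon start; a sketch for the Ubuntu releases discussed here (the service name glusterd is an assumption):

```shell
# One-shot run: rewrite the volfiles for the new version, then exit
# (-N = stay in foreground, no daemonize). Quote the glob so the shell
# does not expand it.
glusterd --xlator-option '*.upgrade=on' -N
# Bring the daemon back up normally.
service glusterd restart
```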
br
dietmar
On 12.02.2016 at 14:26, Dietmar Putz wrote:
Hi Dave,
based on my experience I would expect
Hi Dave,
first of all, I'm not a developer, just a user like you, and recently I did
a Gluster update (6 bricks in a distributed-replicated configuration + the same as
slave for the geo-replication)
on Ubuntu from 12.04 LTS / GFS 3.4.7 to 14.04 LTS, then 3.5.x, 3.6.7, 3.7.6.
The biggest problem I got is/was
s Hybrid Crawl is not an entirely bad thing. It could just mean
that there are a lot of entries being processed. However, if things don't
return to the normal state after trying out the alternative suggestion, we
could take a look at the strace output and get some clues.
--
Milind
-
on changelog replay --
http://review.gluster.org/13189
I'll post info about the fix propagation plan for the 3.6.x series later.
--
Milind
On Wed, Jan 20, 2016 at 11:23 PM, Dietmar Putz <p...@3qmedien.net
<mailto:p...@3qmedien.net>> wrote:
Hi Milind,
thank you for your r
-users
___
Gluster-users mailing list
Gluster-users@glust
e(/brick1/mvol1):238:logerr]
Popen: ssh> ssh: connect to host gl-slave-01-int port 22: Connection refused
Somehow it looks like port 22 is hardcoded...
Does anybody know how to successfully change the ssh port for a
geo-replication session...?
any hint would be appreciated...
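A hedged sketch of the usual way around this in the 3.x series (volume and host names are placeholders; the port 2503 is taken from the log excerpt earlier in the thread, and the key path is the default geo-replication secret): override the session's ssh command so it carries the non-standard port. Newer releases also added a dedicated ssh-port config option.

```shell
gluster volume geo-replication <MASTER_VOL> <SLAVE_HOST>::<SLAVE_VOL> \
    config ssh_command "ssh -p 2503 -oPasswordAuthentication=no \
-oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem"
```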
best regards
di
Hello Anoop,
thank you for your reply
answers inline...
best regards
Dietmar
On 29.06.2017 10:48, Anoop C S wrote:
On Wed, 2017-06-28 at 14:42 +0200, Dietmar Putz wrote:
Hello,
recently we twice had a partial Gluster outage followed by a total
outage of all four nodes. Looking
rash/_usr_sbin_glusterfsd.0.crash
-----
Dear All,
we are running a dist. repl. volume on 4 nodes, including geo-replication
to another location.
The geo-replication was running fine for months.
Since 18th Jan. the geo-replication has been faulty. The geo-rep log on the
master shows the following error in a loop, while the logs on the slave
.
-Kotresh HR
On Wed, Jan 24, 2018 at 10:29 PM, Dietmar Putz <dietmar.p...@3qsdn.com
<mailto:dietmar.p...@3qsdn.com>> wrote:
Hi all,
I have made some tests on the latest Ubuntu 16.04.3 server image.
Upgrades were disabled...
The configuration was always the same...a
:47 +++ exited with 3 +++
On 19.01.2018 at 17:27, Joe Julian wrote:
ubuntu 16.04
--
Dietmar Putz
3Q GmbH
Kurfürstendamm 102
D-10711 Berlin
Mobile: +49 171 / 90 160 39
Mail: dietmar.p...@3qsdn.com
at 14:06, Dietmar Putz wrote:
Hi Kotresh,
thanks for your response...
I have made further tests based on Ubuntu 16.04.3 (latest upgrades)
and gfs 3.12.5 with the following rsync versions:
1. ii rsync 3.1.1-3ubuntu1
2. ii rsync 3.1.1
limitation of the trashcan is obsolete. I'm really
interested in using the trashcan feature, but I'm concerned it will
interrupt the geo-replication entirely.
Has anybody else been faced with this situation... any hints,
workarounds...?
best regards
Dietmar Putz
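For context, the trashcan feature in question is a per-volume toggle; a sketch using the volume name mvol1 from the session below (the size limit is illustrative):

```shell
# Enable the trashcan; deleted/truncated files land under .trashcan/.
gluster volume set mvol1 features.trash on
gluster volume set mvol1 features.trash-max-filesize 500MB
# Disable it again if it turns out to interfere with geo-replication.
gluster volume set mvol1 features.trash off
```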
root@gl-node1:~/tmp# gluster
-
Host : gl-node3
brick1/mvol1/.trashcan/test 0x4f59c0686c7740f2b556aa761834caf1
-
Host : gl-node4
brick1/mvol1/.trashcan/test 0x4f59c0686c7740f2b556aa761834caf1
-----
tron@dp-server:~/central$
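The matching gfids above can be checked mechanically; a self-contained sketch (host names and the gfid value are copied from the output above):

```python
# A path's trusted.gfid must be identical on every brick holding a copy;
# a mismatch is a classic geo-replication sync blocker.
# Values as reported by: getfattr -n trusted.gfid -e hex <brick-path>
gfids = {
    "gl-node3": "0x4f59c0686c7740f2b556aa761834caf1",
    "gl-node4": "0x4f59c0686c7740f2b556aa761834caf1",
}
consistent = len(set(gfids.values())) == 1
print(consistent)
```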
Hi Christos,
a few months ago I had a similar problem, but on Ubuntu 16.04. At that time
Kotresh gave me a hint:
https://www.spinics.net/lists/gluster-users/msg33694.html
gluster volume geo-replication <MASTER_VOL> <SLAVE_HOST>::<SLAVE_VOL>
config access_mount true
This hint solved my problem on Ubuntu 16.04. Hope that
Hi Shwetha,
thank you for your reply...
I ran a few tests in debug mode and found no real indication of the
problem. After each start of the geo-replication some files are
transferred at the beginning, and then there are no further transfers.
A few minutes after the start, the amount of changelog files in
Hi,
I'm having a problem with geo-replication. A short summary...
About two months ago I added two further nodes to a distributed
replicated volume. For that purpose I stopped the geo-replication,
added two nodes on mvol and svol, and started a rebalance process on both
sides. Once
Regards,
Felix
On 03/03/2021 17:28, Dietmar Putz wrote:
Hi Andreas,
recently I was faced with the same fault. I'm pretty sure you
speak German, which is why a translation should not be necessary.
I found the reason by tracing a certain process, which pointed to the
gsyncd.log, and looking backward from the error until I found some