Hi All,
I have a three-node, replica 3 cluster.
A network split happened which marked one of the three nodes as offline on the
other two nodes, and that very node set itself as RO.
After the network split was fixed, the whole cluster became healthy again, and
all three peers show status Connected on all three nodes.
I’ve already done that in the previous e-mail.
— Bishoy
> On Jul 28, 2016, at 3:20 PM, Gmail <b.s.mikh...@gmail.com> wrote:
>
>
>> On Jul 27, 2016, at 4:54 PM, Pranith Kumar Karampuri <pkara...@redhat.com
>> <mailto:pkara...@redhat.com>> wrote:
>>
[Attachment: glusterdump.glusterd.dump.1469742056 (binary data)]
[Attachment: glusterdump.glusterfs.dump.1469742040 (binary data)]
On Jul 27, 2016, at 4:54 PM, Pranith Kumar Karampuri <pkara...@redhat.com> wrote:
On Wed, Jul 27, 2016 at 11:26 PM, Gmail <b.s.mikh...@gmail.com> wrote:
Hello All,
I keep getting the following errors, accompanied by the bricks in the same
replica group going offline, when trying to sync files between two volumes
located in two different geographic locations using Csync2.
[2016-07-27 17:07:06.701575] E [MSGID: 113015] [posix.c:1011:posix_opendir]
I’ve noticed Gluster 3.7.11 keeps dumping warnings in the brick logs regarding
quota.
Once I set up a quota, I see the following error being dumped endlessly:
# tail -f /var/log/glusterfs/bricks/
[2016-06-29 21:54:42.281378] W [MSGID: 120020] [quota.c:2204:quota_unlink_cbk]
I’ve noticed a weird problem after I upgraded to Gluster 3.7.11:
Gluster mounts itself via FUSE.
# df -h
localhost:/gvol001   11T  2.2T  8.4T  21%  /gvol001
localhost:gvol001    11T  2.2T  8.4T  21%  /var/run/gluster/gvol001
Why is Gluster doing this?!
—
> On Jun 1, 2016, at 1:41 PM, Gandalf Corvotempesta
> <gandalf.corvotempe...@gmail.com> wrote:
>
> On 01 Jun 2016 at 22:34, "Gmail" <b.s.mikh...@gmail.com
> <mailto:b.s.mikh...@gmail.com>> wrote:
> On Jun 1, 2016, at 1:25 PM, Gandalf Corvotempesta
> <gandalf.corvotempe...@gmail.com> wrote:
>
> On 01 Jun 2016 at 22:06, "Gmail" <b.s.mikh...@gmail.com
> <mailto:b.s.mikh...@gmail.com>> wrote:
> > stat() on NFS is just a single stat() f
Find my answer inline.
> On Jun 1, 2016, at 12:30 PM, Gandalf Corvotempesta
> wrote:
>
> On 28/05/2016 at 11:46, Gandalf Corvotempesta wrote:
>>
>> if I remember properly, each stat() on a file needs to be sent to all hosts
>> in the replica to check if they are in
I’ve tried more than one volume on the same zpool, but with a separate ZFS
dataset for every volume. I didn’t find any performance issues compared to XFS
on LVM.
PS: ZFS by default stores the extended attributes in a hidden directory instead
of extending the file inode size like XFS does!
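For what it’s worth, ZFS on Linux can be told to store xattrs as system
attributes in the dnode instead of the hidden directory. A sketch, assuming the
pool/dataset name from this thread (yours will differ):

```shell
# store extended attributes inline as system attributes (ZoL)
zfs set xattr=sa zpool/brick01

# confirm the property took effect
zfs get xattr zpool/brick01
```

This only affects newly written xattrs; existing files keep the old layout
until rewritten.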
er/.ssh/authorized_keys Please delete the extra lines that do not
> start with "command=". Then stop and start the geo-replication.
> regards
> Aravinda
> On 03/31/2016 04:00 AM, Gmail wrote:
>> I’ve rebuilt the cluster again, making a fresh installation. And now the
[2016-03-30 22:09:42.663024] I [monitor(monitor):274:monitor] Monitor:
worker(/gpool/brick03/geotest) died before establishing connection
—Bishoy
> On Mar 30, 2016, at 10:50 AM, Gmail <b.s.mikh...@gmail.com> wrote:
>
> I’ve tried changing the permissions to 777 on /var/log/
Monitor:
worker(/mnt/brick10/xfsvol2) died before establishing connection
—Bishoy
> On Mar 29, 2016, at 1:05 AM, Aravinda <avish...@redhat.com> wrote:
>
> The geo-replication command should be run as the privileged user itself.
>
> gluster volume geo-replication @ start
>
> and th
>
> On Tue, Mar 29, 2016 at 10:21 AM, Gmail <b.s.mikh...@gmail.com> wrote:
>> I’ve been trying to set up geo-replication using Gluster 3.7.3 on OEL 6.5.
>> It keeps giving me a faulty session.
>> I’ve tried using the root user instead, and it works fine!
>>
>> I
I’ve been trying to set up geo-replication using Gluster 3.7.3 on OEL 6.5.
It keeps giving me a faulty session.
I’ve tried using the root user instead, and it works fine!
I’ve followed the documentation to the letter, but no luck getting the
unprivileged user working.
I’ve tried running
I’ve tried symlinks and it’s still not working.
ln -s /bin/dbus-* /usr/bin/
—Bishoy
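For non-root geo-replication in this release, the slave-side glusterd.vol
mountbroker options are worth double-checking. A sketch per the 3.7 docs; the
user (geoaccount), group (geogroup), and volume (slavevol) names here are
placeholders:

```shell
# /etc/glusterfs/glusterd.vol on the slave nodes
option mountbroker-root /var/mountbroker-root
option mountbroker-geo-replication.geoaccount slavevol
option geo-replication-log-group geogroup
option rpc-auth-allow-insecure on
```

glusterd needs a restart after editing this file.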
> On Mar 4, 2016, at 3:20 PM, Gmail <b.s.mikh...@gmail.com> wrote:
>
> That’s where I got it from:
>
> http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/epel-6.
That’s where I got it from:
http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/epel-6.5/x86_64/
—Bishoy
> On Mar 4, 2016, at 3:15 PM, Kaleb Keithley wrote:
>
>
>
> - Original Message -
>> From:
>>
>> Hi,
>>
>> I’m trying to
Hi,
I’m trying to install Gluster 3.7.8 RPMs on CentOS 6.5 and I get the following
error:
Error: Package: glusterfs-ganesha-3.7.8-1.el6.x86_64
(/glusterfs-ganesha-3.7.8-1.el6.x86_64)
Requires: /usr/bin/dbus-send
I’ve checked whether dbus is installed, and I found these RPMs
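One way to check which package should provide the missing path on CentOS 6 is
sketched below (output depends on your configured repos):

```shell
# which installed package owns the path, if any
rpm -qf /usr/bin/dbus-send

# which available package would provide it
yum provides '*/dbus-send'
```

On EL6 the binary normally comes with the base dbus package, so a plain
`yum install dbus` is usually the fix.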
I’m trying to use ZFS with Gluster 3.7.3 and I see the following error in the
logs:
E [MSGID: 106419] [glusterd-utils.c:4973:glusterd_add_inode_size_to_dict]
0-management: could not find (null) to get inode size for zpool/brick01 (zfs):
(null) package missing?
Did anybody notice this error?
I don’t understand why you want to set quorum with only two servers. It
doesn’t make sense at all!
Simply create a distributed-replicated volume with no quorum on your two-node
cluster; when one node goes down, the volume will still be RW without any
problems.
How come you want to set
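As a sketch (the volume name and brick paths are placeholders), quorum can be
left unset or explicitly disabled on a two-node volume:

```shell
# hypothetical two-node replica volume
gluster volume create gvol001 replica 2 node1:/brick/gvol001 node2:/brick/gvol001

# make sure neither quorum mechanism is enforced
gluster volume set gvol001 cluster.server-quorum-type none
gluster volume set gvol001 cluster.quorum-type none
```

Note the trade-off: with quorum disabled, a split between the two nodes can
lead to split-brain files that need manual healing.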
If you are using ZFS as the underlying filesystem:
ZFS by default stores the extended attributes in a hidden directory instead of
extending the file inode size like XFS does!
There is a problem in the ZFS on Linux implementation where the function
responsible for deleting files deletes
Find my answers inline.
— Bishoy
> On Feb 11, 2016, at 11:42 AM, Atul Yadav wrote:
>
> Hi Team,
>
>
> I am totally new to GlusterFS and am evaluating it for my requirement.
>
> I need your valuable input on achieving the below requirement:
> File locking
Gluster
Kris,
You can achieve what you want with Corosync + Pacemaker: Corosync provides the
heartbeat/messaging layer and Pacemaker is the cluster resource manager.
You can create a Pacemaker cluster using the hosts used for the Gluster
cluster, then configure a virtual IP resource and Gluster monitoring resources
with the count of the
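A minimal sketch of the virtual IP part with the pcs tool (the resource name,
IP, and netmask are placeholders for your environment):

```shell
# floating IP that Pacemaker moves between the Gluster hosts
pcs resource create gluster_vip ocf:heartbeat:IPaddr2 \
    ip=192.168.1.100 cidr_netmask=24 \
    op monitor interval=10s
```

Clients then mount via the virtual IP, so a node failure only costs them the
failover time rather than a hard mount error.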
Find the answers inline.
— Bishoy
> On Dec 4, 2015, at 6:54 AM, Mountrakis, Michael
> wrote:
>
> Hi all
>
> The scenario that I am thinking to implement has as follows:
>
> Mount a volume locally to my Node1 as i-scsi:
> mybox1# iscsiadm -m node
You can do the following:
# gluster volume set $vol performance.io-thread-count 64
Today’s CPUs are powerful enough to handle 64 threads per volume.
# gluster volume set $vol client.event-threads XX
XX depends on the number of connections from the FUSE client to the server; you
can get this number
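One way to see the client connections per brick, as a sketch ($vol is a
placeholder for your volume name):

```shell
# list the clients connected to each brick of the volume
gluster volume status $vol clients
```

The per-brick client counts give a reasonable starting point for sizing
client.event-threads.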
Does anyone run Gluster on IPv6?
-Bishoy
> On Oct 30, 2015, at 1:14 PM, Gmail <b.s.mikh...@gmail.com> wrote:
>
> Hello,
>
> I’m trying to use IPv6 with Gluster 3.7.5, but when I do peer probe, I get
> the following error:
>
> peer probe: failed: Probe re
Try restarting glusterd on all the storage nodes.
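A sketch, assuming passwordless SSH and EL6-style init scripts (the node names
are placeholders):

```shell
# restart the management daemon on every storage node
for node in node1 node2 node3; do
    ssh "$node" 'service glusterd restart'
done
```

Restarting glusterd does not interrupt the brick processes, so client I/O
continues while the management layer re-establishes its peer connections.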
-Bishoy
> On Oct 30, 2015, at 4:26 AM, Thomas Bätzler wrote:
>
> Hi,
>
> can somebody help me with fixing our 8 node gluster please?
>
> Setup is as follows:
>
> root@glucfshead2:~# gluster volume info
>
> Volume Name:
Hello,
I’m trying to use IPv6 with Gluster 3.7.5, but when I do peer probe, I get the
following error:
peer probe: failed: Probe returned with Transport endpoint is not connected
and the logs show the following:
E [MSGID: 101075] [common-utils.c:306:gf_resolve_ip6] 0-resolver: getaddrinfo
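For the getaddrinfo failure above, one thing worth checking is whether glusterd
itself is configured for IPv6. A sketch, assuming the default volfile location
on this setup:

```shell
# /etc/glusterfs/glusterd.vol — inside the management volume section
option transport.address-family inet6
```

glusterd needs a restart on all nodes after this change, and the peer probe
should then resolve AAAA records.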
We have a build machine and the target machines are limited in their
capabilities. Hence cannot use packages.
Thank you,
Chitra
On Oct 15, 2013, at 6:16 PM, Joe Julian j...@julianfamily.org wrote:
Looks like maybe something's not where it's expected to be? Any reason why
you're not just
Try running glusterd --debug and see if that offers any better help. If
that doesn't, perhaps using strace could give you some better clues.
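A sketch of the strace suggestion (the output path is a placeholder):

```shell
# follow child processes and save the syscall trace for later inspection
strace -f -o /tmp/glusterd.trace glusterd --debug
```

Grepping the trace for ENOENT near the end usually shows which path or library
the daemon failed to find.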
On 10/15/2013 7:07 PM, Chitra Gmail wrote:
We have a build machine and the target machines are limited in their
capabilities. Hence cannot use packages
I have the same problem with 3.2.6; from time to time, on a random basis, some
server gives me the "Transport endpoint is not connected" error.
I have to reboot the server to make it connect again.
I run Fedora 16 and Gluster 3.2.6-2.
- Original Message -
From: Brian Candler b.cand...@pobox.com
To:
I have the same problem here with Gluster 3.1.2.
In Photoshop CS3, the JPG files don't open.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
Will there be a Fedora release of 3.1.3?
Because in http://download.gluster.com/pub/gluster/glusterfs/3.1/3.1.3/
there is no Fedora RPM.
Thanks!
What is the best setup to use replication with 3 servers in 3.0.4?
Is it the type 'cluster/afr'?
Or is there another way? I tried with type cluster/raid0 but it didn't work.
Thanks!
Eduardo
How do I use glusterfs-volgen to create a Replicate volume of 3 servers?
With option raid=1 I can only use 2 or 4 servers, not 3.
I want to use 3 identical replicated servers.
Does glusterfs-volgen support AFR?
Thanks!
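For three-way replication in that era, the hand-written volfile style can be
sketched as follows (the subvolume names are hypothetical; match them to the
protocol/client sections in your own client volfile):

```shell
# legacy client volfile fragment: one AFR translator over three bricks
volume replicate
  type cluster/replicate
  subvolumes client1 client2 client3
end-volume
```

glusterfs-volgen's raid=1 mode only pairs bricks, which is why it rejects an
odd server count; hand-editing the volfile works around that.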