Thanks Joe for the correction. However, as of now we do not have a
mechanism to let the clients know how they should behave based on the
op-version, the way glusterd does. That is the only reason why we
always encourage upgrading the servers first, followed by the
clients.
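For reference, a rough way to check which op-version a server is running (assuming a stock install with the default state directory) is to read glusterd's info file on each node:
# grep operating-version /var/lib/glusterd/glusterd.info
operating-version=30703
The number shown is only illustrative; 30703 would correspond to 3.7.3.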
~Atin
On 08/1
On 08/13/2015 07:42 PM, Andreas Hollaus wrote:
Isn't the trusted.afr.dirty attribute missing from Brick 2? Shouldn't
it be increased and decreased, but never removed?
If one brick of a replica 2 setup is down and files are written to the volume, the
dirty xattr is never set on the brick that is up. I
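As a quick check, the dirty marker can be inspected directly on each brick with getfattr; the path below is just the file from this thread, and the attribute may legitimately be all zeroes (or absent) on a healthy file:
# getfattr -n trusted.afr.dirty -e hex /opt/lvmdir/c2/brick/logfiles/security/EVENT_LOG.xml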
On 08/13/2015 07:30 PM, Lakshmi Anusha wrote:
Hello,
We managed to collect the following command outputs:
Brick1> getfattr -d -m . -e hex
/opt/lvmdir/c2/brick/logfiles/security/EVENT_LOG.xml
getfattr: Removing leading '/' from absolute path names
# file: opt/lvmdir/c2/brick/logfiles/security/EVENT_
"... a client which is running higher version than the server [...]
glusterd will reject the mount request." is incorrect. You are correct
in your understanding that the RPC layer is supposed to be cross-version
compatible with newer features not being allowed as long as older
versions are in the cl
OK, thank you. I had read online that Gluster supported backward
compatibility, in the sense of newer client versions working with older server
versions. Thanks for the clarification…
From: Atin Mukherjee [mailto:atin.mukherje...@gmail.com]
Sent: Thursday, August 13, 2015 10:46 AM
To: Taylor
I am currently testing gluster v3.7.3 on Scientific Linux 7.1 and a newly
created gluster volume. After transferring some files to the volume over
the fuse mount, the volume log is flooded with 2.5GB of errors like the
following:
[2015-08-13 15:54:36.921622] W [fuse-bridge.c:1230:fuse_err_cbk]
0-g
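If the flood makes the log unusable while this is investigated, one possible stop-gap (not a fix for the underlying errors, and <volname> is a placeholder) is to raise the client log level on the volume:
# gluster volume set <volname> diagnostics.client-log-level ERROR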
You can find the gluster news of the week #30/2015 below
https://medium.com/@msvbhat/gluster-news-of-the-week-30-2015-30452f44a144
You should be able to see the same on www.planet.gluster.org very shortly.
If you have anything that needs to be mentioned in the news of the week, please
add it here
-Atin
Sent from one plus one
On Aug 13, 2015 9:11 PM, "Taylor Lewick" wrote:
>
> As a follow up, I created a test gluster cluster, installed 3.6, and
upgraded it to 3.7. I verified client machines running glusterfs 3.6 and
3.7 could mount the volume via glusterfs…
>
>
>
> So currently, clients ru
As a follow up, I created a test gluster cluster, installed 3.6, and upgraded
it to 3.7. I verified client machines running glusterfs 3.6 and 3.7 could
mount the volume via glusterfs…
So currently, clients running 3.7 can’t mount a gluster volume via glusterfs if
the gluster servers are runnin
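For completeness, the fuse mounts referred to here are of the usual form, with server and volume names as placeholders:
# mount -t glusterfs <server>:/<volname> /mnt/<volname>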
Can we have some volunteers for these BZs?
-Atin
Sent from one plus one
On Aug 12, 2015 12:34 PM, "Kaushal M" wrote:
> Hi Csaba,
>
> These are the updates regarding the requirements, after our meeting
> last week. The specific updates on the requirements are inline.
>
> In general, we feel that t
Hi,
Isn't the trusted.afr.dirty attribute missing from Brick 2? Shouldn't it be
increased
and decreased, but never removed?
Could that be the reason why GlusterFS is confused?
What could be the reason for gfid mismatches?
Regards
Andreas
> Brick1> getfattr -d -m . -e hex config.ior
> # file: c
On Mon, Aug 10, 2015 at 09:19:25AM +0100, Thibault Godouet wrote:
> Thanks Niel for your helpful answer.
>
> Regarding the locking, indeed that solves my issue. Now I'm wondering how
> to monitor this. The best I have so far is to get the list of RPC binds and
> the TCP/UDP port in particular, and th
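One simple way to list the registered RPC programs and their current TCP/UDP ports on a server (presumably what is meant by the RPC binds above) is rpcinfo, with the host name as a placeholder:
# rpcinfo -p <server> | grep -E 'nlockmgr|status|mountd'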
Hi,
Previously we used GlusterFS v3.5.3 and we could achieve around 1.1 GB/s for
read/write on a plain distributed volume.
Now, since I upgraded GlusterFS to 3.7.3, I can only achieve around 700 MB/s
(max).
Here some additional information concerning my volume.
# gluster volume info vol_workd
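For what it's worth, a crude way to reproduce such read/write figures on the fuse mount is shown below; the mount point, file size and use of direct I/O are only assumptions about how the numbers were measured:
# dd if=/dev/zero of=/mnt/<volname>/testfile bs=1M count=10240 oflag=direct
# dd if=/mnt/<volname>/testfile of=/dev/null bs=1M iflag=direct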
Hi Prasun,
pNFS was recently released in a "tech-preview" form. With multiple
MDS-es or even an all-symmetric arch (every ganesha node can act as both
DS and MDS, which will also be a supported config) you could potentially
see improvements (due to increased throughput) but, from our experimen
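For context, a pNFS-capable client mounts the Ganesha export over NFSv4.1 or later, along these lines (server and export names are placeholders):
# mount -t nfs -o vers=4.1 <ganesha-server>:/<volname> /mnt/pnfs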
Suggestion for the future: more granular auto-delete for snapshots. Something
like a sliding window, e.g. keep hourly snapshots for the last day, daily
snapshots for the past week, weekly ones for the month, etc., i.e. tapering
frequency as you go further into the past. Right now, I think it's
just a to
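For comparison, the current knobs are, as far as I know, just a count-based limit plus a global auto-delete switch, roughly:
# gluster snapshot config <volname> snap-max-hard-limit 256
# gluster snapshot config auto-delete enable
A tapering-by-age policy like the one suggested above would have to sit on top of these.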