On 02/26/2018 02:03 PM, Shyam Ranganathan wrote:
> Hi,
>
> RC1 is tagged in the code, and the request to package it is on
> its way.
>
> We should have packages as early as today, and we request the community to
> test them and send back some feedback.
>
> We have about 3-4 days (till
Hi,
Thanks for the link to the bug. We should hopefully be moving to 3.12 soon,
so I guess this bug is also fixed there.
Best regards,
M.
‐‐‐ Original Message ‐‐‐
On February 27, 2018 9:38 AM, Hari Gowtham wrote:
>
>
> Hi Mabi,
>
> The bug is fixed from
Does anyone have any ideas about how to fix, or work around, the
following issue?
Thanks!
Bug 1549714 - On sharded tiered volume, only first shard of new file
goes on hot tier.
https://bugzilla.redhat.com/show_bug.cgi?id=1549714
Dear all,
I identified the source of the problem:
if I set "server.root-squash on", then the problem is 100% reproducible,
with "server.root-squash off", the problem vanishes.
This is true for glusterfs 3.12.3, 3.12.4 and 3.12.6 (I haven't tested other
versions).
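(In case it helps anyone reproduce this, the option can be toggled as below;
"myvol" is just a placeholder for the actual volume name:)

    gluster volume set myvol server.root-squash on    # problem reproducible
    gluster volume set myvol server.root-squash off   # problem disappears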
best wishes,
Stefan
--
Dr.
What is your Gluster setup? Please share the volume details for the volume
where the VMs are stored. It could be that the slow host is the one holding
the arbiter volume.
Alex
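(For reference, volume details of this kind can be gathered with commands like
the following; "myvol" is a placeholder volume name:)

    gluster volume info myvol      # shows the replica/arbiter layout and volume options
    gluster volume status myvol    # shows per-brick status on each host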
On Feb 26, 2018 13:46, "Ryan Wilkinson" wrote:
> Here is the info about the RAID controllers. They don't seem to be the culprit.
>
> Slow
Hi Mabi,
The bug is fixed from 3.11 onwards. For 3.10 it is yet to be backported and
made available.
The bug is https://bugzilla.redhat.com/show_bug.cgi?id=1418259.
On Sat, Feb 24, 2018 at 4:05 PM, mabi wrote:
> Dear Hari,
>
> Thank you for getting back to me after having analysed
On Tue, Feb 27, 2018 at 1:40 PM, Dave Sherohman wrote:
> On Tue, Feb 27, 2018 at 12:00:29PM +0530, Karthik Subrahmanya wrote:
> > I will try to explain how you can end up in split-brain even with cluster
> > wide quorum:
>
> Yep, the explanation made sense. I hadn't
Hello Gluster Community,
While reading this article:
https://github.com/gluster/glusterfs-specs/blob/master/under_review/worm-compliance.md
there seems to be an interesting feature planned for the WORM Xlator:
*Scheduled Auto-commit*: scan triggered using timeouts for untouched files.
The next
Hi David,
Yes, it is a good-to-have feature, but AFAIK it is currently not on the
priority/focus list.
Anyone from the community who is interested in implementing this is most
welcome to do so.
Otherwise you will need to wait some more time until it comes into focus.
Thanks & Regards,
Karthik
On Tue, Feb 27,
On Tue, Feb 27, 2018 at 5:35 PM, Dave Sherohman wrote:
> On Tue, Feb 27, 2018 at 04:59:36PM +0530, Karthik Subrahmanya wrote:
> > > > Since arbiter bricks need not be of the same size as the data bricks,
> > > > if you can configure three more arbiter bricks
> > > > based on
On Tue, Feb 27, 2018 at 05:50:49PM +0530, Karthik Subrahmanya wrote:
> gluster volume add-brick <volname> replica 3 arbiter 1 <arbiter brick1> <arbiter brick2>
> is the command. It will convert the existing volume to arbiter volume and
> add the specified bricks as arbiter bricks to the existing subvols.
> Once they are successfully
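(As an illustration only, with made-up volume and brick names, converting an
existing 2 x 2 distributed-replicate volume to arbiter could look like this:)

    gluster volume add-brick myvol replica 3 arbiter 1 \
        arbhost1:/bricks/arb/subvol1 arbhost2:/bricks/arb/subvol2
    gluster volume info myvol   # should now report "Number of Bricks: 2 x (2 + 1) = 6"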
Any updates on this one?
On Mon, Feb 5, 2018 at 8:18 AM, Tom Fite wrote:
> Hi all,
>
> I have seen this issue as well, on Gluster 3.12.1. (3 bricks per box, 2
> boxes, distributed-replicate) My testing shows the same thing -- running a
> find on a directory dramatically
On Tue, Feb 27, 2018 at 03:20:25PM +0530, Karthik Subrahmanya wrote:
> If you want to use the first two bricks as arbiter, then you need to be
> aware of the following things:
> - Your distribution count will be decreased to 2.
What's the significance of this? I'm trying to find documentation on
On Tue, Feb 27, 2018 at 4:18 PM, Dave Sherohman wrote:
> On Tue, Feb 27, 2018 at 03:20:25PM +0530, Karthik Subrahmanya wrote:
> > If you want to use the first two bricks as arbiter, then you need to be
> > aware of the following things:
> > - Your distribution count will be
On Tue, Feb 27, 2018 at 04:59:36PM +0530, Karthik Subrahmanya wrote:
> > > Since arbiter bricks need not be of the same size as the data bricks, if you
> > > can configure three more arbiter bricks
> > > based on the guidelines in the doc [1], you can do it live and you will
> > > have the
We got extremely slow stat calls on our disperse cluster running the latest
3.12, with clients also running 3.12.
When we downgraded clients to 3.10 the slow stat problem went away.
We later found out that by disabling disperse.eager-lock we could run the
3.12 clients without much issue (a little bit
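(For reference, the option mentioned above can be disabled with the following
command; "myvol" is a placeholder volume name:)

    gluster volume set myvol disperse.eager-lock off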
Hi Jeff,
Tier and shard are not supported together.
There is a chance of more bugs in this area, as not much effort has
been put into it.
And I don't see this support being added in the near future.
On Tue, Feb 27, 2018 at 11:45 PM, Jeff Byers wrote:
> Does
All volumes are configured as replica 3. I have no arbiter volumes.
Storage hosts are for storage only and Virt hosts are dedicated Virt
hosts. I've checked throughput from the Virt hosts to all 3 gluster hosts
and am getting ~9Gb/s.
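(For what it's worth, a network throughput check of this kind can be done with
iperf3; the host name below is a placeholder:)

    iperf3 -s                    # on the gluster host
    iperf3 -c gluster1 -t 30     # from the virt host, reports throughput to gluster1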
On Tue, Feb 27, 2018 at 1:33 AM, Alex K
Hi Jose,
There is a known issue with gluster 3.12.x builds (see [1]) so you may be
running into this.
The "shared-brick-count" values seem fine on stor1. Please send us "grep -n
"share" /var/lib/glusterd/vols/volumedisk1/*" results for the other nodes
so we can check if they are the cause.
Hi,
Yes. This bug is fixed in 3.12.
On Wed, Feb 28, 2018 at 1:18 AM, mabi wrote:
> Hi,
>
> Thanks for the link to the bug. We should be hopefully moving soon onto 3.12
> so I guess this bug is also fixed there.
>
> Best regards,
> M.
>
>
> ‐‐‐ Original Message ‐‐‐
Hi,
A few days ago my glusterfs configuration was all working fine. Today I
realized that the total size reported by the df command has changed and is
smaller than the aggregated capacity of all the bricks in the volume.
I checked that the status of all the volumes is fine, and all the glusterd
daemons are