Yes Atin. I'll take a look.
On Wed, Dec 20, 2017 at 11:28 AM, Atin Mukherjee wrote:
Looks like a bug as I see tier-enabled = 0 is an additional entry in the
info file in shchhv01. As per the code, this field should be written into
the glusterd store if the op-version is >= 30706. What I am guessing is
since we didn't have the commit 33f8703a1 "glusterd: regenerate volfiles on
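If it helps, the running op-version can be checked and compared against 30706. A minimal sketch, assuming a live node; the heredoc stands in for the real output of `gluster volume get all cluster.op-version`, and the value 30702 is made up:

```shell
# Hedged sketch: compare the cluster op-version with 30706, the version
# from which glusterd writes "tier-enabled" into the volume info file.
# Replace the heredoc with the real command output on a live node.
opver=$(awk '/cluster.op-version/ {print $2}' <<'EOF'
Option                                  Value
------                                  -----
cluster.op-version                      30702
EOF
)
if [ "$opver" -ge 30706 ]; then
  echo "tier-enabled is expected in the info file"
else
  echo "tier-enabled should not be written (op-version $opver < 30706)"
fi
```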
Hi,
Can you provide the
- volume info
- shd log
- mount log
of the volumes which are showing pending entries, to debug the issue.
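For reference, a sketch of where those artifacts usually live, assuming the default /var/log/glusterfs layout; the volume name and mount point below are hypothetical placeholders:

```shell
# Hedged sketch of collecting the three items requested above.
VOLNAME=myvol          # hypothetical volume name
MOUNTPOINT=/mnt/myvol  # hypothetical client mount point

# 1. volume info:  gluster volume info "$VOLNAME" > "${VOLNAME}-info.txt"
# 2. self-heal daemon log (default location):
SHDLOG=/var/log/glusterfs/glustershd.log
# 3. client mount log: glusterfs names it after the mount point,
#    with '/' replaced by '-':
MOUNTLOG="/var/log/glusterfs/$(echo "${MOUNTPOINT#/}" | tr '/' '-').log"
echo "$MOUNTLOG"
```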
Thanks & Regards,
Karthik
On Wed, Dec 20, 2017 at 3:11 AM, Matt Waymack wrote:
> Mine also has a list of files that seemingly never heal. They
I was attempting the same thing in a local sandbox and hit the same problem.
Current: 3.8.4
Volume Name: shchst01
Type: Distributed-Replicate
Volume ID: bcd53e52-cde6-4e58-85f9-71d230b7b0d3
Status: Started
Snapshot Count: 0
Number of Bricks: 4 x 3 = 12
Transport-type: tcp
Bricks:
Brick1:
I have not done the upgrade yet. Since this is a production cluster I
need to make sure it stays up, or schedule some downtime if it doesn't.
Thanks.
On Tue, Dec 19, 2017 at 10:11 AM, Atin Mukherjee wrote:
>
>
> On Tue, Dec 19, 2017 at 1:10 AM, Ziemowit Pierzycki
Mine also has a list of files that seemingly never heal. They are usually
isolated on my arbiter bricks, but not always. I would also like to find an
answer for this behavior.
-----Original Message-----
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On
I have a glusterfs setup with distributed disperse volumes 5 * ( 4 + 2 ).
After a server crash, "gluster peer status" reports all peers as connected.
"gluster volume status detail" shows that all bricks are up and running
with the right size, but when I use df from a client mount point, the size
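As a sanity check on what df should report for a 5 x (4 + 2) distributed-disperse volume: only the 4 data bricks of each subvolume count toward capacity, not the 2 redundancy bricks. A rough sketch with a made-up brick size:

```shell
# Hedged back-of-the-envelope: usable capacity of a 5 x (4 + 2)
# distributed-disperse volume. BRICK_GB is a made-up example size.
SUBVOLS=5
DATA_BRICKS=4     # per subvolume; the 2 redundancy bricks add no capacity
BRICK_GB=1000     # hypothetical size of each brick in GB
USABLE=$((SUBVOLS * DATA_BRICKS * BRICK_GB))
echo "expected df size: ${USABLE} GB"   # 20000 GB for these numbers
```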
Ben,
For this set of tests we are using bricks provisioned on RAID storage. We are
not trying to test the performance of a tiered volume right now. The goal is to
find a solution for handling large files that do not fit into the hot tier.
You are correct that there are a lot of promotions and demotions of
Hello list,
I'm not sure what to look for here, or whether what I'm seeing is the
actual "backlog" (which we need to make sure is empty during a
rolling upgrade before going to the next node). How can I tell, while
reading this, whether it's okay to reboot / upgrade my next node in the pool
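One way to read the backlog is to total the "Number of entries" lines from `gluster volume heal <vol> info`; a common rule of thumb is to proceed to the next node only when every brick reports 0. A sketch, where the heredoc stands in for real command output (brick names and entries are made up):

```shell
# Hedged sketch: sum the pending heal entries across all bricks.
# On a live node, pipe `gluster volume heal <vol> info` into awk instead.
pending=$(awk -F': ' '/^Number of entries:/ {sum += $2} END {print sum+0}' <<'EOF'
Brick server1:/bricks/b1
/dir/file1
Status: Connected
Number of entries: 2

Brick server2:/bricks/b1
Status: Connected
Number of entries: 0
EOF
)
echo "pending heal entries: $pending"   # proceed only when this is 0
```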
On Tue, Dec 19, 2017 at 1:10 AM, Ziemowit Pierzycki
wrote:
> Hi,
>
> I have a cluster of 10 servers all running Fedora 24 along with
> Gluster 3.8. I'm planning on doing rolling upgrades to Fedora 27 with
> Gluster 3.12. I saw the documentation and did some testing but
On Mon, Dec 18, 2017 at 06:10:29PM +0100, Michael Adam wrote:
>
> Heketi v5.0.1 is now available.
Packages for the CentOS Storage SIG are now becoming available in the
testing repository. Packages can be obtained (soon) with the following
steps:
# yum --enablerepo=centos-gluster*-test update