updated test server to 3.8.3
Brick1: 192.168.71.10:/gluster2/brick1/1
Brick2: 192.168.71.11:/gluster2/brick2/1
Brick3: 192.168.71.12:/gluster2/brick3/1
Options Reconfigured:
cluster.granular-entry-heal: on
performance.readdir-ahead: on
performance.read-ahead: off
nfs.disable: on
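For reference, options like these are reconfigured per volume with the gluster CLI. A minimal sketch, using a placeholder volume name since the actual name is not shown in the snippet above:

# apply the options shown above (placeholder <VOLNAME>)
gluster volume set <VOLNAME> cluster.granular-entry-heal on
gluster volume set <VOLNAME> performance.readdir-ahead on
gluster volume set <VOLNAME> performance.read-ahead off
gluster volume set <VOLNAME> nfs.disable on
# confirm they appear under 'Options Reconfigured'
gluster volume info <VOLNAME>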
On Tue, Aug 30, 2016 at 10:02 AM, David Gossage wrote:
> updated test server to 3.8.3
>
> Brick1: 192.168.71.10:/gluster2/brick1/1
> Brick2: 192.168.71.11:/gluster2/brick2/1
> Brick3: 192.168.71.12:/gluster2/brick3/1
> Options Reconfigured:
>
On Tue, Aug 30, 2016 at 8:52 AM, David Gossage
wrote:
> On Tue, Aug 30, 2016 at 8:01 AM, Krutika Dhananjay
> wrote:
>
>>
>>
>> On Tue, Aug 30, 2016 at 6:20 PM, Krutika Dhananjay
>> wrote:
>>
>>>
>>>
>>> On Tue, Aug 30, 2016
Hello,
We have a 5 node gluster setup with the following configuration;
[root@gb0015nasslow01 Test]# gluster vol info all
Volume Name: data
Type: Tier
Volume ID: 702daa3d-b3fa-4e66-bcec-a13f7ec1d47d
Status: Started
Number of Bricks: 6
Transport-type: tcp
Hot Tier :
Hot Tier Type : Replicate
The problem seems to have started after an upgrade to 3.7.12 from an older
version, though we're not sure exactly how.
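As a side note, the state of a tiered volume like the one above can be inspected with the tier CLI available in the 3.7.x series; a minimal sketch against the 'data' volume:

# tier activity (promotions/demotions) and overall volume layout
gluster volume tier data status
gluster volume info data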
> On Aug 30, 2016, at 10:44 AM, Sergei Gerasenko wrote:
>
> It seems that it did the trick. The usage is being recalculated. I’m glad to
> be posting a solution to the
It seems that it did the trick. The usage is being recalculated. I'm glad to be
posting a solution to the original problem on this thread. Too often, threads
contain only incomplete or partial solutions.
Thanks,
Sergei
> On Aug 29, 2016, at 3:41 PM, Sergei Gerasenko
Hi Sergei,
Apologies for the delay. I am extremely sorry, I was stuck on something
important.
It's great that you figured out the solution.
Whenever you set the dirty flag as mentioned in the previous thread, the
quota values will be recalculated.
Yep, as you mentioned, there are a lot of changes
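For anyone finding this later: the dirty flag referred to here is an xattr set on the directory inside each brick, which makes the quota/marker accounting get recomputed. The exact xattr value comes from the earlier thread (not quoted here), so treat this as an illustrative sketch with a hypothetical brick path rather than a verified procedure:

# mark the directory's quota accounting as dirty so it is recalculated
# (xattr name/value assumed from common gluster quota fix-ups, not from this thread)
setfattr -n trusted.glusterfs.quota.dirty -v 0x3100 /bricks/brick1/data/projects
# verify the xattrs on the brick directory
getfattr -d -m . -e hex /bricks/brick1/data/projects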
Hi
We have just migrated our data to a new file server (more space, the old
server was showing its age). We have a volume for collaborative use,
based on group membership. On our new server, the group write
permissions are not being respected (e.g. the owner of a directory can
still write to
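Not from this thread, but a common baseline to compare against: collaborative group-writable directories usually rely on the setgid bit and, optionally, default POSIX ACLs, with the volume mounted with ACL support. A sketch with placeholder server, volume, group, and path names:

# mount the gluster volume with ACL support
mount -t glusterfs -o acl server:/shared /mnt/shared
# group-owned, setgid so new entries inherit the group, group-writable
chgrp projectgrp /mnt/shared/collab
chmod 2775 /mnt/shared/collab
# optional: default ACL so newly created entries stay group-writable
setfacl -d -m g:projectgrp:rwx /mnt/shared/collab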
Tried this.
With me, only 'fake2' gets healed after I bring the 'empty' brick back up,
and it stops there unless I do a 'heal full'.
Is that what you're seeing as well?
-Krutika
On Wed, Aug 31, 2016 at 4:43 AM, David Gossage
wrote:
> Same issue brought up glusterd
Hi Gluster team,
The weekly Gluster bug triage is about to take place in 50 min.
Meeting details:
- location: #gluster-meeting on Freenode IRC
( https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
(in your terminal, run: date -d
On Tue, Aug 30, 2016 at 7:18 AM, Krutika Dhananjay
wrote:
> Could you also share the glustershd logs?
>
I'll get them when I get to work sure.
>
> I tried the same steps that you mentioned multiple times, but heal is
> running to completion without any issues.
>
> It must
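For reference, the self-heal daemon writes to a fixed log file on each server node by default, so gathering the requested logs is typically just a matter of collecting, from every node:

# default location, assuming stock logging configuration
ls -l /var/log/glusterfs/glustershd.log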
On Tue, Aug 30, 2016 at 6:07 PM, David Gossage
wrote:
> On Tue, Aug 30, 2016 at 7:18 AM, Krutika Dhananjay
> wrote:
>
>> Could you also share the glustershd logs?
>>
>
> I'll get them when I get to work sure.
>
>
>>
>> I tried the same steps
On Tue, Aug 30, 2016 at 6:20 PM, Krutika Dhananjay
wrote:
>
>
> On Tue, Aug 30, 2016 at 6:07 PM, David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>> On Tue, Aug 30, 2016 at 7:18 AM, Krutika Dhananjay
>> wrote:
>>
>>> Could you also share the
Could you also share the glustershd logs?
I tried the same steps that you mentioned multiple times, but heal is
running to completion without any issues.
It must be said that 'heal full' traverses the files and directories in a
depth-first order and heals them in the same order. But if it
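For completeness, the commands being discussed, shown here with a placeholder volume name:

# trigger the full crawl-and-heal (the depth-first traversal described above)
gluster volume heal <VOLNAME> full
# watch progress / remaining entries
gluster volume heal <VOLNAME> info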
Hi,
I'm about to bump a 1x3 (replicated) volume up to 2x3, but I just realised the
3 new servers
are physically in the same datacenter. Is there a safe way to swap a brick
from the first
replica set with one from the second replica set?
The only way I see how would be to go down to replica
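One possible approach, sketched here with placeholder volume and brick names and assuming spare capacity for a temporary brick, is a chain of replace-brick operations rather than dropping the replica count. Wait for self-heal to finish, and wipe the freed brick directory, between steps:

# move a brick out of replica set 1 onto a temporary location
gluster volume replace-brick <VOLNAME> dc1-server:/bricks/b1 spare-server:/bricks/tmp commit force
# reuse the freed (and wiped) brick for replica set 2
gluster volume replace-brick <VOLNAME> dc2-server:/bricks/b4 dc1-server:/bricks/b1 commit force
# finally move the temporary copy into replica set 1's slot
gluster volume replace-brick <VOLNAME> spare-server:/bricks/tmp dc2-server:/bricks/b4 commit force
# check healing after each step
gluster volume heal <VOLNAME> info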
On Tue, Aug 30, 2016 at 7:50 AM, Krutika Dhananjay
wrote:
>
>
> On Tue, Aug 30, 2016 at 6:07 PM, David Gossage <
> dgoss...@carouselchecks.com> wrote:
>
>> On Tue, Aug 30, 2016 at 7:18 AM, Krutika Dhananjay
>> wrote:
>>
>>> Could you also share the
On Mon, Aug 29, 2016 at 11:25 PM, Darrell Budic
wrote:
> I noticed that my new brick (replacement disk) did not have a .shard
> directory created on the brick, if that helps.
>
> I removed the affected brick from the volume and then wiped the disk, did
> an add-brick, and
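For readers hitting the same issue, the sequence described there maps roughly onto the following CLI steps; this is a sketch with placeholder volume, server, device, and brick names, assuming a replica-3 volume:

# drop the bad brick from the replica set
gluster volume remove-brick <VOLNAME> replica 2 server3:/bricks/brick3/vol force
# wipe and re-create the filesystem on the disk (destructive)
mkfs.xfs -f /dev/sdX
mount /dev/sdX /bricks/brick3
mkdir /bricks/brick3/vol
# add it back and let self-heal repopulate it, including the .shard directory
gluster volume add-brick <VOLNAME> replica 3 server3:/bricks/brick3/vol
gluster volume heal <VOLNAME> full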