--
Dave Sherohman
On Thu, Jun 27, 2019 at 12:17:10PM +0530, Nithya Balachandran wrote:
> On Tue, 25 Jun 2019 at 15:26, Dave Sherohman wrote:
> > My objective is to remove nodes B and C entirely.
> >
> > First up is to pull their bricks from the volume:
> >
> > # gluster volume rem
OK, I'm just careless. Forgot to include "start" after the list of
bricks...
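(For the archives, the full remove-brick sequence looks roughly like
this -- volume and brick names here are placeholders, not my actual
ones:

# gluster volume remove-brick myvol nodeB:/var/local/brick0 nodeC:/var/local/brick0 start
# gluster volume remove-brick myvol nodeB:/var/local/brick0 nodeC:/var/local/brick0 status
# gluster volume remove-brick myvol nodeB:/var/local/brick0 nodeC:/var/local/brick0 commit

"start" kicks off migrating the data onto the remaining bricks,
"status" reports progress, and "commit" finalizes the removal once the
migration is done.)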
On Fri, Jun 28, 2019 at 04:03:40AM -0500, Dave Sherohman wrote:
> On Thu, Jun 27, 2019 at 12:17:10PM +0530, Nithya Balachandran wrote:
> > On Tue, 25 Jun 2019 at 15:26, Dave Sherohman wrote:
...c953c676-152d-4826-80ff-bd307fa7f6e5.10724
-rw-r--r-- 2 root libvirt-qemu 4194304 Apr 11 2018 c953c676-152d-4826-80ff-bd307fa7f6e5.3101
--- cut here ---
--
Dave Sherohman
e "-T" permissions are internal files and can be
> ignored. Ravi and Krutika, please take a look at the other files.
>
> Regards,
> Nithya
>
>
> On Fri, 28 Jun 2019 at 19:56, Dave Sherohman wrote:
>
> > On Thu, Jun 27, 2019 at 12:17:10PM +0530, Nithya Balachandran wrote:
a of the data.
As you can see, using an arbiter gives you (nearly) as much data
security as an additional replica, while consuming a tiny, tiny fraction
of the space that would be "lost" to an additional full replica. If
you're trying to maximize usable capacity in your volume...
Task Status of Volume palantir
------------------------------------------------------------------------------
Task                 : Rebalance
ID                   : c38e11fe-fe1b-464d-b9f5-1398441cc229
Status               : completed
--
Dave Sherohman
...everything from /var/local/brick0, and then
re-add it to the cluster as if I were replacing a physically failed
disk? Seems like that should work in principle, but it feels dangerous
to wipe the partition and rebuild, regardless.
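(One less drastic option, for anyone who finds this later: recent
gluster versions have a reset-brick command meant for exactly this
wipe-and-reuse case. A rough sketch, with placeholder volume/host
names:

# gluster volume reset-brick myvol nodeA:/var/local/brick0 start
  (wipe and recreate the filesystem on /var/local/brick0)
# gluster volume reset-brick myvol nodeA:/var/local/brick0 nodeA:/var/local/brick0 commit force

after which self-heal repopulates the brick from its replica partner.)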
On Tue, Feb 13, 2018 at 07:33:44AM -0600, Dave Sherohman wrote:
>
fsck it was
enough to trigger gluster to recheck everything. I'll check after it
finishes to see whether this ultimately resolves the issue.
--
Dave Sherohman
On Fri, Feb 16, 2018 at 05:44:43AM -0600, Dave Sherohman wrote:
> On Thu, Feb 15, 2018 at 09:34:02PM +0200, Alex K wrote:
> > Have you checked for any file system errors on the brick mount point?
>
> I hadn't. fsck reports no errors.
>
> > What about the heal? Does...
...bricks 2-3-4-5-6 still together, then brick 1
will recognize that it doesn't have volume-wide quorum and reject
writes, thus allowing brick 2 to remain authoritative and able to accept
writes.
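(The knobs that control this behavior, for reference -- these are the
standard volume options, though double-check them against your
version's docs; "myvol" is a placeholder:

# gluster volume set myvol cluster.server-quorum-type server
# gluster volume set myvol cluster.quorum-type auto

The first enforces pool-wide server quorum; the second makes each
replica set require a majority of its bricks before accepting writes.)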
--
Dave Sherohman
months already with the current
configuration and there are several virtual machines running off the
existing volume, so I'll need to reconfigure it online if possible.
--
Dave Sherohman
$ df -h /var/local/brick0
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/gandalf-gluster  885G   55G  786G   7% /var/local/brick0

and the other four have

$ df -h /var/local/brick0
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1        11T  254G   11T   3% /var/local/brick0
--
Dave Sherohman
...allocated for arbiter bricks if it would be significantly
simpler and safer than repurposing the existing bricks (and I'm getting
the impression that it probably would be). Does it particularly matter
whether the arbiters are all on the same node or on three separate
nodes?
--
Dave Sherohman
brick] [arbiter 1] [azathoth brick] [yog-sothoth brick] [arbiter 2] [cthulhu
brick] [mordiggian brick] [arbiter 3]
Or is there more to it than that?
--
Dave Sherohman
> Once the bricks are successfully added, self heal should start automatically and
> you can check the status of heal using the command,
> gluster volume heal <volname> info
OK, done and the heal is in progress. Thanks again for your help!
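(For anyone watching a heal of their own: the per-brick pending count
is handy for tracking progress. The volume name here is mine;
substitute yours:

# gluster volume heal palantir statistics heal-count

Repeat it now and then and the counts should trend toward zero.)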
--
Dave Sherohman
They store only file metadata, not file contents, so you can
just scrape up a little spare disk space on two of your boxes, call that
space an arbiter, and run with it. In my case, I have 10T data bricks
and 100G arbiter bricks; I'm using a total of under 1G across all
arbiter bricks for 3T of data.
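(If you'd rather estimate than guess at arbiter sizing: the usual rule
of thumb is about 4 KB per file stored, so counting the files on an
existing data brick gives a ballpark. The path is mine; adjust:

# find /var/local/brick0 -type f | wc -l

Multiply the count by ~4 KB and round up generously.)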
be.
In my case, I have three subvolumes (three replica pairs), which means I
need three arbiters and those could be spread across multiple nodes, of
course, but I don't think saying "I want 12 arbiters instead of 3!"
would be supported.
--
Dave Sherohman
r how to access the
volume.)
--
Dave Sherohman
> ...datastore if you use replication since all writes are
> multiplied.
Yep, that's the price you pay for HA.
Also, although the writes are multiplied, they're also (at least
partially) concurrent, so performance isn't as bad as "divide by the
number of replicas".
--
Dave
what is the command used to build this?
# gluster volume create my-volume replica 3 arbiter 1 \
    host1:/path/to/brick host2:/path/to/brick arb-host1:/path/to/brick \
    host4:/path/to/brick host5:/path/to/brick arb-host2:/path/to/brick \
    host3:/path/to/brick host6:/path/to/brick arb-host3:/path/to/brick
> ...path/to/brick (arb-)host6:/path/to/brick2 host3:/path/to/brick
> host6:/path/to/brick (arb-)host1:/path/to/brick2
>
> is this a sane command?
Yep, looks reasonable to me aside from the "replica 2" needing to be
"replica 3".
--
Dave Sherohman
...over a full day later, it's still at 59.
Is there anything I can do to kick the self-heal back into action and
get those final 59 entries cleaned up?
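(The standard nudges, for anyone else stuck here -- both are stock
commands; the volume name is mine:

# gluster volume heal palantir
# gluster volume heal palantir full

The bare form re-triggers an index heal of the already-flagged entries;
"full" crawls the entire volume, which is slower but catches anything
the index missed.)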
--
Dave Sherohman
On Tue, Sep 04, 2018 at 05:32:53AM -0500, Dave Sherohman wrote:
> Is there anything I can do to kick the self-heal back into action and
> get those final 59 entries cleaned up?
In response to the request about what version of gluster I'm running
(...which I deleted prematurely...
On Fri, Sep 07, 2018 at 10:46:01AM +0530, Pranith Kumar Karampuri wrote:
> On Tue, Sep 4, 2018 at 6:06 PM Dave Sherohman wrote:
>
> > On Tue, Sep 04, 2018 at 05:32:53AM -0500, Dave Sherohman wrote:
> > > Is there anything I can do to kick the self-heal back into action and
to mean making changes to it.
> May be that may convince you to re-consider your stance about the
> upgrade to one of the active stable releases on gluster and then we
> can see if you still face the problem and we could help fix it in
> further releases.
Sounds
...be done live? About how long
should we expect it to take to upgrade a 23T (4.5T used) replica 2+A
volume with three subvolumes?
--
Dave Sherohman
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
what other nodes it might attempt to connect to.
I primarily use gluster for VM disk images, so, in my case, I list all
the gluster nodes in the VM definition and, if the first one isn't
reachable, then it tries the second and so on until it finds one that's
available to connect to.
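(The FUSE-mount equivalent of that failover list, in case it's useful
-- hostnames here are placeholders:

# mount -t glusterfs -o backup-volfile-servers=node2:node3 node1:/palantir /mnt/palantir

backup-volfile-servers gives the client fallback servers for fetching
the volfile when the first one is unreachable, which is the same idea
as listing every node in the VM definition.)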
What...
...have a fully-consistent cluster again?
On Tue, Sep 04, 2018 at 05:32:53AM -0500, Dave Sherohman wrote:
> Last Friday, I rebooted one of my gluster nodes and it didn't properly
> mount the filesystem holding its brick (I had forgotten to add it to
> fstab...), so, when I got back to
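(Lesson learned: give every brick an fstab entry so it comes back on
its own after a reboot. Something like this -- the device and
filesystem type here are guesses; use whatever your brick actually
lives on:

/dev/sdb1   /var/local/brick0   xfs   defaults   0   0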
...you list only the new bricks in the add-brick command; they get
attached to the existing replica sets in the order given. So if you
have bricks D1-1, D1-2, D2-1, D2-2, D3-1, and D3-2, adding arbiters
(A-1 through A-3) would be
gluster volume add-brick MyVolume replica 3 arbiter 1 A-1 A-2 A-3
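(And afterwards, a quick sanity check:

# gluster volume info MyVolume

should report "Number of Bricks: 3 x (2 + 1) = 9" and, on reasonably
recent versions, tag the third brick of each replica set as the
arbiter.)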