On 1 February 2017 at 19:30, Jesper Led Lauridsen TS Infra server wrote:
> Arbiter, isn't that only used where you want replica, but the same storage
> space?
>
> I would like a distributed volume where I can write, even if one of the
> bricks fails. No replication.
>
>
DHT does not
Release notes are still pending, other comments *inline*.
Tomorrow we will release beta1 for the package maintainers to roll out
the packages.
On 01/31/2017 08:42 PM, Shyam wrote:
Update day 2:
1) Brick multiplexing patch is merged into master, and now backported to
release-3.10. Yay!
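As a quick sketch of how the new feature is expected to be switched on once
3.10 ships (option name per the 3.10 release notes; it is off by default and
set cluster-wide):
$ gluster volume set all cluster.brick-multiplex on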
dear all
should gluster update geo-repl when a volume changes?
e.g. bricks are added or taken away.
The reason I'm asking is that it does not seem like gluster is
doing it on my systems.
Well, I see gluster removed a node from geo-repl, the brick that
I removed.
But I added a brick to a vol and it's
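For what it's worth, a sketch of how the session state can be checked after
such a brick change (the master volume and slave names below are placeholders):
$ gluster volume geo-replication mastervol slavehost::slavevol status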
On 01/02/17 19:30, lejeczek wrote:
On 01/02/17 14:44, Atin Mukherjee wrote:
I think you have hit
https://bugzilla.redhat.com/show_bug.cgi?id=1406411 which
has been fixed in mainline and will be available in
release-3.10 which is slated for next month.
To prove you have hit the same
On 01/02/17 14:44, Atin Mukherjee wrote:
I think you have hit
https://bugzilla.redhat.com/show_bug.cgi?id=1406411 which
has been fixed in mainline and will be available in
release-3.10 which is slated for next month.
To prove you have hit the same problem can you please
confirm the
I think you have hit https://bugzilla.redhat.com/show_bug.cgi?id=1406411
which has been fixed in mainline and will be available in release-3.10
which is slated for next month.
To prove you have hit the same problem can you please confirm the following:
1. Which Gluster version are you running?
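(A quick sketch of how to check the version, assuming an RPM-based install;
the package query is only one option:)
$ gluster --version
$ rpm -q glusterfs-server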
hi,
I have a four-peer gluster cluster and one of the peers is failing, well, kind of...
If on a working peer I do:
$ gluster volume add-brick QEMU-VMs replica 3
10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs
force
volume add-brick: failed: Commit failed on whale.priv Please
check log file for
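A sketch of the usual first checks here (the glusterd log path below is the
default location; run the tail on whale.priv, the peer that rejected the commit):
$ gluster peer status
$ tail -n 50 /var/log/glusterfs/glusterd.log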
Arbiter, isn't that only used where you want replica, but the same storage space?
I would like a distributed volume where I can write, even if one of the bricks
fails. No replication.
Thanks
Jesper
> -Original Message-
> From: Cedric Lemarchand [mailto:yipik...@gmail.com]
> Sent: 1.
Short answer: I think you need to add an arbiter node; this way the cluster
keeps being writable when there are at least 2 nodes present (e.g. 1 data node
is down). This solves the split-brain case where only 2 nodes are involved in
the setup.
Cheers
--
Cédric Lemarchand
> On 1 Feb. 2017 at
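A sketch of the setup Cédric describes, with hypothetical hostnames and brick
paths; the third brick is the arbiter and holds metadata only:
$ gluster volume create myvol replica 3 arbiter 1 \
      node1:/bricks/b1 node2:/bricks/b2 node3:/bricks/arbiter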
Hi,
I am wondering if it is possible to create an always-writeable distributed
volume.
Reading the documentation I can't figure out how. So is it possible?
If I understand the docs correctly, DHT determines, based on a hash of the
filename, which brick the file is placed on. And if you have two
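As an aside, which brick DHT picked for a given file can be read from the
pathinfo xattr on a client mount (mount point and file name here are hypothetical):
$ getfattr -n trusted.glusterfs.pathinfo /mnt/distvol/somefile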
hi everyone,
trying geo-repl for the first time, I've followed the official howto and the
process claimed "success" up until I went for status: "Faulty"
Errors I see:
...
[2017-02-01 12:11:38.103259] I [monitor(monitor):268:monitor]
Monitor: starting gsyncd worker
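In case it helps, a sketch of where to dig further (master/slave names are
placeholders; the log directory is the default master-side location):
$ gluster volume geo-replication mastervol slavehost::slavevol status detail
$ ls /var/log/glusterfs/geo-replication/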
We have a new development release of GD2.
GD2 now supports volfile fetch and portmap requests, so clients are
finally able to mount volumes using the mount command. Portmap doesn't
work reliably yet, so there might be failures.
GD2 was refactored to clean up the main function and standardize the
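A sketch of the mount that should now work against GD2 (server and volume
names are placeholders, and given the portmap caveat above it may still fail):
$ mount -t glusterfs server1:/testvol /mnt/testvol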