Hello everybody,
I have a bit of a situation here.
I want to move some volumes to new hosts. The idea is to add the new
bricks to the volume, sync, and then drop the old bricks.
The starting point is:
Volume Name: Server_Monthly_02
Type: Replicate
Volume ID: 0ada8e12-15f7-42e9-9da3-2734b04e04e9
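For a replicated volume, that plan usually maps onto raising the replica count with add-brick, letting self-heal copy the data, and then dropping the old bricks with remove-brick. A minimal sketch, assuming a replica-2 volume; the host names and brick paths below are hypothetical placeholders, not taken from the original message:

```shell
# Add the new bricks by raising the replica count so they join the
# existing set (hosts and paths are placeholders).
gluster volume add-brick Server_Monthly_02 replica 4 \
    newhost1:/bricks/monthly02 newhost2:/bricks/monthly02

# Let self-heal sync the data; wait until no entries are pending.
gluster volume heal Server_Monthly_02 info

# Drop the old bricks, returning to replica 2 on the new hosts only.
gluster volume remove-brick Server_Monthly_02 replica 2 \
    oldhost1:/bricks/monthly02 oldhost2:/bricks/monthly02 force
```

These commands need a live cluster, so treat them as an outline to adapt rather than something to paste verbatim.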
On 09/04/2018 21:36, Shyam Ranganathan wrote:
On 04/09/2018 04:48 AM, Marco Lorenzo Crociani wrote:
On 06/04/2018 19:33, Shyam Ranganathan wrote:
Hi,
We postponed this and I did not announce this to the lists. The number
of bugs fixed against 3.10.12 is low, and I decided to move this to the
3
Dear all,
I encountered the same issue. I saw that it is fixed in 3.12.7, but I
cannot find that release in the main repo (CentOS Storage SIG), only in
the test one.
When is this release expected to be available in the main repo?
Greetings,
Paolo
On 09/03/2018 10:41, Stefan So
Hello Gluster Community,
following this set of steps, I have configured network encryption for
management and I/O traffic:
https://www.cyberciti.biz/faq/how-to-enable-tlsssl-encryption-with-glusterfs-storage-cluster-on-linux/
I have chosen the option for self-signed certificates, so each of the nod
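For reference, the self-signed variant of that setup boils down to generating a key and certificate on each node and concatenating every node's certificate into a shared CA file. A sketch, assuming the default /etc/ssl paths that GlusterFS looks for; the CN value is a placeholder:

```shell
# On each node: generate a private key and a self-signed certificate
# (GlusterFS reads glusterfs.key / glusterfs.pem from /etc/ssl by default).
SSLDIR=/etc/ssl
openssl genrsa -out "$SSLDIR/glusterfs.key" 2048
openssl req -new -x509 -key "$SSLDIR/glusterfs.key" \
    -subj "/CN=node1.example.com" -days 365 -out "$SSLDIR/glusterfs.pem"

# Then concatenate every node's glusterfs.pem into $SSLDIR/glusterfs.ca
# on all nodes, and enable management-path encryption:
touch /var/lib/glusterd/secure-access
```

The CN you put in -subj is what the ssl-allow volume option matches against, so it is worth keeping it consistent across nodes.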
Hi,
Could you help me?
I have a problem with a file on a disperse volume. When I try to read it
from the mount point I receive an error:
# md5sum /mnt/glfs/vmfs/slake-test-bck-m1-d1.qcow2
md5sum: /mnt/glfs/vmfs/slake-test-bck-m1-d1.qcow2: Input/output error
Configuration and status of volume is:
#
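An Input/output error on a disperse volume usually means more fragments of the file are bad or unavailable than the redundancy level can absorb, so the heal state is worth checking first. A sketch of the usual first checks; the volume name below is a placeholder:

```shell
# Pending heals per brick; the affected file showing up on many
# bricks here is a bad sign.
gluster volume heal myvol info

# Entries GlusterFS considers in split-brain.
gluster volume heal myvol info split-brain

# Per-brick online status; EIO often coincides with offline bricks.
gluster volume status myvol
```

These need a running cluster, so they are only an outline of where to look.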
On 09/04/18 22:15, Vincent Royer wrote:
Thanks,
The 3 servers are new Lenovo units with redundant PSUs backed by two
huge UPS units (one for each bank of power supplies). I think the
chances of losing two nodes are incredibly slim, and in that case a
disaster recovery from offsite backups would
I guess you went through the user lists and already tried something like this:
http://lists.gluster.org/pipermail/gluster-users/2018-April/033811.html
I have the exact same setup, and below is as far as it got after months of
trial and error.
We all have roughly the same setup and the same issue with this - you ca
All,
Thanks to Nigel, this is now deployed, and any new patches referencing
github (i.e., new features) need the 'DocApproved' and 'SpecApproved' labels.
Regards,
Amar
On Mon, Apr 2, 2018 at 10:40 AM, Amar Tumballi wrote:
> Hi all,
>
> A better documentation about the feature, and also informatio