hi all
I want to have a backup of my glusterfs storage for disaster recovery, but
I don't want to use the geo-replication solution.
Is there any way to use a snapshot for disaster recovery? I don't know if I
am right or not, but is it the right way?
first: make an initial copy of the storage in
0-99] everything will be ok,
but my experience today shows that I was wrong.
any help will be appreciated ;)
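For the snapshot route above, a minimal sketch with the stock gluster CLI. The volume and snapshot names are placeholders, and gluster snapshots require bricks on thin-provisioned LVM; note also that a snapshot lives on the same cluster, so by itself it does not protect against losing the whole site:

```shell
# Placeholders: volume "myvol", snapshot "mysnap".
# Prerequisite: bricks on thin-provisioned LVM.

# Take a point-in-time snapshot of the volume
gluster snapshot create mysnap myvol

# List and inspect existing snapshots
gluster snapshot list myvol
gluster snapshot info mysnap

# To roll back after a disaster, the volume must be stopped first
gluster volume stop myvol
gluster snapshot restore mysnap
gluster volume start myvol
```

For true disaster recovery the snapshot (or a copy made from it) still has to be shipped off-site somehow, which is the gap geo-replication normally fills.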
On Mon, Feb 5, 2018 at 4:04 PM, atris adam <atris.a...@gmail.com> wrote:
> I have mounted the halo glusterfs volume in debug mode, and the output is
> as follows:
> .
>
ncy 0' is out of range [1 - 9]
so I think the range [1 - 9] should change to [0 - 9], so that I can get
the desired brick selection for the halo feature. Am I right? If not, why
does halo decide to mark down the best brick, which has a ping time below
0.5 ms?
On Sun, Feb 4, 2018 at 2:27 PM, atris a
I have 2 data centers in two different regions; each DC has 3 servers. I
have created a glusterfs volume with 4 replicas. This is the glusterfs
volume info output:
Volume Name: test-halo
Type: Replicate
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1:
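For reference, a sketch of how a 4-replica halo volume like this one might be created and tuned. The host names, brick paths, and latency threshold are assumptions, not taken from the post:

```shell
# Hypothetical hosts: dc1-a, dc1-b (region 1); dc2-a, dc2-b (region 2)
gluster volume create test-halo replica 4 \
  dc1-a:/bricks/test-halo dc1-b:/bricks/test-halo \
  dc2-a:/bricks/test-halo dc2-b:/bricks/test-halo

# Enable halo replication and cap the latency (in ms) a brick may have
# before it is treated as outside the halo
gluster volume set test-halo cluster.halo-enabled yes
gluster volume set test-halo cluster.halo-max-latency 10

gluster volume start test-halo
```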
I have made a distributed replica-3 volume with 6 nodes. I mean this:
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: f271a9bd-6599-43e7-bc69-26695b55d206
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 10.0.0.2:/brick
Brick2:
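The 2 x 3 layout above can be built with one create command. Brick1 (10.0.0.2:/brick) is from the volume info; the other five addresses are assumptions for illustration:

```shell
# replica 3 with 6 bricks yields Distributed-Replicate, 2 x 3 = 6;
# bricks are grouped into replica sets in the order listed
gluster volume create testvol replica 3 \
  10.0.0.2:/brick 10.0.0.3:/brick 10.0.0.4:/brick \
  10.0.0.5:/brick 10.0.0.6:/brick 10.0.0.7:/brick

gluster volume start testvol
gluster volume info testvol
```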
Hi all
I have two data centers in two different regions (A and B). I have created
a glusterfs volume with replica 3; one replica is in one region and the
other two replicas are in the other region.
I have enabled the halo replication feature. I mount the volume in data
center A with its public ip and mount
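A minimal sketch of the mount step described above. The public host names, volume name, and mount point are assumptions:

```shell
# Mount the halo volume from data center A's public address
mount -t glusterfs dc-a-public.example.com:/test-halo /mnt/halo

# Optionally name fallback servers the client can fetch the volfile
# from if the primary is unreachable
mount -t glusterfs \
  -o backup-volfile-servers=dc-b-public.example.com \
  dc-a-public.example.com:/test-halo /mnt/halo
```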
geo-replication. It's not
>> planned in the near future. We will let you know when it's planned.
>>
>> Thanks,
>> Kotresh HR
>>
>> On Tue, Oct 24, 2017 at 11:19 AM, atris adam <atris.a...@gmail.com>
>> wrote:
>>
>>> hi everybody,
>
a glusterfs volume?
On Tue, Oct 24, 2017 at 1:47 PM, <lemonni...@ulrar.net> wrote:
> Hi,
>
> You can, but unless the two datacenters are very close, it'll be slow as
> hell. I tried it myself and even a 10ms ping between the bricks is
> horrible.
>
> On Tue, Oct 24, 2017
Hi
I have two data centers, and each of them has 3 servers. These two data
centers can see each other over the internet.
I want to create a distributed glusterfs volume with these 6 servers, but I
have only one valid ip in each data center. Is it possible to create a
glusterfs volume? Can anyone guide
hi everybody,
Has glusterfs released a feature named active-active georeplication? If
yes, in which version was it released? If not, is it planned to have this
feature?
___
Gluster-users mailing list
Gluster-users@gluster.org
hi everybody,
as "
http://www.itzgeek.com/how-tos/linux/centos-how-tos/install-and-configure-glusterfs-on-centos-7-rhel-7.html"
says:
GlusterFS is an open-source, scalable network filesystem suitable for
high data-intensive workloads such as media streaming, cloud storage, and
CDN (Content
Hi everybody, I have a question about the network interface used for
tiering in glusterfs. If I have a 1G nic on the glusterfs servers and
clients, can I get more performance by setting up glusterfs tiering? Or
should the network interface be 10G?
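For context on what tiering involves, a sketch of the tier commands (tiering shipped with gluster 3.7 and was later deprecated). The volume name, host names, and SSD brick paths are placeholders:

```shell
# Attach a 2-brick replicated hot tier backed by faster disks
gluster volume tier myvol attach replica 2 \
  node1:/ssd/brick node2:/ssd/brick

# Watch promotion/demotion activity between the tiers
gluster volume tier myvol status

# Begin draining the hot tier before removing it
gluster volume tier myvol detach start
```

The hot tier only helps if the fast bricks are not bottlenecked by the same 1GbE link, which is the crux of the question above.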
hi all,
I want to know some more detail about glusterfs georeplication, more about
the syncdaemon. If 'file A' was mirrored in the slave volume and a change
happens to 'file A', how does the syncdaemon act?
1. transfer the whole 'file A' to the slave
2. transfer only the changes of 'file A' to the slave
thx a lot
I have 7 servers. I want to have openstack on them and also set up
glusterfs storage. Which one is better?
1. deploy openstack on all 7 servers and set up glusterfs storage on vms
2. deploy openstack on some servers and set up glusterfs storage on the
other servers (physical nodes)?
which model is more
Hi everybody
I have some questions about throughput for a glusterfs volume.
I have 3 servers for glusterfs, each with one brick and 1GbE for their
network. I have made a distributed replica-3 volume with these 3 bricks.
The network between the clients and the servers is 1GbE.
refer to this link:
I have setup a distributed glusterfs volume with 3 servers. The network is
1GbE, and I ran a filebench test from a client.
refer to this link:
https://s3.amazonaws.com/aws001/guided_trek/Performance_in_a_Gluster_Systemv6F.pdf
The more servers gluster has, the more throughput it should gain. I have tested
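As a sanity check on numbers like these: with client-side replica-3, the client's single 1GbE uplink carries every write three times, so write throughput tops out well below wire speed while reads can use the full link. A rough back-of-the-envelope in shell (line-rate figures only; protocol overhead ignored):

```shell
LINK_MBPS=1000                  # 1GbE line rate in megabits/s
LINK_MBYTES=$((LINK_MBPS / 8))  # ~125 MB/s raw
REPLICA=3                       # replica-3: client sends each write 3x

echo "max write throughput ~ $((LINK_MBYTES / REPLICA)) MB/s"
echo "max read throughput  ~ ${LINK_MBYTES} MB/s"
```

So roughly 41 MB/s of writes is the ceiling on this setup regardless of how many servers are added, unless the client NIC gets faster.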
Hello everybody
I have 3 datacenters in different regions. Can I deploy my own cloud
storage with the help of glusterfs on the physical nodes? If I can, what
are the differences between glusterfs cloud storage and local gluster
storage?
thx for your attention :)
Hello everybody,
I have created a glusterfs volume with a local ip. I want to mount the
glusterfs volume with a valid ip somewhere else. The client machine cannot
ping the local ip; I mean it is in another network.
Is it possible to create a glusterfs volume in one network lan and mount it
in
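One workaround, sketched under the assumption that the volume was created with host names rather than raw local IPs: the native client contacts every brick directly, so each server needs an address the client can reach, and those names can be mapped per-client in /etc/hosts. The addresses, host names, and volume name below are placeholders:

```shell
# On the client: point the cluster's host names at addresses this
# network can actually reach (e.g. public or NATed IPs)
cat >> /etc/hosts <<'EOF'
203.0.113.10  gluster-node1
203.0.113.11  gluster-node2
203.0.113.12  gluster-node3
EOF

mount -t glusterfs gluster-node1:/myvol /mnt/gluster
```

If the volume was created with literal local IPs, this trick does not apply; the client would still try to connect to those unreachable addresses, so a VPN or routed link between the networks is the usual answer.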
Hi everybody,
I have two data centers in two different provinces. Each data center has
3 servers. I want to set up cloud storage with glusterfs.
I want to make one glusterfs volume with this information:
province "a" ==> 3 servers, each server has one 5TB brick (brick numbers
are from 1-3)
I want to use glusterfs as a storage for ESXi. I have mounted it via NFS. I
want to make a VM with the eager-zeroed thick option, but it is disabled. I
have searched a lot and found out that NFS does not allow creating a thick
vm unless hardware acceleration is supported on the storage.
ESXI