Hello,

On 02/25/2016 11:42 AM, Simone Taliercio wrote:
> Hi Ravi,
>
> Thanks a lot for your prompt reply!

> 2016-02-25 6:07 GMT+01:00 Ravishankar N <ravishan...@redhat.com>:
>> I don't know what your use case is, but I don't think you want to create so
>> many replicas.
> I need to scale my application across multiple nodes because we have to
> handle a high number of requests per second. The application is
> hosted on AWS EC2 instances, and each one uses EBS.
> Each instance needs to read some local files. That's why I'm currently
> replicating all the files on each instance. So far I've had at most 3 nodes.

>> Why not just create a glusterfs volume of replica-2 or replica-3, or even an
>> arbiter volume, and mount it on all 17 nodes?
> Here I'm definitely missing some basics (sorry for that): what are the
> steps to set up 3 replicas (+1 arbiter) while still allowing all 17 nodes
> to retrieve the files?
The steps are almost the same as for replica 3, except for a small change in the syntax: `gluster volume create <volname> replica 3 arbiter 1 <host1:brick> <host2:brick> <host3:brick>`. The third brick becomes the arbiter.
> Are they transferred on demand over the network?
It's just like accessing an NFS share. You mount the volume on any machine and you can perform I/O.
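To make that concrete, here is a minimal sketch of the arbiter setup Ravi describes; the hostnames, brick paths, and the volume name `gv0` are illustrative placeholders, not from the thread:

```shell
# Create a replica-3 volume whose third brick is an arbiter: it holds
# only file names and metadata (no data), so it tie-breaks split-brain
# without needing a full third copy of the data.
gluster volume create gv0 replica 3 arbiter 1 \
    host1:/data/brick1/gv0 host2:/data/brick1/gv0 host3:/data/brick1/gv0
gluster volume start gv0

# Any machine with the glusterfs FUSE client can then mount the volume,
# much like an NFS share, and perform I/O on it:
mount -t glusterfs host1:/gv0 /mnt/gv0
```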


> So far I followed these steps:
> - mount an empty "hard disk" to the instance
> - format it as XFS
> - create a directory for one brick
> - add peers
> - create 1 volume with "... replica 3 <host1:brick> <host2:brick>...."
> - mount the volume on a specific path
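Simone's steps above could be sketched roughly as follows; the device name, mount points, hostnames, and the volume name `gv0` are illustrative assumptions:

```shell
# On each of the 3 storage nodes: prepare the brick filesystem
mkfs.xfs -i size=512 /dev/xvdb     # format the attached EBS volume as XFS
mkdir -p /data/brick1
mount /dev/xvdb /data/brick1       # mount the brick filesystem
mkdir -p /data/brick1/gv0          # directory that will serve as the brick

# On one node only: form the trusted pool and create the volume
gluster peer probe host2
gluster peer probe host3
gluster volume create gv0 replica 3 \
    host1:/data/brick1/gv0 host2:/data/brick1/gv0 host3:/data/brick1/gv0
gluster volume start gv0

# On any node: mount the volume at the path the application reads from
mount -t glusterfs host1:/gv0 /mnt/gv0
```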

> And then I can see one file that is created on one machine being
> replicated on all the others. I have a limited vision of this use case :-/
Right. You can mount the replica 3 volume that you just created on any node. Like I said, it's just like accessing a remote share, except that the 'share' is a glusterfs volume you created. If I understand your use case correctly, you would create a glusterfs volume backed by 3 EC2 instances and then mount that volume on all of the (17?) instances on which your application runs.
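In other words, only the 3 storage nodes host bricks; each of the 17 application instances is just a client. A hypothetical client-side mount (hostnames and paths are placeholders; the `backup-volfile-servers` option lets the client fetch the volume layout from another server if `host1` is down at mount time):

```shell
# Run on each application instance (requires the glusterfs FUSE client)
mount -t glusterfs \
    -o backup-volfile-servers=host2:host3 \
    host1:/gv0 /mnt/shared
```

After this, a file written to /mnt/shared on any instance is visible on all of them, with the data itself stored on the 3 replica bricks.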

HTH,
Ravi

>> I think replica 4 is what some people in the community have used at the max,
>> but even that is overkill IMO.
> Help :)

> Thanks,
> Simone

Regards,
Ravi

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
