Re: [Gluster-users] Where are the latest geo-replication docs
This might help: https://github.com/gluster/glusterfs/blob/release-3.6/doc/admin-guide/en-US/markdown/admin_distributed_geo_rep.md

Let me know if you need any help setting up geo-replication.

--
regards
Aravinda
http://aravindavk.in

On 04/11/2015 12:05 AM, Eric Berg wrote:

I'm trying to set up geo-replication, but have only found older docs. I'm running 3.6. I have found http://blog.gluster.org/2015/04/glusterfs-geo-replication-tutorials-understanding-session-creation-2/, which is a presentation rather than technical docs, but it suggests running "gluster system:: execute gsec_create" to create the keys on the master. When I execute that command, I get this error:

unrecognized word: execute (position 1)

Generally speaking, I'm looking for a complete, current set of setup docs.

Thanks.
Eric

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
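For reference, the 3.6-era session creation flow from that admin guide looks roughly like the sketch below. The volume and host names (mastervol, slavehost, slavevol) are placeholders, and the commands are echoed as a dry run so the script is safe to read through; on a real master node you would run the gluster commands directly.

```shell
#!/bin/sh
# Sketch of the 3.6-era geo-replication session setup. Volume and host names
# (mastervol, slavehost, slavevol) are placeholders. Commands are echoed as a
# dry run; on a real master node, run the gluster commands directly.
run() { echo "+ $*"; }

# 1. Generate the secret pem keys on the master cluster
run gluster system:: execute gsec_create

# 2. Create the session, distributing the keys to the slave volume's nodes
run gluster volume geo-replication mastervol slavehost::slavevol create push-pem

# 3. Start the session, then check its health
run gluster volume geo-replication mastervol slavehost::slavevol start
run gluster volume geo-replication mastervol slavehost::slavevol status
```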
Re: [Gluster-users] Glusterfs performance tweaks
There is something that's not clear in what you are describing. Gluster doesn't come into play until you access your data through the glusterfs mount. You can even stop your gluster volume and the glusterfs daemon to confirm that they are not interfering with your writes to the brick in any way. What you are describing sounds like an issue with the way you have partitioned your drive or set up the filesystem, which is probably XFS in the case of glusterfs if you are using the defaults. Are you comparing the same file system in both cases?

On Fri, Apr 10, 2015 at 11:45 AM, Punit Dambiwal hypu...@gmail.com wrote:

Hi Ben,

That means that if I do not attach the SSD to a brick, and do not even install glusterfs on the server, it gives me throughput of about 300 MB/s, but once I install glusterfs and add this SSD to a glusterfs volume, it gives me 16 MB/s...

On Fri, Apr 10, 2015 at 9:32 PM, Ben Turner btur...@redhat.com wrote:

- Original Message -
From: Punit Dambiwal hypu...@gmail.com
To: Ben Turner btur...@redhat.com
Cc: Vijay Bellur vbel...@redhat.com, gluster-users@gluster.org
Sent: Thursday, April 9, 2015 9:36:59 PM
Subject: Re: [Gluster-users] Glusterfs performance tweaks

Hi Ben,

But without glusterfs, if I run the same command with dsync on the same SSD, it gives me good throughput; the whole setup (CPU, RAM, network) is the same, and the only difference is no glusterfs...
[root@cpu09 mnt]# dd if=/dev/zero of=test bs=64k count=4k oflag=dsync
4096+0 records in
4096+0 records out
268435456 bytes (268 MB) copied, 0.935646 s, 287 MB/s
[root@cpu09 mnt]#

But on top of glusterfs it gives very slow performance. I run an SSD trim every night to clean up garbage collection. I think something needs to be done on the gluster or OS side to improve the performance; otherwise there is no point in using all-SSD with gluster, because with all SSDs you get performance slower than SATA.

On Fri, Apr 10, 2015 at 2:12 AM, Ben Turner btur...@redhat.com wrote:

- Original Message -
From: Punit Dambiwal hypu...@gmail.com
To: Vijay Bellur vbel...@redhat.com
Cc: gluster-users@gluster.org
Sent: Wednesday, April 8, 2015 9:55:38 PM
Subject: Re: [Gluster-users] Glusterfs performance tweaks

Hi Vijay,

If i run the same command directly on the brick...

What does this mean, then? Running directly on the brick to me means running directly on the SSD. The command below is the same as the one above; what changed? -b

[root@cpu01 1]# dd if=/dev/zero of=test bs=64k count=4k oflag=dsync
4096+0 records in
4096+0 records out
268435456 bytes (268 MB) copied, 16.8022 s, 16.0 MB/s
[root@cpu01 1]# pwd
/bricks/1
[root@cpu01 1]#

This is your problem. Gluster is only as fast as its slowest piece, and here your storage is the bottleneck. Given that you get 16 MB/s to the brick and 12 MB/s through gluster, that works out to about 25% overhead, which is what I would expect in a single-thread, single-brick, single-client scenario. This may have something to do with the way SSDs write?
On my SSD at my desk I only get 11.4 MB/s when I run that dd command:

# dd if=/dev/zero of=test bs=64k count=4k oflag=dsync
4096+0 records in
4096+0 records out
268435456 bytes (268 MB) copied, 23.065 s, 11.4 MB/s

My thought is that maybe using dsync forces the SSD to clean the data, or do something else, before writing to it: http://www.blog.solidstatediskshop.com/2012/how-does-an-ssd-write/

Do your drives support fstrim? It may be worth trimming before you run the test to see what results you get. Other than tuning the SSD/OS to perform better on the back end, there isn't much we can do from the gluster perspective for that specific dd with the dsync flag. -b

On Wed, Apr 8, 2015 at 6:44 PM, Vijay Bellur vbel...@redhat.com wrote:

On 04/08/2015 02:57 PM, Punit Dambiwal wrote:

Hi,

I am getting very slow throughput in glusterfs (dead slow; even SATA is better), and I am using all SSDs in my environment. I have the following setup:

A. 4 host machines with CentOS 7 (glusterfs 3.6.2 | distributed replicated | replica=2)
B. Each server has 24 SSDs as bricks (without HW RAID | JBOD)
C. Each server has 2 additional SSDs for the OS
D. Network 2*10G with bonding (2*E5 CPU and 64 GB RAM)

Note: performance/throughput is slower than a normal SATA 7200 RPM drive, even though I am using all SSDs.

Gluster volume options:

Options Reconfigured:
performance.nfs.write-behind-window-size: 1024MB
performance.io-thread-count: 32
performance.cache-size: 1024MB
cluster.quorum-type: auto
cluster.server-quorum-type: server
diagnostics.count-fop-hits: on
diagnostics.latency-measurement:
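Ben's point above is that oflag=dsync forces a flush after every 64 KB write, so the test measures synchronous write latency rather than raw SSD bandwidth. A quick way to see the gap on any Linux machine is to run the same dd with and without the flag (the scratch file path here is arbitrary):

```shell
#!/bin/sh
# Compare buffered writes with per-write-synced (dsync) writes on the same
# filesystem. The dsync run flushes after every 64 KB block, so the gap
# between the two numbers is dominated by device sync latency, not bandwidth.
f=/tmp/dd_dsync_demo
dd if=/dev/zero of="$f" bs=64k count=256 2>&1 | tail -n1
dd if=/dev/zero of="$f" bs=64k count=256 oflag=dsync 2>&1 | tail -n1
rm -f "$f"
```

On most hardware the second number is dramatically lower than the first, which is the effect being discussed in this thread.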
Re: [Gluster-users] a node disappeared
Hi all,

Last problem, I hope. In my 14-node cluster, one node (node 8) is present in "gluster volume info tyty" but not in "gluster volume status tyty". When I start a volume on node 8, the others don't start it; I have to start it on one of the other nodes, but then they don't see node 8 and the volume doesn't accept writes.

So, is there a simple way to re-insert that node, such as a peer detach followed by a peer probe from the seven other nodes? I ask because I have replicated and striped volumes.

Some additional information:
* node 8 is in state 6 and the others in state 3;
* all nodes run the same release of glusterfs.

Hope to read from you soon.

Pierre Léonard
[Gluster-users] gluster volume create
Hi,

I wonder what happens when the command 'gluster volume create...' is executed. How is the file system on the brick affected by that command, in terms of data and metadata (extended attributes)?

The reason for this question is that I have a strange use case where my two (mirrored) servers are restarted. At an early stage of the reboot phase, I have to create a new gluster file system on each server so that the same directory can be used for read access. Later on, I would like to delete these single-server volumes and replace them with the mirrored gluster volume I used before the restart. I guess I can restore the previous volume definition from a backup of the gluster configuration files, but I'm worried that the 'gluster volume create...' command might have affected the brick so that it is in a different state compared to before the restart, when the restored gluster configuration was valid. I realize that for this to work, the extended attributes cannot change while the mirrored volume is stopped.

Any idea if I can use glusterfs like this, or am I violating some rules?

Regards,
Andreas
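On the brick-state question above: to my understanding, 'gluster volume create' stamps each brick root with a volume-id extended attribute, which is exactly the kind of state change Andreas is worried about. A sketch of how to inspect it (the brick path is a placeholder, and the command is echoed as a dry run; run it directly, as root, on a real brick):

```shell
#!/bin/sh
# Inspect the xattrs 'gluster volume create' leaves on a brick root.
# /bricks/brick1 is a placeholder path; the command is echoed as a dry run.
run() { echo "+ $*"; }
run getfattr -d -m . -e hex /bricks/brick1
# On a real brick the output typically includes:
#   trusted.glusterfs.volume-id=0x...  <- ties the brick to one volume;
#                                         glusterd refuses to reuse a brick
#                                         carrying another volume's id
#   trusted.gfid=0x...01               <- GFID of the brick root directory
```

So comparing this xattr (and the .glusterfs directory on the brick) before and after the single-server volume experiment would show whether the brick is still in the state the restored configuration expects.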
[Gluster-users] self-heal performance regression in 3.6
Hi all,

Has anybody observed a performance regression in the self-heal process between the 3.4 and 3.6 gluster releases?

The best self-heal performance I achieved was using 3.4.2 with self-heal-window-size=4 on a large amount of data (several TB) with a mix of small and large files (from 100 KB to 4 MB). Using the self-heal-window-size default value (16) on 3.4.2, or either setting (4 or 16) on 3.6.2, leads to a 33% (up to 50%) slowdown compared to that best-performance baseline with the same dataset. With several TB of data, that means many extra hours on 3.6.2 for the self-heal to complete.

Other than the self-heal-window-size parameter, are there any self-heal performance settings or speed-up tricks I could use on either 3.4 or 3.6?

Any feedback appreciated.

Best regards,
Florent Monbillard
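For readers following along, the tunable being discussed would be set like this (the volume name is a placeholder and the commands are echoed as a dry run; the full option key is cluster.self-heal-window-size in the 3.x series):

```shell
#!/bin/sh
# Setting the self-heal window discussed above (volname is a placeholder;
# commands echoed as a dry run). The option controls how many blocks per
# file self-heal works on at a time.
run() { echo "+ $*"; }
run gluster volume set volname cluster.self-heal-window-size 4
# The full option list, with defaults and descriptions:
run gluster volume set help
```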
[Gluster-users] growth of indices when replicated brick fails
Hi,

When a brick fails for a long time in a replica volume and there are a lot of writes/deletions on the replica brick still online, the indices keep increasing and lead to a very slow self-heal process.

1. What is the maximum number of indices gluster can support? If there is a limit, what happens when we reach it?
2. Can we just wipe them out without any impact?

Best regards,
Florent Monbillard
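For context, the indices in question live (to my understanding of the AFR index mechanism) under each brick's .glusterfs/indices/xattrop directory, and counting the entries there is the usual way to gauge the heal backlog. A sketch, with the brick and volume names as placeholders and the commands echoed as a dry run:

```shell
#!/bin/sh
# Where pending-heal indices live on a brick (placeholder paths/names;
# commands echoed as a dry run). Each entry under indices/xattrop is named
# after the GFID of a file that still needs healing, so the entry count is
# the per-brick heal backlog.
run() { echo "+ $*"; }
run ls /bricks/brick1/.glusterfs/indices/xattrop
# The supported way to view the same information:
run gluster volume heal volname info
```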
[Gluster-users] Question on file reads from a distributed replicated volume
Let us assume that we have a distributed replicated gluster volume with 4 bricks. Brick 1 replicates its data with brick 2, and brick 3 replicates its data with brick 4. Suppose a file called aa.txt is present and replicated on bricks 1 and 2, and a gluster client would like to read this file. Which brick will the file come from? In other words:

- Will the file always be fetched from brick 1?
- Will the file always be fetched from brick 2?
- Will the file be fetched from brick 1 or 2 depending on the attempt? I.e., is there some load balancing where sometimes the file comes from brick 1 and at other times from brick 2?
Re: [Gluster-users] Question on file reads from a distributed replicated volume
- Original Message -
From: Bharathram Sivasubramanian bharathram.sivasubraman...@vantrix.com
To: Gluster-users@gluster.org
Sent: Saturday, April 11, 2015 3:03:48 AM
Subject: [Gluster-users] Question on file reads from a distributed replicated volume

Suppose a file called aa.txt is present and replicated on bricks 1 and 2 [...] Which brick will the file come from?

A given file will consistently be read from one of the N (in this case 2) replicas, as long as that replica holds a good copy of the file (one that does not need heal), even across client restarts. We call it the "read child" of the file. Load balancing within a volume is done across different files: different files have different, but constant, read children.

-Krutika
Re: [Gluster-users] Question on file reads from a distributed replicated volume
Thanks, Krutika!

So, if I understand you correctly: if both brick 1 and brick 2 have a good copy of the file, then we can't tell for sure that the file will be read only from brick 1 all the time and never from brick 2? In other words, for read #1 the file could be read from brick 2, and for another read a few minutes later, the file could be read from brick 1?
Re: [Gluster-users] Question on file reads from a distributed replicated volume
- Original Message -
From: Bharathram Sivasubramanian bharathram.sivasubraman...@vantrix.com
To: Krutika Dhananjay kdhan...@redhat.com
Cc: Gluster-users@gluster.org
Sent: Saturday, April 11, 2015 7:26:27 AM
Subject: Re: [Gluster-users] Question on file reads from a distributed replicated volume

So, if I understand you correctly: if both brick 1 and brick 2 have a good copy of the file, then we can't tell for sure that the file will be read only from brick 1 all the time [...]?

Nope. For a given file, read requests will always be served from the same brick. This means that in your example, both read #1 and read #2 will be served from the same brick, which is the read child of the file. In 3.6.0 and above, the read child of a file is computed by applying a hash function to its GFID, which is guaranteed to always give the same value (because, needless to say, the GFID of a file remains the same throughout its lifetime).

-Krutika
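The rule Krutika describes, read child = hash(GFID) mod replica count, can be illustrated with a toy sketch. Here cksum stands in for gluster's actual hash function (which differs), and the GFIDs are made up; the property demonstrated is the same one she names: a given GFID always maps to the same replica.

```shell
#!/bin/sh
# Toy illustration of deterministic read-child selection: hash the file's
# GFID and take it modulo the replica count. cksum is only a stand-in for
# gluster's real hash; the point is that the same GFID always yields the
# same replica index, while different files may land on different replicas.
pick_read_child() {
    gfid=$1
    replicas=$2
    h=$(printf '%s' "$gfid" | cksum | cut -d' ' -f1)
    echo $((h % replicas))
}
pick_read_child 6a53b42c-7b7a-4b33-9f23-8c51c4b4d0a1 2   # 0 or 1
pick_read_child 6a53b42c-7b7a-4b33-9f23-8c51c4b4d0a1 2   # same GFID -> same value
pick_read_child 0f2d9d5e-1b8e-4a6a-9d30-2f1c7a9b8e64 2   # other files may differ
```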
Re: [Gluster-users] Question on file reads from a distributed replicated volume
Do a "gluster volume set help". There is a pretty good explanation of the read-subvolume preferences and options in there.

On April 10, 2015 6:56:27 PM PDT, Bharathram Sivasubramanian bharathram.sivasubraman...@vantrix.com wrote:

Thanks, Krutika! So, if I understand you correctly: if both brick 1 and brick 2 have a good copy of the file, then we can't tell for sure that the file will be read only from brick 1 all the time and never from brick 2?

--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
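Following the "gluster volume set help" suggestion above, a sketch of looking up and pinning the read subvolume (the volume and subvolume names are placeholders, the commands are echoed as a dry run, and the option names reflect my reading of the 3.x AFR options, so they may vary by release):

```shell
#!/bin/sh
# Looking up the read-subvolume knobs (placeholder names; dry run via echo).
run() { echo "+ $*"; }
run gluster volume set help   # look for cluster.read-subvolume and
                              # cluster.read-hash-mode in the output
# e.g. pinning reads for the whole volume to one replica subvolume:
run gluster volume set volname cluster.read-subvolume volname-replicate-0
```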