Re: [Gluster-users] multi petabyte gluster dispersed for archival?

2020-02-13 Thread Serkan Çoban
Do not use EC with small files. You cannot tolerate losing a 300TB brick; reconstruction will take ages. When I was using glusterfs, EC reconstruction speed was 10-15MB/sec. If you do not lose bricks you will be OK. On Thu, Feb 13, 2020 at 7:38 PM Douglas Duckworth wrote: > > Hello > > I am

Re: [Gluster-users] Max length for filename

2019-01-28 Thread Serkan Çoban
Filename max is 255 bytes, path name max is 4096 bytes. On Mon, Jan 28, 2019 at 11:33 AM mabi wrote: > > Hello, > > I saw this warning today in my fuse mount client log file: > > [2019-01-28 06:01:25.091232] W [fuse-bridge.c:565:fuse_entry_cbk] > 0-glusterfs-fuse: 530594537: LOOKUP() >
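
These limits come from the underlying filesystem/VFS and can be checked directly on a mount; a quick sketch (the mount path is hypothetical):

    # Check the effective limits on a gluster fuse mount
    getconf NAME_MAX /mnt/glusterfs   # bytes per filename component, typically 255
    getconf PATH_MAX /mnt/glusterfs   # bytes per full path, typically 4096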

Re: [Gluster-users] usage of harddisks: each hdd a brick? raid?

2019-01-09 Thread Serkan Çoban
We are also using 10TB disks; heal takes 7-8 days. You can play with the "cluster.shd-max-threads" setting. Its default is 1, I think; I am using it with 4. Below you can find more info: https://access.redhat.com/solutions/882233 On Thu, Jan 10, 2019 at 9:53 AM Hu Bert wrote: > > Hi Mike, > > > We
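
A sketch of the tuning mentioned above (the volume name is hypothetical; option defaults vary by gluster version):

    # Check the current self-heal thread count, then raise it
    gluster volume get myvol cluster.shd-max-threads
    gluster volume set myvol cluster.shd-max-threads 4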

Re: [Gluster-users] Can glusterd be restarted running on all nodes at once while clients are mounted?

2018-11-25 Thread Serkan Çoban
2500-3000 disks per cluster is the maximum usable limit; after that almost nothing works. We are using a 2700-disk cluster for cold storage with EC. Be careful with heal operations, I see 1 week / 8TB heal throughput... On Sun, Nov 25, 2018 at 6:16 PM Andreas Davour wrote: > > On Sun, 25 Nov 2018, Jeevan

Re: [Gluster-users] sharding in glusterfs

2018-09-17 Thread Serkan Çoban
Did you try a disperse volume? It may work for your workload, I think. We are using disperse volumes for archive workloads with 2GB files and I did not encounter any problems. On Mon, Sep 17, 2018 at 1:43 AM Ashayam Gupta wrote: > > Hi All, > > We are currently using glusterfs for storing large files
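
For reference, a minimal sketch of creating a dispersed volume for such a workload (hostnames and brick paths are hypothetical):

    # 4+2 dispersed volume: any 2 of the 6 bricks can be lost
    gluster volume create archivevol disperse 6 redundancy 2 \
      server{1..6}:/bricks/b1/brick
    gluster volume start archivevol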

Re: [Gluster-users] Previously replaced brick not coming up after reboot

2018-08-16 Thread Serkan Çoban
What is your gluster version? There was a bug in 3.10 where, when you reboot a node, some bricks may not come online, but it was fixed in later versions. On 8/16/18, Hu Bert wrote: > Hi there, > > 2 times i had to replace a brick on 2 different servers; replace went > fine, heal took very long but finally

Re: [Gluster-users] Expanding a distributed disperse volume: some questions about the action plan

2018-06-07 Thread Serkan Çoban
>in order to copy the data from the old volume to the new one, I need a third >machine that can mount both the volumes; it's possible? if yes, which gluster >client version should I use/install on the "bridge" machine? Yes, it is possible; you need to install the old client version on the bridge server,

Re: [Gluster-users] cluster of 3 nodes and san

2018-04-27 Thread Serkan Çoban
>but the Doubt is if I can use glusterfs with a san connected by FC? Yes, just format the volumes with xfs and you are ready to go. For replicas in different DCs, be careful about latency. What is the connection between DCs? It can be doable if latency is low. On Fri, Apr 27, 2018 at 4:02 PM, Ricky
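
A minimal sketch of preparing an FC LUN as a brick, assuming it shows up as /dev/sdb (device and paths are hypothetical):

    # Format the LUN with xfs; 512-byte inodes are the usual gluster recommendation
    mkfs.xfs -f -i size=512 /dev/sdb
    mkdir -p /bricks/b1
    mount /dev/sdb /bricks/b1
    mkdir /bricks/b1/brick   # use a subdirectory as the brick path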

[Gluster-users] Fwd: lstat & readlink calls during glusterfsd process startup

2018-04-16 Thread Serkan Çoban
(truncated strace -c summary) -- Forwarded message -- From: Serkan Çoban <cobanser...@gmail.com> Date: Mon, Apr 16, 2018 at 9:20 AM Subject: lstat & readlink calls during glusterfsd process startup To: Gluster Users <gluster-users@gluster.org>

[Gluster-users] lstat & readlink calls during glusterfsd process startup

2018-04-16 Thread Serkan Çoban
Hi all, I am on gluster 3.10.5 with one EC volume, 16+4. One of the machines went down the previous night and I just fixed it and powered it on. When the glusterfsd processes started they consumed all the CPU on the server. strace shows every process walking the bricks directory doing lstat & readlink calls.

Re: [Gluster-users] Tune and optimize dispersed cluster

2018-04-03 Thread Serkan Çoban
>Is there a way to tune, optimize a dispersed cluster to make it run better with small read/writes? You should not run disperse volumes with small IO. 2+1 disperse gains only 25% over replica 3 with arbiter... On Tue, Apr 3, 2018 at 10:25 AM, Marcus Pedersén wrote:

Re: [Gluster-users] Gluster performance / Dell Idrac enterprise conflict

2018-02-26 Thread Serkan Çoban
Idrac > Enterprise?? > > > On Thu, Feb 22, 2018 at 12:16 AM, Serkan Çoban <cobanser...@gmail.com> > wrote: >> >> "Did you check the BIOS/Power settings? They should be set for high >> performance. >> Also you can try to boot "intel_idle.max_cstate=0"

Re: [Gluster-users] NFS Ganesha HA w/ GlusterFS

2018-02-25 Thread Serkan Çoban
I would like to see the steps for reference; can you provide a link or just post them on the mailing list? On Mon, Feb 26, 2018 at 4:29 AM, TomK wrote: > Hey Guy's, > > A success story instead of a question. > > With your help, managed to get the HA component working with HAPROXY

Re: [Gluster-users] Gluster performance / Dell Idrac enterprise conflict

2018-02-21 Thread Serkan Çoban
"Did you check the BIOS/Power settings? They should be set for high performance. Also you can try to boot "intel_idle.max_cstate=0" kernel command line option to be sure CPUs not entering power saving states. On Thu, Feb 22, 2018 at 9:59 AM, Ryan Wilkinson wrote: > > > I have

Re: [Gluster-users] Are there any issues connecting a Gluster client v3.7.6 to a Gluster server v3.5.6?

2018-02-16 Thread Serkan Çoban
Old clients can talk to newer servers, but it is not recommended to use newer clients with old servers. On Fri, Feb 16, 2018 at 10:42 PM, Maya Estalilla wrote: > We have been running several Gluster servers using version 3.5.6 for some > time now without issue. We also have

Re: [Gluster-users] strange hostname issue on volume create command with famous Peer in Cluster state error message

2018-02-06 Thread Serkan Çoban
Did you do gluster peer probe? Check out the documentation: http://docs.gluster.org/en/latest/Administrator%20Guide/Storage%20Pools/ On Tue, Feb 6, 2018 at 5:01 PM, Ercan Aydoğan wrote: > Hello, > > i installed glusterfs 3.11.3 version 3 nodes ubuntu 16.04 machine. All
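
A minimal sketch of building the trusted pool before creating a volume (hostnames hypothetical):

    # Run from any one node; probe peers by the names used in volume create
    gluster peer probe node2
    gluster peer probe node3
    gluster peer status   # peers should show "Peer in Cluster (Connected)"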

Re: [Gluster-users] How to trigger a resync of a newly replaced empty brick in replicate config ?

2018-02-02 Thread Serkan Çoban
> Thanks. However "gluster v heal volname full" returned the following error >> > message >> > Commit failed on server4. Please check log file for details. >> > >> > I have checked the log files in /var/log/glusterfs on server4 (by grepping >

Re: [Gluster-users] How to trigger a resync of a newly replaced empty brick in replicate config ?

2018-02-01 Thread Serkan Çoban
You do not need to reset-brick if the brick path does not change. Replace the brick, format and mount it, then run gluster v start volname force. To start self-heal just run gluster v heal volname full. On Thu, Feb 1, 2018 at 6:39 PM, Alessandro Ipe wrote: > Hi, > > > My volume home
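
A hedged sketch of that procedure (device, volume and brick names are hypothetical; verify against your own layout first):

    # After swapping the failed disk: recreate the filesystem and remount
    mkfs.xfs -f -i size=512 /dev/sdX
    mount /dev/sdX /bricks/b1
    mkdir /bricks/b1/brick
    # Bring the brick process up on the empty brick, then trigger a full heal
    gluster volume start volname force
    gluster volume heal volname full
    gluster volume heal volname info   # watch progress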

Re: [Gluster-users] glusterfs development library

2018-01-15 Thread Serkan Çoban
You should try libgfapi: https://libgfapi-python.readthedocs.io/en/latest/ On Mon, Jan 15, 2018 at 9:01 PM, Marcin Dulak wrote: > Maybe consider extending the functionality of > http://docs.ansible.com/ansible/latest/gluster_volume_module.html? > > Best regards, > >

Re: [Gluster-users] GlusterFS healing questions

2017-11-09 Thread Serkan Çoban
Hi, You can set disperse.shd-max-threads to 2 or 4 in order to make heal faster. This makes my heal times 2-3x faster. Also you can play with disperse.self-heal-window-size to read more bytes at one time, but I did not test it. On Thu, Nov 9, 2017 at 4:47 PM, Xavi Hernandez
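
The options mentioned, as a sketch (volume name hypothetical; option availability and defaults depend on the gluster version):

    gluster volume set myvol disperse.shd-max-threads 4
    gluster volume set myvol disperse.self-heal-window-size 2   # blocks read per heal iteration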

Re: [Gluster-users] Gluster Scale Limitations

2017-10-30 Thread Serkan Çoban
Hi, After ~2500 bricks it takes too much time for bricks to come online after a reboot, so I think ~2500 bricks is an upper limit per cluster. I have two 40-node/19PiB clusters. They have only one big EC volume and are used for backup/archive purposes. On Tue, Oct 31, 2017 at 12:51 AM, Mayur Dewaikar

Re: [Gluster-users] Poor gluster performance on large files.

2017-10-30 Thread Serkan Çoban
>Can you please turn OFF client-io-threads as we have seen degradation of performance with io-threads ON on sequential read/writes, random read/writes. May I ask in which version this degradation happened? I tested 3.10 vs 3.12 performance a while ago and saw a 2-3x performance loss with 3.12. Is it

Re: [Gluster-users] gluster status

2017-10-13 Thread Serkan Çoban
Which disks/bricks are down: gluster v status vol1 | grep " N " Ongoing heals: gluster v heal info | grep "Number of entries" | grep -v "Number of entries: 0" On Fri, Oct 13, 2017 at 12:59 AM, Gandalf Corvotempesta wrote: > How can I show the current state of a
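
Written out (volume name hypothetical):

    # Bricks that are down (the "Online" column shows N)
    gluster v status vol1 | grep " N "
    # Only bricks with pending heal entries
    gluster v heal vol1 info | grep "Number of entries" | grep -v "Number of entries: 0"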

Re: [Gluster-users] EC 1+2

2017-09-23 Thread Serkan Çoban
m should be a power of 2 in m+n, where m is data and n is redundancy. On Sat, Sep 23, 2017 at 8:00 PM, Gandalf Corvotempesta wrote: > Already read that. > Seems that I have to use a multiple of 512, so 512*(3-2) is 512. > > Seems fine > > Il 23 set 2017 5:00 PM, "Dmitri

Re: [Gluster-users] how to calculate the ideal value for client.event-threads, server.event-threads and performance.io-thread-count?

2017-09-20 Thread Serkan Çoban
Defaults should be fine at your size. In big clusters I usually set event-threads to 4. On Mon, Sep 18, 2017 at 10:39 PM, Mauro Tridici wrote: > > Dear All, > > I just implemented a (6x(4+2)) DISTRIBUTED DISPERSED gluster (v.3.10) volume > based on the following hardware:

Re: [Gluster-users] how many hosts could be down in a 12x(4+2) distributed dispersed volume?

2017-09-20 Thread Serkan Çoban
If you add bricks to the existing volume, one host can be down in each three-host group. If you recreate the volume with one brick on each host, then two random host failures can be tolerated. Assume s1,s2,s3 are the current servers and you add s4,s5,s6 and extend the volume. If any two servers in each group go
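
A sketch of the two layouts with hypothetical hostnames; brick order on the command line determines the (4+2) groups:

    # Extending: the new group stays within s4,s5,s6 (2 bricks per host),
    # so only one host per group may be down; force is needed because the
    # disperse set spans fewer hosts than bricks
    gluster volume add-brick myvol \
      s4:/b/b1 s4:/b/b2 s5:/b/b1 s5:/b/b2 s6:/b/b1 s6:/b/b2 force
    # Recreating with one brick per host per group: any 2 of 6 hosts can fail
    gluster volume create myvol disperse 6 redundancy 2 \
      s{1..6}:/b/b1 s{1..6}:/b/b2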

Re: [Gluster-users] Glusterd proccess hangs on reboot

2017-09-14 Thread Serkan Çoban
I have the 100% CPU usage issue when I restart a glusterd instance, and I do not have null client errors in the log. The issue was related to the number of bricks/servers, so I decreased the brick count in the volume. That resolved the problem. On Thu, Sep 14, 2017 at 9:02 AM, Sam McLeod

Re: [Gluster-users] 3.10.5 vs 3.12.0 huge performance loss

2017-09-12 Thread Serkan Çoban
should give us clues about what could be > happening. > > On Tue, Sep 12, 2017 at 1:51 PM, Serkan Çoban <cobanser...@gmail.com> wrote: >> >> Hi, >> Servers are in production with 3.10.5, so I cannot provide 3.12 >> related informatio

Re: [Gluster-users] 3.10.5 vs 3.12.0 huge performance loss

2017-09-12 Thread Serkan Çoban
Hi, Servers are in production with 3.10.5, so I cannot provide 3.12-related information anymore. Thanks for the help, sorry for the inconvenience.

Re: [Gluster-users] Can I use 3.7.11 server with 3.10.5 client?

2017-09-08 Thread Serkan Çoban
Any suggestions? On Thu, Sep 7, 2017 at 4:35 PM, Serkan Çoban <cobanser...@gmail.com> wrote: > Hi, > > Is it safe to use 3.10.5 client with 3.7.11 server with read-only data > move operation? > Client will have 3.10.5 glusterfs-client packages. It will mount one > vo

[Gluster-users] Can I use 3.7.11 server with 3.10.5 client?

2017-09-07 Thread Serkan Çoban
Hi, Is it safe to use a 3.10.5 client with a 3.7.11 server for a read-only data move operation? The client will have 3.10.5 glusterfs-client packages. It will mount one volume from the 3.7.11 cluster and one from the 3.10.5 cluster. I will read from 3.7.11 and write to 3.10.5.

Re: [Gluster-users] 3.10.5 vs 3.12.0 huge performance loss

2017-09-06 Thread Serkan Çoban
It is sequential write with 2GB file size. The same behavior is observed with 3.11.3 too. On Thu, Sep 7, 2017 at 12:43 AM, Shyam Ranganathan <srang...@redhat.com> wrote: > On 09/06/2017 05:48 AM, Serkan Çoban wrote: >> >> Hi, >> >> Just do some ingestion tests to

[Gluster-users] 3.10.5 vs 3.12.0 huge performance loss

2017-09-06 Thread Serkan Çoban
Hi, Just did some ingestion tests to a 40-node 16+4 EC 19PB single volume. 100 clients are writing, each with 5 threads, 500 threads in total. With 3.10.5 each server has 800MB/s network traffic, cluster total is 32GB/s. With 3.12.0 each server has 200MB/s network traffic, cluster total is 8GB/s. I did not

Re: [Gluster-users] Glusterd proccess hangs on reboot

2017-09-05 Thread Serkan Çoban
Ok, I am going for 2x40 server clusters then, thanks for help. On Tue, Sep 5, 2017 at 4:57 PM, Atin Mukherjee <amukh...@redhat.com> wrote: > > > On Tue, Sep 5, 2017 at 6:13 PM, Serkan Çoban <cobanser...@gmail.com> wrote: >> >> Some corrections about the previo

Re: [Gluster-users] Glusterd proccess hangs on reboot

2017-09-05 Thread Serkan Çoban
/libglusterfs.so.0 #2 0x00409020 in main () On Mon, Sep 4, 2017 at 5:50 PM, Atin Mukherjee <amukh...@redhat.com> wrote: > > On Mon, 4 Sep 2017 at 20:04, Serkan Çoban <cobanser...@gmail.com> wrote: >> >> I have been using a 60 server 1560 brick 3.7.11 cluster with

Re: [Gluster-users] Glusterd proccess hangs on reboot

2017-09-04 Thread Serkan Çoban
It still happens without any volumes, so it is not related to brick count I think... On Mon, Sep 4, 2017 at 5:08 PM, Atin Mukherjee <amukh...@redhat.com> wrote: > > > On Mon, Sep 4, 2017 at 5:28 PM, Serkan Çoban <cobanser...@gmail.com> wrote: >> >> >1. On 80

Re: [Gluster-users] Glusterd proccess hangs on reboot

2017-09-04 Thread Serkan Çoban
>> Having the debuginfo package or a debug build helps to resolve the >> function names and/or line numbers. >> -- >> Milind >> >> >> >> On Thu, Aug 24, 2017 at 11:19 AM, Serkan Çoban <cobanser...@gmail.com> >> wrote: >>> >>>

Re: [Gluster-users] Glusterd proccess hangs on reboot

2017-09-03 Thread Serkan Çoban
I usually change event threads to 4, but those logs are from a default installation. On Sun, Sep 3, 2017 at 9:52 PM, Ben Turner <btur...@redhat.com> wrote: > - Original Message - >> From: "Ben Turner" <btur...@redhat.com> >> To: "Serkan Çoban"

Re: [Gluster-users] Glusterd proccess hangs on reboot

2017-09-02 Thread Serkan Çoban
Hi Milind, Anything new about the issue? Were you able to find the problem? Is there anything else you need? I will continue with two clusters of 40 servers each, so I will not be able to provide any further info for 80 servers. On Fri, Sep 1, 2017 at 10:30 AM, Serkan Çoban <cobanser...@gmail.com>

Re: [Gluster-users] Glusterd proccess hangs on reboot

2017-09-01 Thread Serkan Çoban
ce. > > Having the debuginfo package or a debug build helps to resolve the function > names and/or line numbers. > -- > Milind > > > > On Thu, Aug 24, 2017 at 11:19 AM, Serkan Çoban <cobanser...@gmail.com> > wrote: >> >> Here you can find 10 stack trace sample

Re: [Gluster-users] Glusterd proccess hangs on reboot

2017-08-31 Thread Serkan Çoban
Hi Gaurav, Any progress on the issue? On Tue, Aug 29, 2017 at 1:57 PM, Serkan Çoban <cobanser...@gmail.com> wrote: > glusterd returned to normal, here is the logs: > https://www.dropbox.com/s/41jx2zn3uizvr53/80servers_glusterd_normal_status.zip?dl=0 > > > On Tue, Au

Re: [Gluster-users] glfsheal-v0.log Too many open files

2017-08-30 Thread Serkan Çoban
Hi, Any clues where I can change the open file limit for the process writing glfsheal-v0.log? Which process writes to this file? Is it glusterd or glusterfsd or another process? On Tue, Aug 29, 2017 at 3:02 PM, Serkan Çoban <cobanser...@gmail.com> wrote: > Sorry, I send the mail to de

Re: [Gluster-users] glfsheal-v0.log Too many open files

2017-08-29 Thread Serkan Çoban
Sorry, I sent the mail to the devel group by mistake. Any help with the issue below? On Tue, Aug 29, 2017 at 3:00 PM, Serkan Çoban <cobanser...@gmail.com> wrote: > Hi, > > When I run gluster v heal v0 info, it gives "v0: Not able to fetch > volfile from glusterd" err

Re: [Gluster-users] Glusterd proccess hangs on reboot

2017-08-29 Thread Serkan Çoban
glusterd returned to normal; here are the logs: https://www.dropbox.com/s/41jx2zn3uizvr53/80servers_glusterd_normal_status.zip?dl=0 On Tue, Aug 29, 2017 at 1:47 PM, Serkan Çoban <cobanser...@gmail.com> wrote: > Here is the logs after stopping all three volumes and restarting > glu

Re: [Gluster-users] Glusterd proccess hangs on reboot

2017-08-29 Thread Serkan Çoban
provide the logs which led glusterd to hang for all the > cases along with gusterd process utilization. > > > Thanks > Gaurav > > > > > > > On Tue, Aug 29, 2017 at 2:44 PM, Serkan Çoban <cobanser...@gmail.com> wrote: >> >> Here is the requested logs:

Re: [Gluster-users] Glusterd proccess hangs on reboot

2017-08-29 Thread Serkan Çoban
nd-history-logs for these > scenarios: > Scenario1 : 20 servers > Scenario2 : 40 servers > Scenario3: 80 Servers > > > Thanks > Gaurav > > > > On Mon, Aug 28, 2017 at 11:22 AM, Serkan Çoban <cobanser...@gmail.com> > wrote: >> >> Hi Gaura

Re: [Gluster-users] Glusterd proccess hangs on reboot

2017-08-27 Thread Serkan Çoban
Hi Gaurav, Any progress on the problem? On Thursday, August 24, 2017, Serkan Çoban <cobanser...@gmail.com> wrote: > Thank you Gaurav, > Here is more findings: > Problem does not happen using only 20 servers each has 68 bricks. > (peer probe only 20 servers) > If we use 4

Re: [Gluster-users] Glusterd proccess hangs on reboot

2017-08-24 Thread Serkan Çoban
... On Thu, Aug 24, 2017 at 1:33 PM, Gaurav Yadav <gya...@redhat.com> wrote: > > I am working on it and will share my findings as soon as possible. > > > Thanks > Gaurav > > On Thu, Aug 24, 2017 at 3:58 PM, Serkan Çoban <cobanser...@gmail.com> wrote: >> &g

Re: [Gluster-users] Glusterd proccess hangs on reboot

2017-08-24 Thread Serkan Çoban
, Serkan Çoban <cobanser...@gmail.com> wrote: > Here you can find 10 stack trace samples from glusterd. I wait 10 > seconds between each trace. > https://www.dropbox.com/s/9f36goq5xn3p1yt/glusterd_pstack.zip?dl=0 > > Content of the first stack trace is here: > > Thread 8

Re: [Gluster-users] Glusterd proccess hangs on reboot

2017-08-23 Thread Serkan Çoban
ukherjee <amukh...@redhat.com> wrote: >> >> Not yet. Gaurav will be taking a look at it tomorrow. >> >> On Wed, 23 Aug 2017 at 20:14, Serkan Çoban <cobanser...@gmail.com> wrote: >>> >>> Hi Atin, >>> >>> Do you have time to check

Re: [Gluster-users] Glusterd proccess hangs on reboot

2017-08-23 Thread Serkan Çoban
Hi Atin, Do you have time to check the logs? On Wed, Aug 23, 2017 at 10:02 AM, Serkan Çoban <cobanser...@gmail.com> wrote: > Same thing happens with 3.12.rc0. This time perf top shows hanging in > libglusterfs.so and below is the glusterd logs, which are different > from 3.10

Re: [Gluster-users] Glusterd proccess hangs on reboot

2017-08-23 Thread Serkan Çoban
ae2c091b1] -->/usr/lib64/libgfrpc.so.0(rpcsvc_request_init+0x9c) [0x7f5ae2c0851c] -->/usr/lib64/libglusterfs.so.0(gf_client_ref+0x1a9) [0x7f5ae2ea3949] ) 0-client_t: null client [Invalid argument] On Tue, Aug 22, 2017 at 7:00 PM, Serkan Çoban <cobanser...@gmail.com> wrote: > I rebo

Re: [Gluster-users] Brick count limit in a volume

2017-08-22 Thread Serkan Çoban
This is the command line output: Total brick list is larger than a request. Can take (brick_count ) Usage: volume create [stripe ] [replica ] I am testing whether a big single volume will work for us. Now I am continuing testing with three volumes of 13PB each...

Re: [Gluster-users] Brick count limit in a volume

2017-08-22 Thread Serkan Çoban
Hi, I think this is the line limiting brick count: https://github.com/gluster/glusterfs/blob/c136024613c697fec87aaff3a070862b92c57977/cli/src/cli-cmd-parser.c#L84 Can gluster-devs increase this limit? Should I open a github issue? On Mon, Aug 21, 2017 at 7:01 PM, Serkan Çoban <coban

Re: [Gluster-users] Glusterd proccess hangs on reboot

2017-08-22 Thread Serkan Çoban
sterd to get into a infinite loop of traversing a peer/volume list and > CPU to hog up. Again this is a guess and I've not got a chance to take a > detail look at the logs and the strace output. > > I believe if you get to reboot the node again the problem will disappear. > > On Tue, 22 A

Re: [Gluster-users] Glusterd proccess hangs on reboot

2017-08-22 Thread Serkan Çoban
As an addition, perf top shows 80% in libc-2.12.so __strcmp_sse42 during the glusterd 100% CPU usage. Hope this helps... On Tue, Aug 22, 2017 at 2:41 PM, Serkan Çoban <cobanser...@gmail.com> wrote: > Hi there, > > I have a strange problem. > Gluster version in 3.10.5, I am testing ne

[Gluster-users] Glusterd proccess hangs on reboot

2017-08-22 Thread Serkan Çoban
Hi there, I have a strange problem. The gluster version is 3.10.5; I am testing new servers. The gluster configuration is 16+4 EC; I have three volumes, each with 1600 bricks. I can successfully create the cluster and volumes without any problems. I write data to the cluster from 100 clients for 12 hours

[Gluster-users] Brick count limit in a volume

2017-08-21 Thread Serkan Çoban
Hi, Gluster version is 3.10.5. I am trying to create a 5500-brick volume, but getting an error stating that bricks is the limit. Is this a known limit? Can I change this with an option? Thanks, Serkan

[Gluster-users] 3.10.4 packages are missing

2017-08-07 Thread Serkan Çoban
Hi, I cannot find gluster 3.10.4 packages in the CentOS repos. The 3.11 release is also missing. Can anyone fix this please?

Re: [Gluster-users] Multi petabyte gluster

2017-06-30 Thread Serkan Çoban
ed by matrix operations which scale as the square > of the number of data stripes. There are some savings because of larger > data chunks but we ended up using 8+3 and heal times are about half compared > to 16+3. > > -Alastair > > On 30 June 2017 at 02:22, Serkan Ço

Re: [Gluster-users] Multi petabyte gluster

2017-06-30 Thread Serkan Çoban
ear-cold > storage. > > > Anything, from your experience, to keep in mind while planning large > installations? > > > Sent from my Verizon, Samsung Galaxy smartphone > > ---- Original message > From: Serkan Çoban <cobanser...@gmail.com> >

Re: [Gluster-users] Multi petabyte gluster

2017-06-29 Thread Serkan Çoban
I am currently using a 10PB single volume without problems; 40PB is on the way. EC is working fine. You need to plan ahead with large installations like this. Do complete workload tests and make sure your use case is suitable for EC. On Wed, Jun 28, 2017 at 11:18 PM, Jason Kiebzak

Re: [Gluster-users] Heal operation detail of EC volumes

2017-06-01 Thread Serkan Çoban
r name I should look for? On Thu, Jun 1, 2017 at 10:30 AM, Xavier Hernandez <xhernan...@datalab.es> wrote: > Hi Serkan, > > On 30/05/17 10:22, Serkan Çoban wrote: >> >> Ok I understand that heal operation takes place on server side. In >> this case I should see X KB

Re: [Gluster-users] Heal operation detail of EC volumes

2017-05-30 Thread Serkan Çoban
s pick these tasks and execute it. That is when actual > read/write for > heal happens. > > > > From: "Serkan Çoban" <cobanser...@gmail.com> > To: "Ashish Pandey" <aspan...@redhat.com> > Cc: "Gluster Users" <gluster-users@gluster.org> >

Re: [Gluster-users] Heal operation detail of EC volumes

2017-05-29 Thread Serkan Çoban
HD process right? Does brick process has any role in EC calculations? On Mon, May 29, 2017 at 3:32 PM, Ashish Pandey <aspan...@redhat.com> wrote: > > > ____ > From: "Serkan Çoban" <cobanser...@gmail.com> > To: "Gluster User

[Gluster-users] Heal operation detail of EC volumes

2017-05-29 Thread Serkan Çoban
Hi, When a brick fails in EC, what is the healing read/write data path? Which processes do the operations? Assume a 2GB file is being healed in a 16+4 EC configuration. I was thinking that the SHD daemon on the failed brick's host will read 2GB from the network and reconstruct its 100MB chunk and write it on to

Re: [Gluster-users] Hash function

2017-05-28 Thread Serkan Çoban
Hashing is done on filenames, but each directory has its own hash range, so the same filename under different directories maps to different bricks. On Sun, May 28, 2017 at 1:00 PM, Stephen Remde wrote: > Hi all, > > Am I correct in thinking the hash used to determine the

Re: [Gluster-users] Bad perf for small files on large EC volume

2017-05-08 Thread Serkan Çoban
There are 300M files, right? Am I counting wrong? With that file profile I would never use EC in the first place. Maybe you can pack the files into tar archives or similar before migrating to gluster? It will take ages to heal a drive with that file count... On Mon, May 8, 2017 at 3:59 PM, Ingard

Re: [Gluster-users] disperse volume brick counts limits in RHES

2017-05-08 Thread Serkan Çoban
>What network do you have? We have 2x10G bonded interfaces on each server. Thanks to Xavier for the detailed explanation of EC. On Sat, May 6, 2017 at 2:20 AM, Alastair Neil <ajneil.t...@gmail.com> wrote: > What network do you have? > > > On 5 May 2017 at 09:51, S

Re: [Gluster-users] disperse volume brick counts limits in RHES

2017-05-05 Thread Serkan Çoban
...@redhat.com> wrote: >> >> >> >> On Fri, May 5, 2017 at 2:38 PM, Serkan Çoban <cobanser...@gmail.com> >> wrote: >>> >>> It is the over all time, 8TB data disk healed 2x faster in 8+2 >>> configuration. >> >> >> Wow, tha

Re: [Gluster-users] disperse volume brick counts limits in RHES

2017-05-05 Thread Serkan Çoban
It is the overall time; an 8TB data disk healed 2x faster in the 8+2 configuration. On Fri, May 5, 2017 at 10:00 AM, Pranith Kumar Karampuri <pkara...@redhat.com> wrote: > > > On Fri, May 5, 2017 at 11:42 AM, Serkan Çoban <cobanser...@gmail.com> wrote: >> >> Healing get

Re: [Gluster-users] disperse volume brick counts limits in RHES

2017-05-05 Thread Serkan Çoban
Healing gets slower as you increase m in an m+n configuration. We are using a 16+4 configuration without any problems other than heal speed. I tested heal speed with 8+2 and 16+4 on 3.9.0 and saw that heals on 8+2 are faster by 2x. On Fri, May 5, 2017 at 9:04 AM, Ashish Pandey

Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption

2017-04-27 Thread Serkan Çoban
I think this is the fix Gandalf is asking for: https://github.com/gluster/glusterfs/commit/6e3054b42f9aef1e35b493fbb002ec47e1ba27ce On Thu, Apr 27, 2017 at 2:03 PM, Pranith Kumar Karampuri wrote: > I am very positive about the two things I told you. These are the latest >

Re: [Gluster-users] [Gluster-devel] Announcing release 3.11 : Scope, schedule and feature tracking

2017-04-25 Thread Serkan Çoban
How does this affect CPU usage? Does it read the whole file and calculate a hash after it is written? Will this patch land in 3.10.x? On Tue, Apr 25, 2017 at 10:32 AM, Kotresh Hiremath Ravishankar wrote: > Hi > > https://github.com/gluster/glusterfs/issues/188 is merged in

Re: [Gluster-users] Add single server

2017-04-22 Thread Serkan Çoban
In EC, if you have an m+n configuration, you have to grow by m+n bricks. If you have 6+2 you need to add another 8 bricks. On Sat, Apr 22, 2017 at 3:02 PM, Gandalf Corvotempesta wrote: > I'm still trying to figure out if adding a single server to an > existing
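
A sketch for the 6+2 case (hostnames hypothetical); gluster rejects an add-brick that is not a multiple of the disperse set size:

    # 6+2 volume: bricks must be added 8 at a time, one full disperse set
    gluster volume add-brick myvol \
      n1:/b/new n2:/b/new n3:/b/new n4:/b/new \
      n5:/b/new n6:/b/new n7:/b/new n8:/b/new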

Re: [Gluster-users] supermicro 60/90 bay servers

2017-04-20 Thread Serkan Çoban
What is your use case? Disperse is good for archive workloads and big files. I suggest you buy 10 servers and use an 8+2 EC configuration. This way you can handle two node failures. We are using 28-disk servers but our next cluster will use 68-disk servers. On Thu, Apr 20, 2017 at 1:19 PM, Ingard

Re: [Gluster-users] How to Speed UP heal process in Glusterfs 3.10.1

2017-04-18 Thread Serkan Çoban
should be faster though. On Tue, Apr 18, 2017 at 1:38 PM, Gandalf Corvotempesta <gandalf.corvotempe...@gmail.com> wrote: > 2017-04-18 9:36 GMT+02:00 Serkan Çoban <cobanser...@gmail.com>: >> Nope, healing speed is 10MB/sec/brick, each brick heals with this >> speed, so one b

Re: [Gluster-users] How to Speed UP heal process in Glusterfs 3.10.1

2017-04-18 Thread Serkan Çoban
production environment it differs. > > > > On Tue, Apr 18, 2017 at 12:47 PM, Serkan Çoban <cobanser...@gmail.com> > wrote: > >> You can increase heal speed by running below command from a client: >> find /mnt/gluster -d -exec getfattr -h -n trusted.ec.heal
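
Written out, the client-side heal trigger looks like this (a sketch; requesting the trusted.ec.heal xattr from a client makes it reconstruct missing fragments as a side effect):

    # Walk the mount depth-first and request the heal xattr on every entry
    find /mnt/gluster -d -exec getfattr -h -n trusted.ec.heal {} \;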

Re: [Gluster-users] How to Speed UP heal process in Glusterfs 3.10.1

2017-04-18 Thread Serkan Çoban
speed is 10MB/sec/brick, each brick heals with this speed, so one brick or one server each will heal in one week... On Tue, Apr 18, 2017 at 10:20 AM, Gandalf Corvotempesta <gandalf.corvotempe...@gmail.com> wrote: > 2017-04-18 9:17 GMT+02:00 Serkan Çoban <cobanser...@gmail.com>: >>

Re: [Gluster-users] How to Speed UP heal process in Glusterfs 3.10.1

2017-04-18 Thread Serkan Çoban
when comparing to other disks in set. > > I saw in another thread you also had the issue with heal speed, did you face > any issue in reading data from rest of the good bricks in the set. like slow > read < KB/s. > > On Mon, Apr 17, 2017 at 2:05 PM, Serkan Çoban <cobanser..

Re: [Gluster-users] How to Speed UP heal process in Glusterfs 3.10.1

2017-04-17 Thread Serkan Çoban
Normally I see 8-10MB/sec/brick heal speed with gluster 3.7.11. I tested parallel heal for disperse with version 3.9.0 and saw that it increases the heal speed to 20-40MB/sec. I tested with shd-max-threads 2, 4 and 8, and saw that the best performance is achieved with 2 or 4 threads. You can try starting with 2

Re: [Gluster-users] Backups

2017-03-23 Thread Serkan Çoban
Assuming a backup window of 12 hours, you need to send data at 25GB/s to the backup solution. Using 10G Ethernet on the hosts, you need at least 25 hosts to handle 25GB/s. You can create an EC gluster cluster that can handle these rates, or you can just back up the valuable data from inside the VMs using open source
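
The arithmetic behind those numbers, assuming roughly 1PB of data to move (the total is implied, not stated):

    ~1 PB / (12 h x 3600 s/h) = ~1,000,000 GB / 43,200 s = ~24 GB/s
    25 GB/s / ~1 GB/s usable per 10GbE host = ~25 hosts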

Re: [Gluster-users] advice needed on configuring large gluster cluster

2017-03-15 Thread Serkan Çoban
Please find my comments inline. > Hi > > we have a new gluster cluster we are planning on deploying. We will have 24 > nodes each with JBOD, 39 8TB drives and 6, 900GB SSDs, and FDR IB > > We will not be using all of this as one volume , but I thought initially of > using a distributed disperse

Re: [Gluster-users] Maximum bricks per volume recommendation

2017-03-06 Thread Serkan Çoban
, 2017 at 11:54 AM, qingwei wei <tcheng...@gmail.com> wrote: > Hi Serkan, > > Thanks for the information. So 150 bricks should still be good. So > what number of bricks is consider excessive? > > Cw > > On Mon, Mar 6, 2017 at 3:14 PM, Serkan Çoban <cobanser...@

Re: [Gluster-users] Maximum bricks per volume recommendation

2017-03-05 Thread Serkan Çoban
Putting lots of bricks in a volume has side effects: slow metadata operations, slow gluster command execution, etc. But 150 bricks is not that much. On Mon, Mar 6, 2017 at 9:41 AM, qingwei wei wrote: > Hi, > > Is there hard limit on the maximum number of bricks per Gluster >

Re: [Gluster-users] Question about heterogeneous bricks

2017-02-21 Thread Serkan Çoban
I think gluster1 and gluster2 became a replica pair. The smallest size between them is the effective size (1GB). Same for gluster3 and gluster4 (3GB). Total 4GB space available. This is just a guess though... On Tue, Feb 21, 2017 at 1:18 PM, Daniele Antolini wrote: > Hi all, > > first

Re: [Gluster-users] Gluster Disks configuration

2017-02-18 Thread Serkan Çoban
AFAIK, LVM is needed only if you use snapshots. Other than that, you do not need it. You can use RAID if you don't mind the extra lost space. You can test both configs and then choose the right one for your workload. On Sat, Feb 18, 2017 at 10:03 PM, Mahdi Adnan wrote: >

Re: [Gluster-users] Advice for sizing a POC

2017-02-18 Thread Serkan Çoban
With 1GB file sizes you should definitely try JBOD with disperse volumes. Gluster can easily reach 1GB/s per-node network throughput using disperse volumes. We use 26 disks/node without problems and are planning to use 90 disks/node. I don't think you'll need SSD caching for a sequential read-heavy

Re: [Gluster-users] 90 Brick/Server suggestions?

2017-02-17 Thread Serkan Çoban
>Any particular reason for this, other than maximising space by avoiding two >layers of RAID/redundancy? Yes, that's right; we can get 720TB net usable space per server with 90x10TB disks. Any RAID layer would cost too much... On Fri, Feb 17, 2017 at 6:13 PM, Gambit15

Re: [Gluster-users] 90 Brick/Server suggestions?

2017-02-16 Thread Serkan Çoban
>We have 12 on order. Actually the DSS7000 has two nodes in the chassis, >and each accesses 45 bricks. We will be using an erasure code scheme >probably 24:3 or 24:4, we have not sat down and really thought about the >exact scheme we will use. If we cannot get 1 node/90 disk configuration, we

[Gluster-users] 90 Brick/Server suggestions?

2017-02-15 Thread Serkan Çoban
Hi, We are evaluating the Dell DSS7000 chassis with 90 disks. Has anyone used that many bricks per server? Any suggestions or advice? Thanks, Serkan

Re: [Gluster-users] Gluster anc balance-alb

2017-02-14 Thread Serkan Çoban
In balance-alb mode you should see nearly equal TX sizes, but something is wrong with your statistics. RX is balanced by intercepting the MAC address in ARP replies, so in theory, if you have enough clients, you should have equally balanced RX. I am also using balance-alb with 60 gluster servers and nearly

Re: [Gluster-users] Notice: https://download.gluster.org:/pub/gluster/glusterfs/LATEST has changed

2017-01-01 Thread Serkan Çoban
, if I upgrade to 3.9 and it gives problems is it safe to downgrade to 3.7.11? Any suggestions? On Sat, Nov 19, 2016 at 9:12 PM, Serkan Çoban <cobanser...@gmail.com> wrote: > Hi, > > Sorry for late reply. I think I will wait for 3.10 LTS release to try > it. I am on 3.7.11 and it is

Re: [Gluster-users] Dispersed volume and auto-heal

2016-12-07 Thread Serkan Çoban
No, you should replace the brick. On Wed, Dec 7, 2016 at 1:02 PM, Cedric Lemarchand wrote: > Hello, > > Is gluster able to auto-heal when some bricks are lost ? by auto-heal I mean > that losted parity are re-generated on bricks that are still available in > order to

Re: [Gluster-users] DISPERSED VOLUME

2016-11-25 Thread Serkan Çoban
I think you should try with a bigger file: 1, 10, 100, 1000KB? Small files might just be getting replicated to bricks... (Just a guess.) On Fri, Nov 25, 2016 at 12:41 PM, Alexandre Blanca wrote: > Hi, > > I am a beginner in distributed file systems and I currently work on >

Re: [Gluster-users] Notice: https://download.gluster.org:/pub/gluster/glusterfs/LATEST has changed

2016-11-19 Thread Serkan Çoban
Hi, Sorry for the late reply. I think I will wait for the 3.10 LTS release to try it. I am on 3.7.11 and it is very stable for us. On Thu, Nov 17, 2016 at 1:05 PM, Pranith Kumar Karampuri <pkara...@redhat.com> wrote: > > > On Wed, Nov 16, 2016 at 11:47 PM, Serkan Çoban <cobanser...@

Re: [Gluster-users] question about glusterfs version migrate

2016-11-16 Thread Serkan Çoban
The link below has the changes in each release. https://github.com/gluster/glusterfs/tree/release-3.7/doc/release-notes On Wed, Nov 16, 2016 at 11:49 AM, songxin wrote: > Hi, > I am planning to migrate from gluster 3.7.6 to gluster 3.7.10. > So I have two questions below. > 1.How

Re: [Gluster-users] Looking for use cases / opinions

2016-11-09 Thread Serkan Çoban
or zfs and do you have and SSDs in there? > > On 9 November 2016 at 06:17, Serkan Çoban <cobanser...@gmail.com> wrote: >> >> Hi, I am using 26x8TB disks per server. There are 60 servers in gluster >> cluster. >> Each disk is a brick and configuration is 16+4 E

Re: [Gluster-users] Looking for use cases / opinions

2016-11-09 Thread Serkan Çoban
Hi, I am using 26x8TB disks per server. There are 60 servers in the gluster cluster. Each disk is a brick and the configuration is 16+4 EC, a 9PB single volume. Clients are using fuse mounts. Even with 1-2K files in a directory, ls from clients takes ~60 secs. So if you are sensitive to metadata operations,
