Medicine
> E: d...@med.cornell.edu
> O: 212-746-6305
> F: 212-746-8690
>
>
> From: Serkan Çoban
> Sent: Thursday, February 13, 2020 12:38 PM
> To: Douglas Duckworth
> Cc: gluster-users@gluster.org
> Subject: [EXTERNAL] Re: [Glus
Do not use EC with small files. You cannot tolerate losing a 300TB
brick; reconstruction will take ages. When I was using glusterfs, EC
reconstruction speed was 10-15MB/sec. If you do not lose bricks
you will be ok.
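As a rough sanity check, using the 10-15MB/sec figure above against a 300TB brick (back-of-envelope only; real heal time depends on file count and load):

```shell
# Worst-case reconstruction time for a lost 300TB brick at 15 MB/s.
# Integer arithmetic is close enough for an order-of-magnitude estimate.
brick_mb=$(( 300 * 1000 * 1000 ))   # 300 TB expressed in MB
rate_mbs=15                         # observed EC heal throughput, MB/s
secs=$(( brick_mb / rate_mbs ))
echo "$(( secs / 86400 )) days"     # roughly 231 days at 15 MB/s
```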
On Thu, Feb 13, 2020 at 7:38 PM Douglas Duckworth
wrote:
>
> Hello
>
> I am
Filename max is 255 bytes, path name max is 4096 bytes.
On Mon, Jan 28, 2019 at 11:33 AM mabi wrote:
>
> Hello,
>
> I saw this warning today in my fuse mount client log file:
>
> [2019-01-28 06:01:25.091232] W [fuse-bridge.c:565:fuse_entry_cbk]
> 0-glusterfs-fuse: 530594537: LOOKUP()
>
We are also using 10TB disks; heal takes 7-8 days.
You can play with the "cluster.shd-max-threads" setting. It defaults
to 1, I think. I am using it with 4.
Below you can find more info:
https://access.redhat.com/solutions/882233
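The setting mentioned above is applied per volume; a sketch (the volume name is a placeholder, and `volume get` needs a reasonably recent gluster):

```shell
# Raise the number of self-heal daemon threads from the default (1):
gluster volume set volname cluster.shd-max-threads 4
# Verify the value took effect:
gluster volume get volname cluster.shd-max-threads
```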
On Thu, Jan 10, 2019 at 9:53 AM Hu Bert wrote:
>
> Hi Mike,
>
> > We
2500-3000 disks per cluster is the maximum usable limit; after that
almost nothing works.
We are using a 2700-disk cluster for cold storage with EC.
Be careful with heal operations: I see about 1 week per 8TB of heal throughput...
On Sun, Nov 25, 2018 at 6:16 PM Andreas Davour wrote:
>
> On Sun, 25 Nov 2018, Jeevan
Did you try a disperse volume? I think it may work for your workload.
We are using disperse volumes for archive workloads with 2GB files and
I did not encounter any problems.
On Mon, Sep 17, 2018 at 1:43 AM Ashayam Gupta
wrote:
>
> Hi All,
>
> We are currently using glusterfs for storing large files
What is your gluster version? There was a bug in 3.10 where, after a
node reboot, some bricks may not come online, but it is fixed in later versions.
On 8/16/18, Hu Bert wrote:
> Hi there,
>
> 2 times i had to replace a brick on 2 different servers; replace went
> fine, heal took very long but finally
>in order to copy the data from the old volume to the new one, I need a third
>machine that can mount both the volumes; it's possible? if yes, which gluster
>client version should I use/install on the "bridge" machine?
Yes, it is possible; you need to install the old client version on the
bridge server,
>but the Doubt is if I can use glusterfs with a san connected by FC?
Yes, just format the volumes with xfs and you are ready to go.
For a replica across different DCs, be careful about latency. What is
the connection between the DCs?
It can be doable if the latency is low.
On Fri, Apr 27, 2018 at 4:02 PM, Ricky
360545 15 select
...
-- Forwarded message --
From: Serkan Çoban <cobanser...@gmail.com>
Date: Mon, Apr 16, 2018 at 9:20 AM
Subject: lstat & readlink calls during glusterfsd process startup
To: Gluster Users <gluster-users@gluster.org>
Hi all,
I am on gluster 3.10.5 with one EC volume, 16+4.
One of the machines went down the previous night; I just fixed it and powered it on.
When the glusterfsd processes started they consumed all CPU on the server.
strace shows every process walking the brick directories doing
lstat & readlink calls.
>Is there a way to tune, optimize a dispersed cluster to make it
run better with small read/writes?
You should not run disperse volumes with small IO.
A 2+1 disperse volume only saves 25% raw space over replica 3 with arbiter...
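One way to read the 25% figure (my interpretation, not spelled out in the thread): replica 3 with arbiter stores roughly 2x raw per usable byte, while 2+1 disperse stores (2+1)/2 = 1.5x. A quick integer-arithmetic check:

```shell
# Raw-storage overhead per usable byte, scaled by 100 for integer math:
replica_arbiter=200                       # ~2.0x (two data copies + small arbiter)
disperse_2_1=$(( (2 + 1) * 100 / 2 ))     # (m+n)/m for 2+1 = 150, i.e. 1.5x
savings=$(( (replica_arbiter - disperse_2_1) * 100 / replica_arbiter ))
echo "${savings}%"                        # 25% less raw space for the same usable capacity
```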
On Tue, Apr 3, 2018 at 10:25 AM, Marcus Pedersén wrote:
Idrac
> Enterprise??
>
>
> On Thu, Feb 22, 2018 at 12:16 AM, Serkan Çoban <cobanser...@gmail.com>
> wrote:
>>
>> "Did you check the BIOS/Power settings? They should be set for high
>> performance.
>> Also you can try to boot "intel_idle.max_cstate=0"
I would like to see the steps for reference; can you provide a link or
just post them to the mailing list?
On Mon, Feb 26, 2018 at 4:29 AM, TomK wrote:
> Hey Guy's,
>
> A success story instead of a question.
>
> With your help, managed to get the HA component working with HAPROXY
Did you check the BIOS/Power settings? They should be set for high performance.
Also you can try booting with the "intel_idle.max_cstate=0" kernel command line
option to be sure CPUs are not entering power-saving states.
On Thu, Feb 22, 2018 at 9:59 AM, Ryan Wilkinson wrote:
>
>
> I have
Old clients can talk to a new server, but it is not recommended to use
newer clients with an old server.
On Fri, Feb 16, 2018 at 10:42 PM, Maya Estalilla
wrote:
> We have been running several Gluster servers using version 3.5.6 for some
> time now without issue. We also have
Did you run gluster peer probe? Check out the documentation:
http://docs.gluster.org/en/latest/Administrator%20Guide/Storage%20Pools/
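A minimal sketch of forming the trusted pool (hostnames are placeholders for your environment):

```shell
# From one node, probe the others to form the trusted storage pool:
gluster peer probe server2
gluster peer probe server3
# All peers should show "Peer in Cluster (Connected)":
gluster peer status
```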
On Tue, Feb 6, 2018 at 5:01 PM, Ercan Aydoğan wrote:
> Hello,
>
> i installed glusterfs 3.11.3 version 3 nodes ubuntu 16.04 machine. All
> Thanks. However "gluster v heal volname full" returned the following error
>> > message
>> > Commit failed on server4. Please check log file for details.
>> >
>> > I have checked the log files in /var/log/glusterfs on server4 (by grepping
>
You do not need reset-brick if the brick path does not change. Replace
the disk, format and mount it, then run gluster v start volname force.
To start self-heal just run gluster v heal volname full.
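The steps above as a shell sketch (device, mount point, and volume name are placeholders; adapt to your layout):

```shell
mkfs.xfs -f /dev/sdX                  # format the replacement disk
mount /dev/sdX /bricks/brick1         # mount it at the original brick path
gluster volume start volname force    # restart the brick processes
gluster volume heal volname full      # kick off a full self-heal
```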
On Thu, Feb 1, 2018 at 6:39 PM, Alessandro Ipe wrote:
> Hi,
>
>
> My volume home
You should try libgfapi: https://libgfapi-python.readthedocs.io/en/latest/
On Mon, Jan 15, 2018 at 9:01 PM, Marcin Dulak wrote:
> Maybe consider extending the functionality of
> http://docs.ansible.com/ansible/latest/gluster_volume_module.html?
>
> Best regards,
>
>
Hi,
You can set disperse.shd-max-threads to 2 or 4 in order to make heal
faster. This made my heal times 2-3x faster.
You can also play with disperse.self-heal-window-size to read more
bytes at a time, but I did not test that.
On Thu, Nov 9, 2017 at 4:47 PM, Xavi Hernandez
Hi. After ~2500 bricks it takes too long for bricks to come online
after a reboot, so I think ~2500 bricks is an upper limit per cluster.
I have two 40-node/19PiB clusters. They have only one big EC volume
and are used for backup/archive purposes.
On Tue, Oct 31, 2017 at 12:51 AM, Mayur Dewaikar
>Can you please turn OFF client-io-threads as we have seen degradation of
performance with io-threads ON on sequential read/writes, random
read/writes.
May I ask in which version this degradation happened? I tested 3.10 vs 3.12
performance a while ago and saw a 2-3x performance loss with 3.12. Is it
Which disks/bricks are down: gluster v status vol1 | grep " N "
Ongoing heals: gluster v heal info | grep "Number of entries" | grep
-v "Number of entries: 0"
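Written out as a block (vol1 from the first command is assumed for both; substitute your volume name):

```shell
# Bricks that are offline ("N" in the Online column):
gluster volume status vol1 | grep " N "
# Bricks with heals still pending:
gluster volume heal vol1 info | grep "Number of entries" | grep -v "Number of entries: 0"
```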
On Fri, Oct 13, 2017 at 12:59 AM, Gandalf Corvotempesta
wrote:
> How can I show the current state of a
m should be a power of 2 in m+n, where m is data and n is redundancy.
On Sat, Sep 23, 2017 at 8:00 PM, Gandalf Corvotempesta
wrote:
> Already read that.
> Seems that I have to use a multiple of 512, so 512*(3-2) is 512.
>
> Seems fine
>
> Il 23 set 2017 5:00 PM, "Dmitri
Defaults should be fine at your size. In big clusters I usually set
event-threads to 4.
On Mon, Sep 18, 2017 at 10:39 PM, Mauro Tridici wrote:
>
> Dear All,
>
> I just implemented a (6x(4+2)) DISTRIBUTED DISPERSED gluster (v.3.10) volume
> based on the following hardware:
If you add bricks to the existing volume, one host can be down in each
three-host group. If you recreate the volume with one brick on each
host, then two random host failures can be tolerated.
Assume s1,s2,s3 are the current servers and you add s4,s5,s6 and extend
the volume. If any two servers in each group go
I have the 100% CPU usage issue when I restart a glusterd instance, and
I do not have null client errors in the log.
The issue was related to the number of bricks/servers, so I decreased
the brick count in the volume. That resolved the problem.
On Thu, Sep 14, 2017 at 9:02 AM, Sam McLeod
should give us clues about what could be
> happening.
>
> On Tue, Sep 12, 2017 at 1:51 PM, Serkan Çoban <cobanser...@gmail.com> wrote:
>>
>> Hi,
>> Servers are in production with 3.10.5, so I cannot provide 3.12
>> related informatio
Hi,
Servers are in production with 3.10.5, so I cannot provide 3.12
related information anymore.
Thanks for help, sorry for inconvenience.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
Any suggestions?
On Thu, Sep 7, 2017 at 4:35 PM, Serkan Çoban <cobanser...@gmail.com> wrote:
> Hi,
>
> Is it safe to use 3.10.5 client with 3.7.11 server with read-only data
> move operation?
> Client will have 3.10.5 glusterfs-client packages. It will mount one
> vo
Hi,
Is it safe to use 3.10.5 client with 3.7.11 server with read-only data
move operation?
Client will have 3.10.5 glusterfs-client packages. It will mount one
volume from 3.7.11 cluster and one from 3.10.5 cluster. I will read
from 3.7.11 and write to 3.10.5.
It is a sequential write with a 2GB file size. The same behavior is
observed with 3.11.3 too.
On Thu, Sep 7, 2017 at 12:43 AM, Shyam Ranganathan <srang...@redhat.com> wrote:
> On 09/06/2017 05:48 AM, Serkan Çoban wrote:
>>
>> Hi,
>>
>> Just do some ingestion tests to
Hi,
Just did some ingestion tests on a 40-node 16+4 EC 19PB single volume.
100 clients are writing, each with 5 threads, 500 threads in total.
With 3.10.5 each server has 800MB/s network traffic, cluster total 32GB/s.
With 3.12.0 each server has 200MB/s network traffic, cluster total 8GB/s.
I did not
Ok, I am going with 2x40-server clusters then; thanks for the help.
On Tue, Sep 5, 2017 at 4:57 PM, Atin Mukherjee <amukh...@redhat.com> wrote:
>
>
> On Tue, Sep 5, 2017 at 6:13 PM, Serkan Çoban <cobanser...@gmail.com> wrote:
>>
>> Some corrections about the previo
/libglusterfs.so.0
#2 0x00409020 in main ()
On Mon, Sep 4, 2017 at 5:50 PM, Atin Mukherjee <amukh...@redhat.com> wrote:
>
> On Mon, 4 Sep 2017 at 20:04, Serkan Çoban <cobanser...@gmail.com> wrote:
>>
>> I have been using a 60 server 1560 brick 3.7.11 cluster with
is still happening without any volumes. So it is not
related to brick count, I think...
On Mon, Sep 4, 2017 at 5:08 PM, Atin Mukherjee <amukh...@redhat.com> wrote:
>
>
> On Mon, Sep 4, 2017 at 5:28 PM, Serkan Çoban <cobanser...@gmail.com> wrote:
>>
>> >1. On 80
>> Having the debuginfo package or a debug build helps to resolve the
>> function names and/or line numbers.
>> --
>> Milind
>>
>>
>>
>> On Thu, Aug 24, 2017 at 11:19 AM, Serkan Çoban <cobanser...@gmail.com>
>> wrote:
>>>
>>>
I usually change event threads to 4, but those logs are from a default
installation.
On Sun, Sep 3, 2017 at 9:52 PM, Ben Turner <btur...@redhat.com> wrote:
> - Original Message -
>> From: "Ben Turner" <btur...@redhat.com>
>> To: "Serkan Çoban"
Hi Milind,
Anything new about the issue? Were you able to find the problem? Is
there anything else you need?
I will continue with two clusters of 40 servers each, so I will not be
able to provide any further info for 80 servers.
On Fri, Sep 1, 2017 at 10:30 AM, Serkan Çoban <cobanser...@gmail.com>
ce.
>
> Having the debuginfo package or a debug build helps to resolve the function
> names and/or line numbers.
> --
> Milind
>
>
>
> On Thu, Aug 24, 2017 at 11:19 AM, Serkan Çoban <cobanser...@gmail.com>
> wrote:
>>
>> Here you can find 10 stack trace sample
Hi Gaurav,
Any improvement about the issue?
On Tue, Aug 29, 2017 at 1:57 PM, Serkan Çoban <cobanser...@gmail.com> wrote:
> glusterd returned to normal, here is the logs:
> https://www.dropbox.com/s/41jx2zn3uizvr53/80servers_glusterd_normal_status.zip?dl=0
>
>
> On Tue, Au
Hi,
Any clues as to where I can change the open-file limit for the process
writing glfsheal-v0.log?
Which process writes to this file? Is it glusterd, glusterfsd, or
another process?
On Tue, Aug 29, 2017 at 3:02 PM, Serkan Çoban <cobanser...@gmail.com> wrote:
> Sorry, I send the mail to de
Sorry, I sent the mail to the devel group by mistake.
Any help with the issue below?
On Tue, Aug 29, 2017 at 3:00 PM, Serkan Çoban <cobanser...@gmail.com> wrote:
> Hi,
>
> When I run gluster v heal v0 info, it gives "v0: Not able to fetch
> volfile from glusterd" err
glusterd returned to normal, here is the logs:
https://www.dropbox.com/s/41jx2zn3uizvr53/80servers_glusterd_normal_status.zip?dl=0
On Tue, Aug 29, 2017 at 1:47 PM, Serkan Çoban <cobanser...@gmail.com> wrote:
> Here is the logs after stopping all three volumes and restarting
> glu
provide the logs which led glusterd to hang for all the
> cases along with gusterd process utilization.
>
>
> Thanks
> Gaurav
>
>
>
>
>
>
> On Tue, Aug 29, 2017 at 2:44 PM, Serkan Çoban <cobanser...@gmail.com> wrote:
>>
>> Here is the requested logs:
nd-history-logs for these
> scenarios:
> Scenario1 : 20 servers
> Scenario2 : 40 servers
> Scenario3: 80 Servers
>
>
> Thanks
> Gaurav
>
>
>
> On Mon, Aug 28, 2017 at 11:22 AM, Serkan Çoban <cobanser...@gmail.com>
> wrote:
>>
>> Hi Gaura
Hi Gaurav,
Any progress about the problem?
On Thursday, August 24, 2017, Serkan Çoban <cobanser...@gmail.com> wrote:
> Thank you Gaurav,
> Here is more findings:
> Problem does not happen using only 20 servers each has 68 bricks.
> (peer probe only 20 servers)
> If we use 4
...
On Thu, Aug 24, 2017 at 1:33 PM, Gaurav Yadav <gya...@redhat.com> wrote:
>
> I am working on it and will share my findings as soon as possible.
>
>
> Thanks
> Gaurav
>
> On Thu, Aug 24, 2017 at 3:58 PM, Serkan Çoban <cobanser...@gmail.com> wrote:
>>
&g
, Serkan Çoban <cobanser...@gmail.com> wrote:
> Here you can find 10 stack trace samples from glusterd. I wait 10
> seconds between each trace.
> https://www.dropbox.com/s/9f36goq5xn3p1yt/glusterd_pstack.zip?dl=0
>
> Content of the first stack trace is here:
>
> Thread 8
ukherjee <amukh...@redhat.com> wrote:
>>
>> Not yet. Gaurav will be taking a look at it tomorrow.
>>
>> On Wed, 23 Aug 2017 at 20:14, Serkan Çoban <cobanser...@gmail.com> wrote:
>>>
>>> Hi Atin,
>>>
>>> Do you have time to check
Hi Atin,
Do you have time to check the logs?
On Wed, Aug 23, 2017 at 10:02 AM, Serkan Çoban <cobanser...@gmail.com> wrote:
> Same thing happens with 3.12.rc0. This time perf top shows hanging in
> libglusterfs.so and below is the glusterd logs, which are different
> from 3.10
ae2c091b1] -->/usr/lib64/libgfrpc.so.0(rpcsvc_request_init+0x9c)
[0x7f5ae2c0851c] -->/usr/lib64/libglusterfs.so.0(gf_client_ref+0x1a9)
[0x7f5ae2ea3949] ) 0-client_t: null client [Invalid argument]
On Tue, Aug 22, 2017 at 7:00 PM, Serkan Çoban <cobanser...@gmail.com> wrote:
> I rebo
This is the command line output:
Total brick list is larger than a request. Can take (brick_count )
Usage: volume create [stripe ] [replica ]
I am testing if a big single volume will work for us. Now I am
continuing testing with three volumes each 13PB...
Hi, I think this is the line limiting the brick count:
https://github.com/gluster/glusterfs/blob/c136024613c697fec87aaff3a070862b92c57977/cli/src/cli-cmd-parser.c#L84
Can the gluster devs increase this limit? Should I open a github issue?
On Mon, Aug 21, 2017 at 7:01 PM, Serkan Çoban <coban
sterd to get into an infinite loop of traversing a peer/volume list and
> CPU to hog up. Again this is a guess and I've not got a chance to take a
> detail look at the logs and the strace output.
>
> I believe if you get to reboot the node again the problem will disappear.
>
> On Tue, 22 A
As an addition, perf top shows 80% in libc-2.12.so __strcmp_sse42
during the glusterd 100% CPU usage.
Hope this helps...
On Tue, Aug 22, 2017 at 2:41 PM, Serkan Çoban <cobanser...@gmail.com> wrote:
> Hi there,
>
> I have a strange problem.
> Gluster version in 3.10.5, I am testing ne
Hi there,
I have a strange problem.
Gluster version is 3.10.5; I am testing new servers. The gluster
configuration is 16+4 EC; I have three volumes, each with 1600 bricks.
I can successfully create the cluster and volumes without any
problems. I write data to the cluster from 100 clients for 12 hours
Hi,
Gluster version is 3.10.5. I am trying to create a 5500 brick volume,
but getting an error stating that bricks is the limit. Is this a
known limit? Can I change this with an option?
Thanks,
Serkan
Hi,
I cannot find gluster 3.10.4 packages in the CentOS repos. The 3.11
release is also missing. Can anyone fix this, please?
ed by matrix operations which scale as the square
> of the number of data stripes. There are some savings because of larger
> data chunks but we ended up using 8+3 and heal times are about half compared
> to 16+3.
>
> -Alastair
>
> On 30 June 2017 at 02:22, Serkan Ço
ear-cold
> storage.
>
>
> Anything, from your experience, to keep in mind while planning large
> installations?
>
>
> Sent from my Verizon, Samsung Galaxy smartphone
>
> ---- Original message
> From: Serkan Çoban <cobanser...@gmail.com>
>
I am currently using 10PB single volume without problems. 40PB is on
the way. EC is working fine.
You need to plan ahead with large installations like this. Do complete
workload tests and make sure your use case is suitable for EC.
On Wed, Jun 28, 2017 at 11:18 PM, Jason Kiebzak
r name I should look for?
On Thu, Jun 1, 2017 at 10:30 AM, Xavier Hernandez <xhernan...@datalab.es> wrote:
> Hi Serkan,
>
> On 30/05/17 10:22, Serkan Çoban wrote:
>>
>> Ok I understand that heal operation takes place on server side. In
>> this case I should see X KB
s pick these tasks and execute it. That is when actual
> read/write for
> heal happens.
>
>
>
> From: "Serkan Çoban" <cobanser...@gmail.com>
> To: "Ashish Pandey" <aspan...@redhat.com>
> Cc: "Gluster Users" <gluster-users@gluster.org>
>
HD process right?
Does the brick process have any role in EC calculations?
On Mon, May 29, 2017 at 3:32 PM, Ashish Pandey <aspan...@redhat.com> wrote:
>
>
> ____
> From: "Serkan Çoban" <cobanser...@gmail.com>
> To: "Gluster User
Hi,
When a brick fails in EC, What is the healing read/write data path?
Which processes do the operations?
Assume a 2GB file is being healed in a 16+4 EC configuration. I was
thinking that the SHD daemon on the failed brick's host will read 2GB
from the network, reconstruct its 100MB chunk, and write it on to
Hashing is done on filenames, but each directory has its own hash
range, so the same filename under different directories maps to
different bricks.
On Sun, May 28, 2017 at 1:00 PM, Stephen Remde
wrote:
> Hi all,
>
> Am I correct in thinking the hash used to determine the
There are 300M files, right? Am I counting wrong?
With that file profile I would never use EC in the first place.
Maybe you can pack the files into tar archives or similar before
migrating to gluster?
It will take ages to heal a drive with that file count...
On Mon, May 8, 2017 at 3:59 PM, Ingard
>What network do you have?
We have 2X10G bonded interfaces on each server.
Thanks to Xavier for detailed explanation of EC details.
On Sat, May 6, 2017 at 2:20 AM, Alastair Neil <ajneil.t...@gmail.com> wrote:
> What network do you have?
>
>
> On 5 May 2017 at 09:51, S
...@redhat.com> wrote:
>>
>>
>>
>> On Fri, May 5, 2017 at 2:38 PM, Serkan Çoban <cobanser...@gmail.com>
>> wrote:
>>>
>>> It is the over all time, 8TB data disk healed 2x faster in 8+2
>>> configuration.
>>
>>
>> Wow, tha
It is the overall time; the 8TB data disk healed 2x faster in the 8+2 configuration.
On Fri, May 5, 2017 at 10:00 AM, Pranith Kumar Karampuri
<pkara...@redhat.com> wrote:
>
>
> On Fri, May 5, 2017 at 11:42 AM, Serkan Çoban <cobanser...@gmail.com> wrote:
>>
>> Healing get
Healing gets slower as you increase m in an m+n configuration.
We are using 16+4 without any problems other than heal speed.
I tested heal speed with 8+2 and 16+4 on 3.9.0 and saw that heals on
8+2 are faster by 2x.
On Fri, May 5, 2017 at 9:04 AM, Ashish Pandey
I think this is the fix Gandalf is asking for:
https://github.com/gluster/glusterfs/commit/6e3054b42f9aef1e35b493fbb002ec47e1ba27ce
On Thu, Apr 27, 2017 at 2:03 PM, Pranith Kumar Karampuri
wrote:
> I am very positive about the two things I told you. These are the latest
>
How does this affect CPU usage? Does it read the whole file and
calculate a hash after it is written?
Will this patch land in 3.10.x?
On Tue, Apr 25, 2017 at 10:32 AM, Kotresh Hiremath Ravishankar
wrote:
> Hi
>
> https://github.com/gluster/glusterfs/issues/188 is merged in
In EC, if you have an m+n configuration, you have to grow by m+n bricks.
If you have 6+2 you need to add another 8 bricks.
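For example, growing a 6+2 disperse volume by one full stripe of 8 bricks (hostnames and brick paths are placeholders):

```shell
# A 6+2 volume must grow by 8 bricks at a time (one whole disperse set):
gluster volume add-brick volname \
  s1:/bricks/b2 s2:/bricks/b2 s3:/bricks/b2 s4:/bricks/b2 \
  s5:/bricks/b2 s6:/bricks/b2 s7:/bricks/b2 s8:/bricks/b2
# Spread existing data onto the new set:
gluster volume rebalance volname start
```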
On Sat, Apr 22, 2017 at 3:02 PM, Gandalf Corvotempesta
wrote:
> I'm still trying to figure out if adding a single server to an
> existing
What is your use case? Disperse is good for archive workloads and big files.
I suggest you buy 10 servers and use an 8+2 EC configuration. This way
you can handle two node failures. We are using 28-disk servers, but our
next cluster will use 68-disk servers.
On Thu, Apr 20, 2017 at 1:19 PM, Ingard
should be faster though.
On Tue, Apr 18, 2017 at 1:38 PM, Gandalf Corvotempesta
<gandalf.corvotempe...@gmail.com> wrote:
> 2017-04-18 9:36 GMT+02:00 Serkan Çoban <cobanser...@gmail.com>:
>> Nope, healing speed is 10MB/sec/brick, each brick heals with this
>> speed, so one b
production environment it differs.
>
>
>
> On Tue, Apr 18, 2017 at 12:47 PM, Serkan Çoban <cobanser...@gmail.com>
> wrote:
>
>> You can increase heal speed by running below command from a client:
>> find /mnt/gluster -d -exec getfattr -h -n trusted.ec.heal
speed is 10MB/sec/brick, each brick heals with this
speed, so one brick or one server each will heal in one week...
On Tue, Apr 18, 2017 at 10:20 AM, Gandalf Corvotempesta
<gandalf.corvotempe...@gmail.com> wrote:
> 2017-04-18 9:17 GMT+02:00 Serkan Çoban <cobanser...@gmail.com>:
>>
when comparing to other disks in set.
>
> I saw in another thread you also had the issue with heal speed, did you face
> any issue in reading data from rest of the good bricks in the set. like slow
> read < KB/s.
>
> On Mon, Apr 17, 2017 at 2:05 PM, Serkan Çoban <cobanser..
Normally I see 8-10MB/sec/brick heal speed with gluster 3.7.11.
I tested parallel heal for disperse with version 3.9.0 and saw that it
increases the heal speed to 20-40MB/sec.
I tested with shd-max-threads 2, 4, and 8 and saw that the best
performance was achieved with 2 or 4 threads.
You can try starting with 2.
Assuming a backup window of 12 hours, you need to send data to the
backup solution at 25GB/s.
Using 10G Ethernet on the hosts, you need at least 25 hosts to handle 25GB/s.
You can create an EC gluster cluster that can handle these rates, or
you can just back up valuable data from inside the VMs using open source
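The arithmetic behind the 25GB/s figure, assuming roughly 1PB to move in the 12-hour window (the data-set size is my inference from the numbers above; 10GbE is ~1.25GB/s per host):

```shell
data_gb=$(( 1080 * 1000 ))        # ~1 PB expressed in GB (assumed data-set size)
window_s=$(( 12 * 3600 ))         # 12-hour backup window in seconds
rate=$(( data_gb / window_s ))    # required aggregate throughput
echo "${rate} GB/s"               # 25 GB/s
# 10GbE line rate is 1.25 GB/s per host; 20 at full line rate, ~25 with headroom:
echo "$(( rate * 100 / 125 )) hosts minimum at line rate"
```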
Please find my comments inline.
> Hi
>
> we have a new gluster cluster we are planning on deploying. We will have 24
> nodes each with JBOD, 39 8TB drives and 6, 900GB SSDs, and FDR IB
>
> We will not be using all of this as one volume , but I thought initially of
> using a distributed disperse
, 2017 at 11:54 AM, qingwei wei <tcheng...@gmail.com> wrote:
> Hi Serkan,
>
> Thanks for the information. So 150 bricks should still be good. So
> what number of bricks is consider excessive?
>
> Cw
>
> On Mon, Mar 6, 2017 at 3:14 PM, Serkan Çoban <cobanser...@
Putting lots of bricks in a volume has side effects: slow metadata
operations, slow gluster command execution, etc.
But 150 bricks is not that much.
On Mon, Mar 6, 2017 at 9:41 AM, qingwei wei wrote:
> Hi,
>
> Is there hard limit on the maximum number of bricks per Gluster
>
I think gluster1 and gluster2 became a replica pair. The smallest size
between them is the effective size (1GB).
Same for gluster3 and gluster4 (3GB). Total 4GB space available. This
is just a guess though...
On Tue, Feb 21, 2017 at 1:18 PM, Daniele Antolini wrote:
> Hi all,
>
> first
AFAIK, LVM is needed only if you use snapshots. Other than that, you
do not need it.
You can use RAID if you don't mind the extra lost space.
You can test both configs and then choose the right one for your workload.
On Sat, Feb 18, 2017 at 10:03 PM, Mahdi Adnan wrote:
>
With a 1GB file size you should definitely try JBOD with disperse volumes.
Gluster can easily reach 1GB/s per-node network throughput using disperse volumes.
We use 26 disks/node without problems and are planning to use 90 disks/node.
I don't think you'll need SSD caching for sequential read-heavy
>Any particular reason for this, other than maximising space by avoiding two
>layers of RAID/redundancy?
Yes, that's right; we can get 720TB net usable space per server with
90x10TB disks. Any RAID layer would cost too much...
On Fri, Feb 17, 2017 at 6:13 PM, Gambit15
>We have 12 on order. Actually the DSS7000 has two nodes in the chassis,
>and each accesses 45 bricks. We will be using an erasure code scheme
>probably 24:3 or 24:4, we have not sat down and really thought about the
>exact scheme we will use.
If we cannot get 1 node/90 disk configuration, we
Hi,
We are evaluating dell DSS7000 chassis with 90 disks.
Has anyone used that much brick per server?
Any suggestions, advices?
Thanks,
Serkan
In balanced-alb mode you should see nearly equal TX sizes, so
something is wrong with your statistics.
RX is balanced by intercepting the MAC address in ARP replies, so in
theory, if you have enough clients, you should have equally balanced RX.
I am also using balanced-alb with 60 gluster servers and nearly
, if I upgrade to 3.9 and it gives problems is it safe to
downgrade to 3.7.11? Any suggestions?
On Sat, Nov 19, 2016 at 9:12 PM, Serkan Çoban <cobanser...@gmail.com> wrote:
> Hi,
>
> Sorry for late reply. I think I will wait for 3.10 LTS release to try
> it. I am on 3.7.11 and it is
No, you should replace the brick.
On Wed, Dec 7, 2016 at 1:02 PM, Cedric Lemarchand wrote:
> Hello,
>
> Is gluster able to auto-heal when some bricks are lost ? by auto-heal I mean
> that losted parity are re-generated on bricks that are still available in
> order to
I think you should try with a bigger file: 1, 10, 100, 1000KB?
Small files might just be getting replicated to bricks... (just a guess...)
On Fri, Nov 25, 2016 at 12:41 PM, Alexandre Blanca
wrote:
> Hi,
>
> I am a beginner in distributed file systems and I currently work on
>
Hi,
Sorry for late reply. I think I will wait for 3.10 LTS release to try
it. I am on 3.7.11 and it is very stable for us.
On Thu, Nov 17, 2016 at 1:05 PM, Pranith Kumar Karampuri
<pkara...@redhat.com> wrote:
>
>
> On Wed, Nov 16, 2016 at 11:47 PM, Serkan Çoban <cobanser...@
The link below has the changes in each release.
https://github.com/gluster/glusterfs/tree/release-3.7/doc/release-notes
On Wed, Nov 16, 2016 at 11:49 AM, songxin wrote:
> Hi,
> I am planning to migrate from gluster 3.7.6 to gluster 3.7.10.
> So I have two questions below.
> 1.How
or zfs and do you have and SSDs in there?
>
> On 9 November 2016 at 06:17, Serkan Çoban <cobanser...@gmail.com> wrote:
>>
>> Hi, I am using 26x8TB disks per server. There are 60 servers in gluster
>> cluster.
>> Each disk is a brick and configuration is 16+4 E
Hi, I am using 26x8TB disks per server. There are 60 servers in the gluster cluster.
Each disk is a brick and the configuration is 16+4 EC, a 9PB single volume.
Clients use fuse mounts.
Even with 1-2K files in a directory, ls from clients takes ~60 secs.
So if you are sensitive to metadata operations,