Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Gandalf Corvotempesta
2016-10-27 0:14 GMT+02:00 Joe Julian : > For now I can say that gluster performs better and has a much better > worst-case resolution. If everything else goes to hell, I have disks with > files on them that I can recover on a laptop if I have to. Totally agree. > Of course when you ask the Inkta

Re: [Gluster-users] Gfapi memleaks, all versions

2016-10-26 Thread Pranith Kumar Karampuri
+Prasanna Prasanna changed qemu code to reuse the glfs object for adding multiple disks from the same volume using refcounting. So the memory usage went down from 2GB to 200MB in the case he targeted. Wondering if the same can be done for this case too. Prasanna, could you let us know if we can use r

Re: [Gluster-users] bitrot log messages

2016-10-26 Thread Kotresh Hiremath Ravishankar
Hi Jackie, Here is the sample output of scrub status where two files are corrupted. root@FEDORA2:$gluster vol bitrot master scrub status Volume name : master State of scrub: Active (Idle) Scrub impact: lazy Scrub frequency: biweekly Bitrot error log location: /var/log/glusterfs/bitd.log Scr
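
A minimal sketch of how this could be scripted, assuming a volume named "master" and the default bitrot log path shown above; the exact wording of the log entries varies by GlusterFS version, so the grep pattern below is only an illustration:

    # Poll the scrubber state and error count for the volume
    gluster volume bitrot master scrub status

    # Corrupted objects are also reported in the bitrot daemon log, so a
    # monitoring job can watch it (pattern is an assumption, adjust per version)
    grep -iE 'corrupt|bad file' /var/log/glusterfs/bitd.log | tail -n 20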

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Joe Julian
On 10/26/2016 03:42 PM, Lindsay Mathieson wrote: On 27/10/2016 8:14 AM, Joe Julian wrote: To be fair, though, I can't blame ceph. We had a cascading hardware failure with those storage trays. Even still, if it had been gluster - I would have had files on disks. Ouch :( In that regard how

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Lindsay Mathieson
On 27/10/2016 8:14 AM, Joe Julian wrote: To be fair, though, I can't blame ceph. We had a cascading hardware failure with those storage trays. Even still, if it had been gluster - I would have had files on disks. Ouch :( In that regard how do you view sharding? why not as simple as pulling

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Joe Julian
On 10/26/2016 02:54 PM, Lindsay Mathieson wrote: Maybe a controversial question (and hopefully not trolling), but any particular reason you choose gluster over ceph for these larger setups, Joe? For myself, gluster is much easier to manage and provides better performance on my small non-ente

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Lindsay Mathieson
Maybe a controversial question (and hopefully not trolling), but any particular reason you choose gluster over ceph for these larger setups, Joe? For myself, gluster is much easier to manage and provides better performance on my small non-enterprise setup, plus it plays nice with zfs. But I

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Joe Julian
On 10/26/2016 02:12 PM, Gandalf Corvotempesta wrote: 2016-10-26 23:07 GMT+02:00 Joe Julian : And yes, they can fail, but 20TB is small enough to heal pretty quickly. 20TB small enough to build quickly? On which network? Gluster doesn't have a dedicated cluster network; if the cluster is being

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Joe Julian
On 10/26/2016 02:13 PM, Gandalf Corvotempesta wrote: 2016-10-26 23:09 GMT+02:00 Joe Julian : Open Compute WiWynn Knox trays. I don't recommend them but they are pretty. https://goo.gl/photos/tmkRE58xKKaWKdL96 What are you hosting on that huge cluster? 10GB network I suppose. Nothing, anymore. W

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Gandalf Corvotempesta
2016-10-26 23:09 GMT+02:00 Joe Julian : > Open Compute WiWynn Knox trays. I don't recommend them but they are pretty. > https://goo.gl/photos/tmkRE58xKKaWKdL96 What are you hosting on that huge cluster? 10GB network I suppose.

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Gandalf Corvotempesta
2016-10-26 23:07 GMT+02:00 Joe Julian : > And yes, they can fail, but 20TB is small enough to heal pretty quickly. 20TB small enough to build quickly? On which network? Gluster doesn't have a dedicated cluster network; if the cluster is being heavily accessed, the healing will slow down everything

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Joe Julian
On 10/26/2016 02:06 PM, Gandalf Corvotempesta wrote: 2016-10-26 23:04 GMT+02:00 Joe Julian : I just add enough disks to saturate (and I don't like zfs, personally) per-brick. So with 30 disks on a server, I typically do 5-disk raid-0 and create 6 bricks per server. 30 disks per server? which ch
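
As a rough illustration of that layout (not Joe's exact commands; device names, mount points and hostnames are hypothetical), one 5-disk raid0 brick and a replica 3 volume could be built like this:

    # One 5-disk raid0 set per brick, formatted XFS
    mdadm --create /dev/md0 --level=0 --raid-devices=5 /dev/sd[b-f]
    mkfs.xfs -i size=512 /dev/md0
    mkdir -p /bricks/brick1 && mount /dev/md0 /bricks/brick1

    # One such brick from each of three servers forms a replica 3 set
    gluster volume create vol1 replica 3 \
        server1:/bricks/brick1/data \
        server2:/bricks/brick1/data \
        server3:/bricks/brick1/data
    gluster volume start vol1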

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Joe Julian
On 10/26/2016 02:04 PM, Joe Julian wrote: On 10/26/2016 02:02 PM, Gandalf Corvotempesta wrote: 2016-10-26 22:59 GMT+02:00 Joe Julian : Personally, I prefer raid0 bricks just to get the throughput to saturate my network, then I use replicate to meet my availability requirements (typically rep

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Gandalf Corvotempesta
2016-10-26 23:04 GMT+02:00 Joe Julian : > I just add enough disks to saturate (and I don't like zfs, personally) > per-brick. So with 30 disks on a server, I typically do 5-disk raid-0 and > create 6 bricks per server. 30 disks per server? Which chassis are you using? Why don't you like ZFS? I adm

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Lindsay Mathieson
On 27/10/2016 6:59 AM, Joe Julian wrote: Personally, I prefer raid0 bricks just to get the throughput to saturate my network, then I use replicate to meet my availability requirements (typically replica 3). My network is the limiting factor already :( Only 1G * 3 Bond. Cheap and nasty D-Link
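
For anyone comparing a similar setup, the active bond mode and per-link state can be read from /proc (assuming the bond interface is named bond0). Note that with most Linux bonding modes a single TCP stream, such as one client-to-brick connection, is still capped at one link's speed; only aggregate traffic benefits from the 3x:

    # Shows bonding mode, slave/link status and negotiated speeds
    cat /proc/net/bonding/bond0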

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Joe Julian
On 10/26/2016 02:02 PM, Gandalf Corvotempesta wrote: 2016-10-26 22:59 GMT+02:00 Joe Julian : Personally, I prefer raid0 bricks just to get the throughput to saturate my network, then I use replicate to meet my availability requirements (typically replica 3). Isn't the ZFS cache on SSD enough to

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Lindsay Mathieson
On 27/10/2016 6:58 AM, mabi wrote: Sorry yes I meant vmstat, I was doing too much ionice/iostat today ;) Right now it's averaging at 45000. Low load though. -- Lindsay Mathieson

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Gandalf Corvotempesta
2016-10-26 22:59 GMT+02:00 Joe Julian : > Personally, I prefer raid0 bricks just to get the throughput to saturate my > network, then I use replicate to meet my availability requirements > (typically replica 3). Isn't the ZFS cache on SSD enough to saturate the network? I'll use replica 3, but I d

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Lindsay Mathieson
On 27/10/2016 6:56 AM, Gandalf Corvotempesta wrote: Velociraptors: are they still around? I heard they were EOL'd a couple of years ago. Legacy hardware :) I must admit they last really well. -- Lindsay Mathieson

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Joe Julian
On 10/26/2016 01:56 PM, Gandalf Corvotempesta wrote: 2016-10-26 22:31 GMT+02:00 Lindsay Mathieson : Yah, RAID10. - Two nodes with 4 WD 3TB RED I really hate RAID10. Personally, I prefer raid0 bricks just to get the throughput to saturate my network, then I use replicate to meet my availabil

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread mabi
Sorry yes I meant vmstat, I was doing too much ionice/iostat today ;) Original Message Subject: Re: [Gluster-users] Production cluster planning Local Time: October 26, 2016 10:56 PM UTC Time: October 26, 2016 8:56 PM From: lindsay.mathie...@gmail.com To: gluster-users@glus

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Gandalf Corvotempesta
2016-10-26 22:31 GMT+02:00 Lindsay Mathieson : > Yah, RAID10. > > - Two nodes with 4 WD 3TB RED I really hate RAID10. I'm evaluating 2 RAIDZ2 on each gluster node (12 disks: 6+6 on each RAIDZ2) or one huge RAIDZ3 with 12 disks. The biggest drawback with RAIDZ is that it is impossible to add disks to a
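
A quick sketch of the two layouts being weighed (pool and disk names are placeholders). Growing the pool later only works by adding another complete vdev, not by adding single disks to an existing RAIDZ vdev:

    # Two 6-disk RAIDZ2 vdevs in one pool
    zpool create tank raidz2 sdb sdc sdd sde sdf sdg \
                      raidz2 sdh sdi sdj sdk sdl sdm

    # Alternative: one 12-disk RAIDZ3 vdev
    # zpool create tank raidz3 sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm

    # Expansion means adding a whole new vdev of the same shape
    zpool add tank raidz2 sdn sdo sdp sdq sdr sds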

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Lindsay Mathieson
On 27/10/2016 6:35 AM, mabi wrote: I was wondering with your setup you mention, how high are your context switches? I mean what is your typical average context switch and what are your highest context switch peaks (as seen in iostat). Wouldn't that be vmstat? -- Lindsay Mathieson
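
For reference, the context-switch rate being asked about shows up in vmstat's "cs" column; a minimal way to watch it:

    # One sample every 5 seconds; "cs" is context switches per second
    vmstat 5

    # The cumulative counter is also available directly
    grep ctxt /proc/stat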

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Lindsay Mathieson
On 27/10/2016 3:53 AM, Gandalf Corvotempesta wrote: Are you using any ZFS RAID on your servers? Yah, RAID10. - Two nodes with 4 WD 3TB RED - One node with 2 WD 3TB REDs and 6 * 600GB SAS Velociraptors. High Endurance High Speed SSDs for SLOG devices on each node. Std SSDs don't cut it, I've
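
For anyone reproducing that layout, attaching a dedicated SLOG (and optionally an L2ARC) to an existing pool is a one-liner per device; the pool and device names below are placeholders, and mirroring the SLOG is optional but common:

    # Mirrored intent-log (SLOG) on two SSDs
    zpool add tank log mirror sdx sdy

    # Optional L2ARC read cache on a third SSD
    zpool add tank cache sdz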

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread mabi
I was wondering with your setup you mention, how high are your context switches? I mean what is your typical average context switch and what are your highest context switch peaks (as seen in iostat). Best, M. Original Message Subject: Re: [Gluster-users] Production clus

Re: [Gluster-users] Production cluster planning

2016-10-26 Thread Gandalf Corvotempesta
2016-10-05 23:48 GMT+02:00 Lindsay Mathieson : > It's enough? I also run 10 Windows VMs per node. > > My servers typically run at 4-6% max ioload. They idle under 1% Are you using any ZFS RAID on your servers?

Re: [Gluster-users] Please help

2016-10-26 Thread Nithya Balachandran
On 26 October 2016 at 19:47, Leung, Alex (398C) wrote: > Does anyone have any idea how to troubleshoot the following problem? > > > > Alex > > > Can you please provide the gluster client logs (in /var/log/glusterfs) and the gluster volume info? Regards, Nithya > > > [root@pdsimg-6 alex]# rsync -a
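
For reference, gathering what Nithya asks for looks roughly like this; the client log file is named after the mount path, so the exact filename will differ per system:

    # Volume layout and options
    gluster volume info

    # FUSE client log, named after the mount point
    # (e.g. a mount at /mnt/gluster logs to /var/log/glusterfs/mnt-gluster.log)
    tail -n 200 /var/log/glusterfs/mnt-gluster.log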

Re: [Gluster-users] bitrot log messages

2016-10-26 Thread Jackie Tung
That’s great, thank you! We plan to automatically look for these signs. For “scrub status”, do you have a sample output for a positive hit you could share easily? > On Oct 26, 2016, at 12:05 AM, Kotresh Hiremath Ravishankar > wrote: > > Correcting the command..I had missed 'scrub' keyword. >

[Gluster-users] Please help

2016-10-26 Thread Leung, Alex (398C)
Does anyone have any idea how to troubleshoot the following problem? Alex [root@pdsimg-6 alex]# rsync -av pdsraid1:/export/pdsdata1/mro/safed/rsds/09000_0_ops-120210/crism/ops_crism_vnir_09000_0/ . root@pdsraid1's password: receiving incremental file list ./ 4A_03_B42000_01.DAT 4A_03_0

[Gluster-users] Weekly community meeting - 26-Oct-2016

2016-10-26 Thread Kaushal M
Hi all! This week's meeting was GD. We got rid of the regular updates and just had an open floor. This had the intended effect of more conversations. We discussed 3 main topics today: - How do we recognize contributors and their contributions [manikandan] - What's happening with memory manag

Re: [Gluster-users] bitrot log messages

2016-10-26 Thread Kotresh Hiremath Ravishankar
Correcting the command: I had missed the 'scrub' keyword. "gluster vol bitrot scrub status" Thanks and Regards, Kotresh H R - Original Message - > From: "Kotresh Hiremath Ravishankar" > To: "Jackie Tung" > Cc: gluster-users@gluster.org > Sent: Wednesday, October 26, 2016 10:53:50 AM > Sub