Re: [Gluster-users] IP based peer probe volume create error

2018-02-07 Thread Lindsay Mathieson
On 8/02/2018 4:45 AM, Gaurav Yadav wrote: After seeing the command history, I could see that you have 3 nodes; first you peer probe 51.15.90.60 and 163.172.151.120 from 51.15.77.14, so at that point you already have a 3-node cluster. After all this you go to node 2 and again peer probe
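
For reference, peer probes only need to be issued from one node - once the pool is formed, re-probing from a second node is redundant. A minimal sketch using the IPs from the thread (volume name and brick paths are placeholders):

    # from 51.15.77.14 only - this already forms the 3-node trusted pool
    gluster peer probe 51.15.90.60
    gluster peer probe 163.172.151.120
    gluster peer status

    # then create the volume from any one node
    gluster volume create gv0 replica 3 \
        51.15.77.14:/data/brick 51.15.90.60:/data/brick 163.172.151.120:/data/brick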

Re: [Gluster-users] Integration of GPU with glusterfs

2018-01-11 Thread Lindsay Mathieson
. Nvidia has banned their GPUs being used in Data Centers now too; I imagine they are planning to add a licensing fee. -- Lindsay Mathieson

Re: [Gluster-users] Gluster 3.8.13 data corruption

2017-10-05 Thread Lindsay Mathieson
Any chance of a backup you could do a bit compare with? Sent from my Windows 10 phone From: Mahdi Adnan Sent: Friday, 6 October 2017 12:26 PM To: gluster-users@gluster.org Subject: [Gluster-users] Gluster 3.8.13 data corruption Hi, We're running Gluster 3.8.13 replica 2 (SSDs), it's used as

Re: [Gluster-users] Performance drop from 3.8 to 3.10

2017-09-22 Thread Lindsay Mathieson
On 22/09/2017 1:21 PM, Krutika Dhananjay wrote: Could you disable cluster.eager-lock and try again? Thanks, but it didn't seem to make any difference. Can't test any more at the moment as we're down a server that hung on reboot :( -- Lindsay Mathieson

[Gluster-users] Performance drop from 3.8 to 3.10

2017-09-21 Thread Lindsay Mathieson
: on performance.low-prio-threads: 32 user.cifs: off performance.flush-behind: on server.event-threads: 4 client.event-threads: 4 server.allow-insecure: on -- Lindsay Mathieson

Re: [Gluster-users] 3.8 Upgrade to 3.10

2017-08-25 Thread Lindsay Mathieson
everything. -- Lindsay Mathieson

[Gluster-users] 3.8 Upgrade to 3.10

2017-08-25 Thread Lindsay Mathieson
I need to offline the volume first? Thanks -- Lindsay Mathieson

Re: [Gluster-users] GlusterFS as virtual machine storage

2017-08-25 Thread Lindsay Mathieson
on the redundancy :) For me, gluster's biggest problem is its lack of flexibility in adding bricks and nodes. And replacing them is an exercise in nail biting. Hoping V4 improves on this, though maybe that will lead to performance trade-offs. -- Lindsay Mathieson

Re: [Gluster-users] GlusterFS as virtual machine storage

2017-08-25 Thread Lindsay Mathieson
on the redundancy :) For me, gluster's biggest problem is its lack of flexibility in adding bricks and nodes. -- Lindsay Mathieson

Re: [Gluster-users] [Gluster-devel] Gluster 4.0: Update

2017-08-24 Thread Lindsay Mathieson
> > This feature (and the patent) is from facebook folks. > > Does that mean it's not a problem? > -- Lindsay

Re: [Gluster-users] Gluster 4.0: Update

2017-08-24 Thread Lindsay Mathieson
I did a quick google to see what Halo Replication was - nice feature, very useful. Unfortunately I also found this: https://www.google.com/patents/US20160028806 >Halo based file system replication >US 20160028806 A1 Is this an issue? On 25 August 2017 at 10:33, Amar Tumballi

Re: [Gluster-users] I need a sanity check.

2017-07-04 Thread Lindsay Mathieson
An arbiter brick is what you need. Sent from my Windows 10 phone From: Ernie Dunbar Sent: Wednesday, 5 July 2017 4:28 AM To: Gluster-users Subject: [Gluster-users] I need a sanity check. Hi everyone! I need a sanity check on our Server Quorum Ratio settings to ensure the maximum uptime for our
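
As context for the suggestion: an arbiter brick stores only file metadata, so it breaks replica-2 split-brain ties without the disk cost of a third full copy. A minimal sketch of creating such a volume (hostnames and paths are placeholders):

    # two full data bricks plus one metadata-only arbiter brick
    gluster volume create gv0 replica 3 arbiter 1 \
        server1:/data/brick server2:/data/brick server3:/data/arbiter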

Re: [Gluster-users] Debian 3.8.12 packages have been updated?

2017-06-20 Thread Lindsay Mathieson
releases. The BZ related to this change is [3] Thanks Kalen -- Lindsay Mathieson

Re: [Gluster-users] [ovirt-users] Very poor GlusterFS performance

2017-06-20 Thread Lindsay Mathieson
Have you tried with: performance.strict-o-direct : off performance.strict-write-ordering : off They can be changed dynamically. On 20 June 2017 at 17:21, Sahina Bose wrote: > [Adding gluster-users] > > On Mon, Jun 19, 2017 at 8:16 PM, Chris Boot wrote: >
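
Both are ordinary volume options, so they can be changed on a live volume without remounting clients. A sketch (the volume name gv0 is a placeholder):

    gluster volume set gv0 performance.strict-o-direct off
    gluster volume set gv0 performance.strict-write-ordering off
    # verify the current value
    gluster volume get gv0 performance.strict-o-direct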

Re: [Gluster-users] Debian 3.8.12 packages have been updated?

2017-06-19 Thread Lindsay Mathieson
On 18/06/2017 12:47 PM, Lindsay Mathieson wrote: I installed 3.8.12 a while back and the packages seem to have been updated since (2017-06-13), prompting me for updates. I haven't seen any release announcements or notes on this though. Bump - new versions are 3.8.12-2. Just curious

Re: [Gluster-users] Debian 3.8.12 packages have been updated?

2017-06-17 Thread Lindsay Mathieson
On 18/06/2017 12:47 PM, Lindsay Mathieson wrote: I installed 3.8.12 a while back and the packages seem to have been updated since (2017-06-13), prompting me for updates. I haven't seen any release announcements or notes on this though. nb: New debian version is 3.8.12-2 -- Lindsay

[Gluster-users] Debian 3.8.12 packages have been updated?

2017-06-17 Thread Lindsay Mathieson
I installed 3.8.12 a while back and the packages seem to have been updated since (2017-06-13), prompting me for updates. I haven't seen any release announcements or notes on this though. -- Lindsay Mathieson

Re: [Gluster-users] How to remove dead peer, sorry, urgent again :(

2017-06-12 Thread Lindsay Mathieson
On 13 June 2017 at 11:15, Atin Mukherjee wrote: > This looks like a bug in the error code as the error message is wrong. > I'll take a look at it and get back. > I had a thought (they do happen) and tried some further testing. root@gh1:~# gluster peer status Number of

Re: [Gluster-users] How to remove dead peer, sorry, urgent again :(

2017-06-12 Thread Lindsay Mathieson
On 13 June 2017 at 02:56, Pranith Kumar Karampuri wrote: > We can also do "gluster peer detach force" right? Just to be sure I set up a test 3 node vm gluster cluster :) then shut down one of the nodes and tried to remove it. root@gh1:~# gluster peer status Number of
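
The detach syntax being tested looks like this; as the thread goes on to show, it errors out for a dead peer that still hosts bricks in a volume (the hostname is a placeholder):

    gluster peer status
    gluster peer detach gh3 force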

Re: [Gluster-users] How to remove dead peer, sorry, urgent again :(

2017-06-12 Thread Lindsay Mathieson
On 13/06/2017 2:56 AM, Pranith Kumar Karampuri wrote: We can also do "gluster peer detach force" right? Tried that, didn't work - threw an error. -- Lindsay Mathieson

Re: [Gluster-users] Urgent :) Procedure for replacing Gluster Node on 3.8.12

2017-06-11 Thread Lindsay Mathieson
On 9/06/2017 5:54 PM, Lindsay Mathieson wrote: I've started the process as above, seems to be going ok - cluster is going to be unusable for the next couple of days. Just as an update - I was mistaken in this, cluster was actually quite usable while this was going on, except for on the new

Re: [Gluster-users] How to remove dead peer, sorry, urgent again :(

2017-06-11 Thread Lindsay Mathieson
ekend. Thanks for all the help, much appreciated. -- Lindsay Mathieson

Re: [Gluster-users] How to remove dead peer, sorry, urgent again :(

2017-06-11 Thread Lindsay Mathieson
sconnected) Hostname: vnb.proxmox.softlog Uuid: 43a1bf8c-3e69-4581-8e16-f2e1462cfc36 State: Peer in Cluster (Connected) Do I just: rm /var/lib/glusterd/peers/de673495-8cb2-4328-ba00-0419357c03d7 On all the live nodes and restart glusterd? Nothing else? thanks. -- Lindsay
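
That matches how glusterd tracks peers: one file per peer under /var/lib/glusterd/peers/, named by the peer's UUID. A sketch of the proposed steps (UUID taken from the thread; the service commands assume systemd):

    # on each surviving node
    systemctl stop glusterd
    rm /var/lib/glusterd/peers/de673495-8cb2-4328-ba00-0419357c03d7
    systemctl start glusterd

    # the dead peer should no longer be listed
    gluster peer status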

Re: [Gluster-users] How to remove dead peer, sorry, urgent again :(

2017-06-11 Thread Lindsay Mathieson
is removed is *after* it has died. Thanks. -- Lindsay Mathieson

Re: [Gluster-users] How to remove dead peer, sorry, urgent again :(

2017-06-10 Thread Lindsay Mathieson
On 11/06/2017 10:57 AM, Lindsay Mathieson wrote: I did a "gluster volume set all cluster.server-quorum-ratio 51%" And that has resolved my issue for now as it allows two servers to form a quorum. Edit :) Actually "gluster volume set all cluster.server-quorum-ratio 50
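
server-quorum-ratio is the percentage of peers that must be reachable before bricks are allowed to run; with 3 peers, a 51% ratio is satisfied by 2 live servers (2/3 ≈ 67%). A sketch (the "all" keyword applies the setting cluster-wide):

    gluster volume set all cluster.server-quorum-ratio 51%
    # on recent releases the value can be read back with:
    gluster volume get all cluster.server-quorum-ratio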

Re: [Gluster-users] How to remove dead peer, sorry, urgent again :(

2017-06-10 Thread Lindsay Mathieson
On 11/06/2017 9:38 AM, Lindsay Mathieson wrote: Since my node died on Friday I have a dead peer (vna) that needs to be removed. I had major issues this morning that I haven't resolved yet, with all VMs going offline when I rebooted a node, which I *hope* was due to quorum issues as I now

Re: [Gluster-users] How to remove dead peer, sorry, urgent again :(

2017-06-10 Thread Lindsay Mathieson
IMHO. -- Lindsay Mathieson

Re: [Gluster-users] How to remove dead peer, sorry, urgent again :(

2017-06-10 Thread Lindsay Mathieson
-f2e1462cfc36 State: Peer in Cluster (Connected) Hostname: vnh.proxmox.softlog Uuid: 9eb54c33-7f79-4a75-bc2b-67111bf3eae7 State: Peer in Cluster (Connected) -- Lindsay Mathieson

[Gluster-users] How to remove dead peer, sorry, urgent again :(

2017-06-10 Thread Lindsay Mathieson
. Confidence level is not high. -- Lindsay Mathieson

Re: [Gluster-users] Urgent :) Procedure for replacing Gluster Node on 3.8.12

2017-06-09 Thread Lindsay Mathieson
e4 I think that should work perfectly fine yes, either that or directly use replace-brick ? Yes, this should be replace-brick Was there any problem in doing it the way I did? -- Lindsay Mathieson

Re: [Gluster-users] Urgent :) Procedure for replacing Gluster Node on 3.8.12

2017-06-09 Thread Lindsay Mathieson
On 9 June 2017 at 17:12, wrote: > Heh, on that, did you think to take a look at the Media_Wearout indicator ? > I recently learned that existed, and it explained A LOT. > Yah, that has been useful in the past for journal/cache SSDs that get a lot of writes. However all

Re: [Gluster-users] Urgent :) Procedure for replacing Gluster Node on 3.8.12

2017-06-08 Thread Lindsay Mathieson
On 9 June 2017 at 10:51, Lindsay Mathieson <lindsay.mathie...@gmail.com> wrote: > Or I should say we *had* a 3 node cluster, one node died today. Boot SSD failed, definitely a reinstall from scratch. And a big thanks (*not*) to the SMART reporting which showed no issues at all. --

[Gluster-users] Urgent :) Procedure for replacing Gluster Node on 3.8.12

2017-06-08 Thread Lindsay Mathieson
Status: We have a 3 node gluster cluster (proxmox based) - gluster 3.8.12 - Replica 3 - VM Hosting Only - Sharded Storage Or I should say we *had* a 3 node cluster, one node died today. Possibly I can recover it, in which case no issues, we just let it heal itself. For now it's running happily on

Re: [Gluster-users] VM going down

2017-05-11 Thread Lindsay Mathieson
On 11/05/2017 9:51 AM, Alessandro Briosi wrote: On one it reports about Leaked clusters but I don't think this might cause the problem (or not?) Should be fine -- Lindsay Mathieson

Re: [Gluster-users] VM going down

2017-05-09 Thread Lindsay Mathieson
"qemu-img check" against it? -- Lindsay Mathieson ___ Gluster-users mailing list Gluster-users@gluster.org http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] www.JamesCoyle.net hijacked?

2017-05-05 Thread Lindsay Mathieson
James Coyle has written a lot of blog entries on gluster and they are often referenced here; in fact I checked his site out recently. But now it's got an invalid SSL cert for cloud.skizzip.com, the content is gone and there's just a logon page for "VESTA" -- Lindsay

Re: [Gluster-users] How to fix heal-failed

2017-05-02 Thread Lindsay Mathieson
. -- Lindsay Mathieson

Re: [Gluster-users] GlusterFS 3.8.10 is available

2017-03-20 Thread Lindsay Mathieson
and will test after that. -- Lindsay Mathieson

Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption

2017-03-17 Thread Lindsay Mathieson
, this was supposed to be fixed in 3.8.10 -- Lindsay Mathieson

Re: [Gluster-users] Is it safe to run RAID0 on a replicate cluster?

2017-03-01 Thread Lindsay Mathieson
how well it would perform with that number of files though. -- Lindsay Mathieson

Re: [Gluster-users] bit-rot resolution

2017-02-28 Thread Lindsay Mathieson
On 1 March 2017 at 17:15, Gandalf Corvotempesta < gandalf.corvotempe...@gmail.com> wrote: > Why not automate this, or add a command that heals automatically > when corruption is found? > > Nobody wants corrupted files on the storage, so why don't you heal > automatically? > Probably because

Re: [Gluster-users] Is it safe to run RAID0 on a replicate cluster?

2017-02-28 Thread Lindsay Mathieson
On 1 March 2017 at 09:20, Ernie Dunbar wrote: > Every node in the Gluster array has their RAID array configured as RAID5, > so I'd like to improve the performance on each node by changing that to > RAID0 instead. Hi Ernie, sorry - saw your question before and meant to

Re: [Gluster-users] Gluster and ZFS: how much RAM?

2017-02-23 Thread Lindsay Mathieson
On 23/02/2017 9:49 PM, Gandalf Corvotempesta wrote: Anyway, is it possible to use the same ZIL partition for multiple bricks/zfs vdevs? I presume you mean slog rather than zil :) The slog is per pool and applies to all vdevs in the pool. -- Lindsay Mathieson
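
Since the separate log (slog) device is attached at the pool level, one fast SSD covers synchronous writes for every vdev in that pool. A sketch (pool and device names are placeholders):

    # add an SSD partition as the pool's separate log device
    zpool add tank log /dev/disk/by-id/ata-ssd-part1
    # it shows up under its own "logs" section
    zpool status tank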

Re: [Gluster-users] Gluster and ZFS: how much RAM?

2017-02-23 Thread Lindsay Mathieson
should be fine with 16GB For most people, running KVM VMs over zfs direct or via gluster over zfs, the l2arc doesn't seem to be much use - very poor hit ratio, less than 6% -- Lindsay Mathieson

Re: [Gluster-users] gluster and multipath

2017-01-27 Thread Lindsay Mathieson
On 27/01/2017 8:54 PM, Alessandro Briosi wrote: I use sharding with 65MB shards, it makes for very fast efficient heals. Just one brick per node, but each brick is 4 disks in ZFS Raid 10 with a fast SSD log device. And oops, I meant 64MB shards -- Lindsay Mathieson
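
Sharding is enabled per volume, and (as the Convert to Shard thread below notes) the block size only applies to newly written files, so it should be set before loading data. A sketch (the volume name is a placeholder):

    gluster volume set gv0 features.shard on
    gluster volume set gv0 features.shard-block-size 64MB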

Re: [Gluster-users] gluster and multipath

2017-01-27 Thread Lindsay Mathieson
On 27/01/2017 8:54 PM, Alessandro Briosi wrote: Proxmox comes with gluster 3.5.2 which does not have arbiter options. Which version are you using? Is it 3.8 stable? 3.8.8, I use the gluster repo. -- Lindsay Mathieson

Re: [Gluster-users] possible kernel panic with glusterd

2017-01-25 Thread Lindsay Mathieson
Could the oom killer be kicking in? Is this a repeatable issue? -- Lindsay Mathieson

Re: [Gluster-users] gluster and multipath

2017-01-24 Thread Lindsay Mathieson
cluster.granular-entry-heal: yes cluster.locking-scheme: granular -- Lindsay Mathieson

Re: [Gluster-users] gluster and multipath

2017-01-24 Thread Lindsay Mathieson
it would work with LACP though. -- Lindsay Mathieson

Re: [Gluster-users] gluster and multipath

2017-01-24 Thread Lindsay Mathieson
and 2 through the others. That should keep working if a switch goes down. Which then is multipath :) I could also use something like keepalived for the master IP to switch between the interfaces, though I'd like multipath more. No need for either. Cheers, -- Lindsay Mathieson

Re: [Gluster-users] nfs service not detected

2017-01-23 Thread Lindsay Mathieson
On 23/01/2017 7:48 PM, Matthew Ma 馬耀堂 (奧圖碼) wrote: I touch a file under /mnt/dev on sgnfs-ser2, but there is nothing in sgnfs-ser1. Don't access files directly on the bricks, rather via the gluster fuse mount, which unifies the access. -- Lindsay Mathieson
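
Brick directories are gluster's private backend; replication and metadata updates only happen through a client mount. A sketch of mounting over fuse (volume name and mountpoint are placeholders; the server names are from the thread):

    mount -t glusterfs sgnfs-ser1:/gv0 /mnt/gluster
    # a file created here is then visible from every client
    touch /mnt/gluster/dev/somefile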

Re: [Gluster-users] ZFS + GlusterFS raid5 low read performance

2017-01-23 Thread Lindsay Mathieson
On 23/01/2017 7:45 PM, Yann Maupu wrote: I updated both the recordsize to 256K on all nodes ZFS Record size? -- Lindsay Mathieson

Re: [Gluster-users] High-availability with KVM?

2017-01-20 Thread Lindsay Mathieson
ely I suspect this would need some sort of meta server. -- Lindsay Mathieson

Re: [Gluster-users] High-availability with KVM?

2017-01-20 Thread Lindsay Mathieson
On 21/01/2017 8:12 AM, Ziemowit Pierzycki wrote: Can I not add a brick that is already used in volume just a different location? I don't think so, I believe you have to erase it, add the brick and let a full heal complete. I'd wait for confirmation from a dev first though. -- Lindsay

Re: [Gluster-users] Convert to Shard - Setting Guidance

2017-01-20 Thread Lindsay Mathieson
individual shards. -- Lindsay Mathieson

Re: [Gluster-users] Convert to Shard - Setting Guidance

2017-01-20 Thread Lindsay Mathieson
On 21/01/2017 12:57 AM, Kevin Lemonnier wrote: It will only affect new files, so you'll need to copy the current images to new names. That's why I was speaking about live migrating the disk to the same volume, that's how I did it last year. Good to know, thanks Kevin. -- Lindsay Mathieson

Re: [Gluster-users] Convert to Shard - Setting Guidance

2017-01-20 Thread Lindsay Mathieson
sizes. One question - how do you plan to convert the VMs? - setup a new volume and copy the VM images to that? - or change the shard setting in place? (I don't think that would work) -- Lindsay Mathieson

Re: [Gluster-users] Convert to Shard - Setting Guidance

2017-01-20 Thread Lindsay Mathieson
th large VM images completes orders of magnitude faster and consumes far less bandwidth/cpu/disk IO -- Lindsay Mathieson

Re: [Gluster-users] Vm migration between diff clusters

2017-01-18 Thread Lindsay Mathieson
ll find that VM automatically ?? > > regs. > > > > > On 01/19/2017 12:34 AM, Lindsay Mathieson wrote: > > On 19/01/2017 9:11 AM, p...@email.cz wrote: > > how can I migrate VM between two different clusters with gluster FS ? ( > 3.5 x 4.0 ) > They have different o

Re: [Gluster-users] Vm migration between diff clusters

2017-01-18 Thread Lindsay Mathieson
. You can take the VM down and copy the image, but I imagine it would be very slow. -- Lindsay Mathieson

Re: [Gluster-users] An other Gluster 3.8 Long-Term-Maintenance update with the 3.8.8 release

2017-01-17 Thread Lindsay Mathieson
On 16 January 2017 at 03:35, Niels de Vos wrote: > The Gluster team has been busy over the end-of-year holidays and this > latest update to the 3.8 [9]Long-Term-Maintenance release intends to fix > quite a number of bugs. Packages have been built for [10]many different >

Re: [Gluster-users] [Gluster-devel] GlusterFS 3.7.19 released

2017-01-11 Thread Lindsay Mathieson
On 11/01/2017 8:13 PM, Kaushal M wrote: [1]: https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.18.md Is there a reason that "performance.strict-o-direct=on" needs to be set for VM Hosting? -- Lindsay Mathieson

Re: [Gluster-users] Replace replicate in single brick

2017-01-10 Thread Lindsay Mathieson
a completely unavailable, missing brick replica? I guess I will find out, since I could put the old disk back in with the errors if it must exist. I would be setting up a test volume with the same config first, putting in some data, then testing the procedure. Then check your data. -- Lindsay

Re: [Gluster-users] performance.flush-behind

2017-01-08 Thread Lindsay Mathieson
Thanks Vijay, good to know. On 9 January 2017 at 09:59, Vijay Bellur <vbel...@redhat.com> wrote: > > > On Sun, Jan 8, 2017 at 5:29 PM, Lindsay Mathieson > <lindsay.mathie...@gmail.com> wrote: >> >> On 8/01/2017 10:38 AM, Lindsay Mathieson wrote: >

Re: [Gluster-users] performance.flush-behind

2017-01-08 Thread Lindsay Mathieson
On 8/01/2017 10:38 AM, Lindsay Mathieson wrote: Option: performance.flush-behind Default Value: on Description: If this option is set ON, instructs write-behind translator to perform flush in background, by returning success (or any errors, if any of previous writes were failed

[Gluster-users] performance.flush-behind

2017-01-07 Thread Lindsay Mathieson
filesystem. Does this mean that essentially fsync is disabled by default? -- Lindsay Mathieson
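
Per the option description quoted in the follow-up above, flush-behind makes the write-behind translator acknowledge a flush before it completes on the bricks. It can be inspected and turned off per volume; a sketch (the volume name is a placeholder):

    gluster volume get gv0 performance.flush-behind
    # only acknowledge flush once it has actually completed
    gluster volume set gv0 performance.flush-behind off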

Re: [Gluster-users] Cheers and some thoughts

2017-01-05 Thread Lindsay Mathieson
On 6 January 2017 at 08:42, Michael Watters wrote: > Have you done comparisons against Lustre? From what I've seen Lustre > performance is 2x faster than a replicated gluster volume. No Lustre packages for Debian, and I really dislike installing from source for production

[Gluster-users] Cheers and some thoughts

2017-01-04 Thread Lindsay Mathieson
this could be fitted to the existing gluster, but is there potential for this sort of thing in Gluster 4.0? I read the design docs and they look ambitious. Cheers, -- Lindsay Mathieson

Re: [Gluster-users] Gluster performance advice

2017-01-03 Thread Lindsay Mathieson
of storage or are you looking for data redundancy? Or both? Could you post the "gluster volume info" output? -- Lindsay Mathieson

Re: [Gluster-users] On Gluster resiliency

2016-12-24 Thread Lindsay Mathieson
a server down for 24 hours and yes, no loss of data and nobody noticed. Once restored, the server healed up in a couple of hours. Gluster 3.8.7 Cheers everyone. -- Lindsay Mathieson

[Gluster-users] 3.8.7 debian

2016-12-20 Thread Lindsay Mathieson
(there were issues with 3.8.5). Big thanks to the devs, cheers and happy holidays/new year etc :) -- Lindsay Mathieson

Re: [Gluster-users] Announcing Gluster 3.9

2016-11-28 Thread Lindsay Mathieson
/upgrade_to_3.9/ Good to see this clearly documented; the rolling upgrade is pretty much what I have done in the past. -- Lindsay Mathieson

Re: [Gluster-users] How to shrink replicated volume from 3 to 2 nodes?

2016-11-27 Thread Lindsay Mathieson
at 12:29 PM, Lindsay Mathieson <lindsay.mathie...@gmail.com> wrote: On 27/11/2016 7:28 PM, Alexandr Porunov wrote: # Above command showed success but in reality brick is still in the cluster. What makes you think thi

Re: [Gluster-users] How to shrink replicated volume from 3 to 2 nodes?

2016-11-27 Thread Lindsay Mathieson
On 27/11/2016 7:28 PM, Alexandr Porunov wrote: # Above command showed success but in reality brick is still in the cluster. What makes you think this? What does a "gluster v gv0" show? -- Lindsay Mathieson

Re: [Gluster-users] Unable to start/deploy VMs after Qemu/Gluster upgrade to 2.0.0+dfsg-2ubuntu1.28glusterfs3.7.17trusty1

2016-11-25 Thread Lindsay Mathieson
On 26/11/2016 1:47 AM, Lindsay Mathieson wrote: On 26/11/2016 1:11 AM, Martin Toth wrote: Qemu is fromhttps://launchpad.net/~monotek/+archive/ubuntu/qemu-glusterfs-3.7 Why? Proxmox qemu already has gluster support built in. Ooops, sorry, wrong list - thought this was the proxmox list

Re: [Gluster-users] Unable to start/deploy VMs after Qemu/Gluster upgrade to 2.0.0+dfsg-2ubuntu1.28glusterfs3.7.17trusty1

2016-11-25 Thread Lindsay Mathieson
On 26/11/2016 1:11 AM, Martin Toth wrote: Qemu is from https://launchpad.net/~monotek/+archive/ubuntu/qemu-glusterfs-3.7 Why? Proxmox qemu already has gluster support built in. -- Lindsay Mathieson

[Gluster-users] Can a 3.8 fuse client mount a 3.9 volume?

2016-11-19 Thread Lindsay Mathieson
4-linux-gnu/libpthread.so.0(+0x80a4) [0x7f0cb43640a4] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x5576024f3a65] -->/usr/sbin/glusterfs(cleanup_and_exit+0x57) [0x5576024f38d7] ) 0-: received signum (15), shutting down [2016-11-20 01:37:02.576484] I [fuse-bridge.c:5793:fini] 0-fuse:

Re: [Gluster-users] corruption using gluster and iSCSI with LIO

2016-11-17 Thread Lindsay Mathieson
On 18/11/2016 6:00 AM, Olivier Lambert wrote: First off, thanks for this great product :) I have a corruption issue when using Glusterfs with LIO iSCSI target: Could you post the results of: gluster volume info gluster volume status thanks -- Lindsay Mathieson

Re: [Gluster-users] Comparison with other SDS

2016-11-14 Thread Lindsay Mathieson
. It would be interesting to know if that's an optimization gluster does. -- Lindsay Mathieson

[Gluster-users] Feature Request: Lock Volume Settings

2016-11-13 Thread Lindsay Mathieson
As discussed recently, it is way too easy to make destructive changes to a volume, e.g. change shard size. This can corrupt the data with no warnings, and it's all too easy to make a typo or access the wrong volume when doing 3am maintenance ... So I'd like to suggest something like the following:

Re: [Gluster-users] 3.7.16 with sharding corrupts VMDK files when adding and removing bricks

2016-11-12 Thread Lindsay Mathieson
and no warning at all. gluster volume reset *finger twitch* And boom! volume gone. Feature request: Ability to *lock* volume settings -- Lindsay Mathieson

Re: [Gluster-users] qemu2.7 with gluster 2.8

2016-11-11 Thread Lindsay Mathieson
involving file descriptors and gfapi access, which is being worked on. Semi-related: Latest qemu (2.7.0-6) from pve-nosubscription is broken for snapshots. Unrelated to Gluster. -- Lindsay Mathieson

Re: [Gluster-users] Looking for use cases / opinions

2016-11-08 Thread Lindsay Mathieson
On 8/11/2016 11:43 PM, Lindsay Mathieson wrote: Are you looking at replication (2 or 3)/disperse or pure disperse? I need to go to bed ... I meant Distributed/Replicated :) or Distributed/Disperse -- Lindsay Mathieson

Re: [Gluster-users] Looking for use cases / opinions

2016-11-08 Thread Lindsay Mathieson
On 8/11/2016 11:38 PM, Thomas Wakefield wrote: High Performance Computing, we have a small cluster on campus of about 50 linux compute servers. D'oh! I should have thought of that. Are you looking at replication (2 or 3)/disperse or pure disperse? -- Lindsay Mathieson

Re: [Gluster-users] Looking for use cases / opinions

2016-11-08 Thread Lindsay Mathieson
On 8/11/2016 9:58 PM, Thomas Wakefield wrote: Still looking for use cases and opinions for Gluster in an education / HPC environment. Thanks. Sorry, what's an HPC environment? -- Lindsay Mathieson

Re: [Gluster-users] Block storage with Qemu-Tcmu

2016-11-07 Thread Lindsay Mathieson
On 7 November 2016 at 17:01, Prasanna Kalever wrote: > Yet another approach to achieve Gluster Block Storage is with Qemu-Tcmu. Thanks Prasanna, interesting reading. From a quick scan, there doesn't seem to be any particular advantage over qemu using gfapi directly? Is

Re: [Gluster-users] Improving IOPS

2016-11-05 Thread Lindsay Mathieson
, better use of ZIL and SLOG. Lower memory requirements. -- Lindsay Mathieson

Re: [Gluster-users] Improving IOPS

2016-11-05 Thread Lindsay Mathieson
only uses zlib (no lz4) -- Lindsay Mathieson

Re: [Gluster-users] Improving IOPS

2016-11-05 Thread Lindsay Mathieson
. Yah, I get that. For me, willing to risk losing the entire gluster node and having to resync it, I see the odds as pretty low vs just losing one disk in the RAID10 set and resilvering it locally. -- Lindsay Mathieson

Re: [Gluster-users] Improving IOPS

2016-11-05 Thread Lindsay Mathieson
slog is only ever needed to be read from in the event of failure before the zil is committed. -- Lindsay Mathieson

Re: [Gluster-users] Improving IOPS

2016-11-05 Thread Lindsay Mathieson
, half a dozen of the other. With RAIDZ10 I can lose one disk, maybe two, without losing the brick. Replacing the disk is very easy and fast. Resilvering one disk in a raid10 set is a lot quicker and has less impact than resyncing a brick via the network. -- Lindsay Mathieson

Re: [Gluster-users] Improving IOPS

2016-11-05 Thread Lindsay Mathieson
writes and 1-1.5GB/s reads, although the pair of 850s I’m caching on probably max out around 1.2GB/s. -- Lindsay Mathieson

Re: [Gluster-users] Improving IOPS

2016-11-03 Thread Lindsay Mathieson
On 4 November 2016 at 14:35, Krutika Dhananjay wrote: > It will be available in 3.9 (and latest > upstream master too) if you're interested to try it out but > DO NOT use it in production yet. It may have some stability > issues as it hasn't been thoroughly tested. > > You

Re: [Gluster-users] Improving IOPS

2016-11-03 Thread Lindsay Mathieson
On 4 November 2016 at 03:38, Gambit15 wrote: > There are lots of factors involved. Can you describe your setup & use case a > little more? Replica 3 Cluster. Individual Bricks are RAIDZ10 (zfs) that can manage 450 MB/s write, 1.2GB/s Read. - 2 * 1GB Bond, Balance-alb -

Re: [Gluster-users] Shared Heal Times

2016-11-02 Thread Lindsay Mathieson
cluster.granular-entry-heal: yes cluster.locking-scheme: granular -- Lindsay Mathieson

[Gluster-users] Improving IOPS

2016-11-01 Thread Lindsay Mathieson
* 1G). -- Lindsay Mathieson

[Gluster-users] Shared Heal Times

2016-11-01 Thread Lindsay Mathieson
ever works. thanks, -- Lindsay Mathieson

Re: [Gluster-users] Performance

2016-10-31 Thread Lindsay Mathieson
production, but would like to see the results. -- Lindsay Mathieson

Re: [Gluster-users] ZFS, ZIL and ARC

2016-10-30 Thread Lindsay Mathieson
. and: arc_summary.py -p 2 for stats on your ARC cache and arc_summary.py -p 3 for your l2arc cache -- Lindsay Mathieson
